Automation framework evaluation

In order to automatically service and track test requests, as well as deploy testing resources, an upper-level test automation framework would sit "on top of" the Lustre test infrastructure.

Some requirements/desires for the automation framework:

  • Aware of multiple clusters
  • Able to create virtual clusters of VMs
  • Able to automatically start testing based on various triggers, e.g. git commit hooks (see the sketch after this list)
  • Maintains a prioritizable job queue
  • Collects test output and status in a database
  • Visually represents pass/failure in a clear, concise manner
  • Facilitates easy interpretation of test "trends", i.e. statistics-based views of test results
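
One of the triggers mentioned above is a git commit hook. As a purely illustrative sketch, a post-receive hook could enqueue a test job for every pushed ref; the queue endpoint URL and payload fields below are assumptions, not part of any existing OpenSFS infrastructure.

 #!/usr/bin/env python3
 # Hypothetical git post-receive hook that enqueues a test job for each
 # pushed ref.  The queue endpoint and payload field names are assumptions.
 import json
 import sys
 import urllib.request

 QUEUE_URL = "http://test-queue.example.org/jobs"   # hypothetical endpoint

 def enqueue(old_rev, new_rev, ref_name):
     payload = json.dumps({
         "trigger": "git-push",
         "ref": ref_name,
         "revision": new_rev,
         "priority": "normal",
     }).encode()
     req = urllib.request.Request(QUEUE_URL, data=payload,
                                  headers={"Content-Type": "application/json"})
     urllib.request.urlopen(req)

 # git feeds one "old-rev new-rev ref-name" line per updated ref on stdin
 for line in sys.stdin:
     old, new, ref = line.split()
     enqueue(old, new, ref)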



Each framework below is evaluated against the following feature list, followed by its main advantages and disadvantages:

  1. Do tests describe their intent clearly?
  2. Is system information gathered?
  3. Are results readable by both humans and machines?
  4. Are the run and development processes fast?
  5. Can it be run on a cluster in parallel?
autotest

Autotest is designed primarily to test the Linux kernel, though it is useful for many other tasks such as qualifying new hardware. It is used and developed by a number of organizations, including Google, IBM, Red Hat, and many others. It is developed in Python. A minimal control file sketch follows the lists below.

  1. Tests have meta information, but the framework does not use it.
  2. Yes; several types of system information are collected, including profiling and crash data.
  3. Yes; it has its own easy-to-parse text report, an HTML report, and optionally TAP output. It also has its own SQL-based reporting database with a limited UI.
  4. Yes.
  5. Yes, with some limited programming.

Advantages:

  • Ready for testing the Linux kernel; supports some logging services and crash dumps
  • Under active development
  • Can work in "client" mode with minimal setup
  • Has good internal unit test coverage

In server mode:

  • Ready to work on sets of nodes, selecting nodes based on marks/labels
  • Has job queue support
  • Has a basic web UI and a test result database

Disadvantages:

  • Very limited test selection: include list, exclude list, and tagging support
  • The web UI is fairly simple
  • The test structure is fairly complex
  • In server mode, does not support configurations for multi-node execution where hosts play different roles; the configuration needs improvement
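
To illustrate how autotest invokes tests and carries their meta information, here is a control file sketch modelled on the stock sleeptest example that ships with autotest. The metadata values are illustrative, and the job object is injected by the autotest harness rather than defined in the file.

 # Sketch of an autotest client control file, modelled on the stock
 # 'sleeptest' example.  The metadata variables are read by the framework;
 # the 'job' object is provided by the autotest harness at run time.
 AUTHOR = "example@example.org"          # placeholder author
 NAME = "sleeptest"
 TIME = "SHORT"
 TEST_CATEGORY = "Functional"
 TEST_CLASS = "General"
 TEST_TYPE = "client"
 DOC = """
 Sleeps for one second and passes; useful as a smoke test of the harness.
 """

 # Run the client-side test named 'sleeptest' with one keyword argument.
 job.run_test('sleeptest', seconds=1)
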
STAF

The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework led by IBM; its core is developed in C. You can interact with STAF from many languages (Java, C, C++, Python, Perl, Tcl, Rexx) and from the command line/shell prompt. Strictly speaking, it is not a test framework but a set of services and libraries for building your own tests (a small usage sketch follows the lists below).

  1. No test management is provided.
  2. Only a few pieces of host information are collected.
  3. -
  4. -
  5. Yes, with some limited programming.

Advantages:

  • Multi-platform: works on Windows, Linux, AIX
  • Tests can be written in many languages
  • Has a UI for working with the server remotely
  • Has security levels

Disadvantages:

  • Does not have Linux kernel-specific functionality or kernel crash handling (special proxies or services may exist)
  • Does not support configurations for multi-node execution where hosts play different roles; the configuration needs improvement
  • Does not have a test management level
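
As a small illustration of the "services and libraries" approach, the sketch below drives STAF from Python through the PySTAF bindings that ship with STAF. The class and method names are quoted from memory of the STAF Python documentation, and the remote host name is a placeholder; verify against the STAF docs before relying on it.

 # Minimal sketch of calling STAF services from Python via the PySTAF
 # bindings (names as remembered from the STAF Python user guide).
 from PySTAF import STAFHandle

 handle = STAFHandle("lustre-eval")   # register with the local STAF daemon

 # Sanity check: the PING service of the local STAFProc answers "PONG".
 result = handle.submit("local", "ping", "ping")
 assert result.rc == 0 and result.result == "PONG"   # rc 0 is STAF's "Ok"

 # Run a command on a remote test node (placeholder host name) and collect
 # its stdout through the PROCESS service.
 result = handle.submit(
     "client1.example.org", "process",
     'START SHELL COMMAND "uname -r" WAIT RETURNSTDOUT STDERRTOSTDOUT')
 print(result.rc, result.result)

 handle.unregister()
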
robotframework

Robot Framework is a Python-based, open source, generic test automation framework.

  • General purpose testing framework
  • Supports multiple testing methodologies, such as data-driven test cases
  • Provides a command line interface and XML-based output files for integration into build infrastructure
  • Has built-in support for variables, which is useful for testing in different environments
  • Provides tagging to categorize and select test cases to be run
  • Provides test-case and test-suite level setups and teardowns
  • Provides an advanced keyword-driven framework; keywords are a synonym for a "function" doing "something"
  • Provides the ability to create reusable/custom higher-level keywords from existing keywords
  • Easily extendable in Python, C#, and Java (a keyword library sketch follows this list)
  • Does not run tests in parallel, but multiple instances can be run to emulate it
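
To show what "easily extendable by Python" means in practice, here is a hypothetical keyword library: Robot Framework turns the public methods of such a class into keywords that test cases can call. The Lustre-specific keyword names and checks below are placeholders, not existing keywords.

 # Hypothetical Robot Framework keyword library (LustreLibrary.py).
 # Public methods of this class become keywords in test cases.
 import subprocess

 class LustreLibrary:
     """Illustrative keyword library with placeholder Lustre checks."""

     ROBOT_LIBRARY_SCOPE = "GLOBAL"

     def run_shell_command(self, command):
         """Run a shell command and return its stdout; fail on non-zero exit."""
         proc = subprocess.run(command, shell=True, capture_output=True, text=True)
         if proc.returncode != 0:
             raise AssertionError(
                 "Command %r failed (rc=%d): %s" % (command, proc.returncode, proc.stderr))
         return proc.stdout

     def lustre_module_should_be_loaded(self):
         """Fail unless the 'lustre' kernel module appears in lsmod output."""
         modules = self.run_shell_command("lsmod")
         if "lustre" not in modules:
             raise AssertionError("lustre kernel module is not loaded")

A test suite could import this file with the Library setting and then call Lustre Module Should Be Loaded from test cases, setups, or teardowns, combining it with tags for test selection.
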
Xperior

Xperior is an open source framework developed in Perl by Xyratex for executing Lustre tests from the current shell-based testing framework. More information was presented at LAD '12.

  1. Tests can be described; about 10 fields are currently used.
  2. Only node status before a run and lustre-diagnostics after a test failure. There are also a few extensions for reformatting Lustre after every test, coverage collection, and console log collection via netconsole; more can easily be added.
  3. Yes; results are in YAML format, with a few converters to JUnit, HTML, and TAP output documents (a post-processing sketch follows at the end of this section).
  4. Yes.
  5. Yes, with an external tool like Jenkins.

Advantages:

  • Has a separate Lustre configuration
  • Has some knowledge of the current Lustre test structure
  • Compatible with most of the acc-small Lustre test set; allows configuring some test parameters (e.g. per-test execution timeout)
  • Can be extended with new executors
  • Test results are per-test YAML files, and an HTML report can be generated
  • Can collect some logs from the systems under test on a per-test basis, and this can easily be extended; there is also a script for uploading results to a MongoDB database
  • Has good internal unit test coverage
  • Provides tagging to categorize and select test cases to be run, plus include/exclude lists
  • Provides test-case and test-suite level setup; properties are inherited from suite to tests

Disadvantages:

  • Does not directly manage nodes, Lustre mounts, or cluster status
  • No security
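
Because results are stored as per-test YAML files, simple post-processing tools are easy to write. The sketch below is hypothetical: the "status" field name and the results directory are assumptions for illustration, not Xperior's actual schema, and it requires PyYAML.

 # Hedged sketch of summarizing per-test YAML result files with PyYAML.
 # Field names and paths are placeholders, not Xperior's real schema.
 import glob
 import yaml

 def summarize(result_dir):
     """Count results by status over all per-test YAML files in a directory."""
     counts = {}
     for path in glob.glob(result_dir + "/*.yaml"):
         with open(path) as f:
             record = yaml.safe_load(f)
         status = record.get("status", "unknown")   # hypothetical field name
         counts[status] = counts.get(status, 0) + 1
     return counts

 if __name__ == "__main__":
     print(summarize("/var/lib/xperior/results"))   # placeholder path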