
Testing in the Grid

Moran Avigdor & Igor Goldenberg, 2007.


Presentation Transcript


  1. Testing in the Grid. Moran Avigdor & Igor Goldenberg, 2007. Member of the IGT

  2. About GigaSpaces • Provides middleware infrastructure products for applications characterized by high-volume transaction processing and very low latency requirements • Distributed (Grid) Application Server: In-Memory Data Grid (Caching), Compute Grid, Messaging Grid • Customer base: Financial Services, Telecom, Defense and Government

  3. Historical background • GigaSpaces integration tests consist of distributed scenarios such as replication, partitioning, local cache, messaging, etc. • Execution of 1,212 tests on a single machine took between 1 and ~2½ days, plus a great deal of time to analyze the failures • Manual distribution of tests across machines increased the gathering and analysis time, and did not optimally utilize the machines (6 machines, ~11 hours) • Today, we provide continuous feedback on 2,098 tests. If we scale out to 7 machines, we can get feedback in ~5 hours.

  4. Challenges to consider • Requirements: • Release builds faster • Increase coverage but keep constant-time regression cycles • Visibility (of reports) - on-time and comprehensive • Enable quick iterative development (IDE integration) • Formalize performance testing and make it part of the continuous tests • Reliability (and consistency) of results • Automation – minimal manual intervention! • High utilization of lab machines • Insights: • Integration may take at least 2-4 weeks • Increasing coverage also increases analysis time!

  5. TGris - Introduction • TGris is a word-play on TGrid (short for Testing Grid) and tigris (as in Panthera tigris, indicating power) • Provides distributed execution of tests (JUnit tests, or any other custom test, e.g. C++, .NET, TestNG, etc.) • Runs on top of GigaSpaces XAP (eXtreme Application Platform), Java v1.4 and above • Reduces execution time of test-suites through its distribution and provisioning capabilities • Fully automated and can be easily integrated with your favorite build server to provide continuous feedback

  6. Continuous Feedback Full automation cycle 24x7x365

  7. Architecture – Work stealing vs. Master-worker • Load balancing is one of the key techniques used to improve the performance of parallel programs. Optimal performance, however, can only come from balancing the instantaneous load at each processor. One important architectural technique commonly employed to accomplish this instantaneous load balancing is work stealing • The Master-worker pattern corresponds to the typical master-worker scheme: a master dispatches pieces of work to several workers. Each worker does its piece of work; when finished, it sends its result to the master and requests a new piece of work, and so on, until all work is done. This scheme is important in practice, since it automatically balances load
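The master-worker scheme described above can be sketched in a few lines of Java. This is a minimal illustration, not TGrid's actual code: the names (MasterWorkerDemo, run) are invented, and it uses java.util.concurrent for brevity even though the slides mention Java 1.4 support.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal master-worker sketch: the master places work units on a shared
// queue; each worker repeatedly takes a unit, processes it, and comes back
// for more. Fast workers simply take more units, so load balances itself.
public class MasterWorkerDemo {

    // Runs the given work units across `workers` threads and returns the
    // collected results (here, squaring stands in for running a test).
    public static List<Integer> run(int workers, List<Integer> units) {
        BlockingQueue<Integer> work = new LinkedBlockingQueue<>(units);
        List<Integer> results = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.execute(() -> {
                Integer unit;
                // Pull work until the queue is drained.
                while ((unit = work.poll()) != null) {
                    results.add(unit * unit);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results;
    }
}
```

Note how no explicit scheduling decision is ever made: the shared queue is what "automatically balances load" in the master-worker scheme.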

  8. Architecture – Flow
  • Agents establish a connection with the TGrid server (agents can run on any OS with Java installed)
  • The user submits a test-suite into TGrid
  • The test-suite is split into multiple test-units
  • An agent takes a test under a time-bound transaction
  • The agent forks the test's launcher in a separate JVM
  • A test-result is returned by the test's launcher to the agent
  • The agent destroys the launcher VM and returns the result into TGrid
  • The submitter collects all test results
  • The reporter receives the test-suite results and generates a report
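The fork/destroy step in the flow above (an agent launches the test in a separate JVM under a time bound, then kills the child VM) can be approximated with ProcessBuilder. This is a sketch under assumed names; TGrid's actual agent code is not shown in the slides.

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

// Sketch of an agent forking a test launcher into a separate JVM,
// waiting with a timeout (mirroring the time-bound transaction), and
// forcibly destroying the child VM if it overruns.
public class LauncherFork {

    // Launches `mainClass` in a child JVM using the current JVM's binary
    // and classpath. Returns the child's exit code, or -1 on timeout or
    // launch failure.
    public static int launch(String mainClass, long timeoutSeconds) {
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";
        ProcessBuilder pb = new ProcessBuilder(
                javaBin, "-cp", System.getProperty("java.class.path"),
                mainClass);
        pb.inheritIO(); // stream the test's output to the agent's console
        try {
            Process child = pb.start();
            if (!child.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
                child.destroyForcibly(); // enforce the time bound
                return -1;
            }
            return child.exitValue();
        } catch (Exception e) {
            return -1;
        }
    }
}
```

Running each test in its own JVM is what lets the agent reclaim a hung or crashed test cleanly: destroying the child process cannot corrupt the agent itself.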

  9. Architecture – Service Provider Interface
  [Class diagram: the SPI defines ISuiteSubmitter (submit), IReporter (report), ITestUnit, ITestLauncher (launch), and ITestUnitResult, with TestSuite/SuiteDescriptor, an abstract TestSuiteFactory, and TestUnitConfiguration for configuration. Two sample implementation families are shown: DotNet* (DotNetSuiteFactory, DotNetTestClass, DotNetTestUnit, DotNetTestLauncher, DotNetTestResult, DotNetTestException) and Benchmark* (BenchmarkFactory, BenchmarkTest, BenchmarkLauncher, BenchmarkConfig, BenchmarkResult), plus reporters such as WikiReporter and BenchmarkReporter.]

  10. Architecture – SPI flow

  public interface ITestLauncher {
      public void setup(ITestUnit testUnit);
      public ITestResult launch();
      public void tearDown();
  }

  public interface IReporter {
      public void generateReport(List<ITestUnitResult> testResult);
  }
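To make the SPI flow concrete, here is a minimal, self-contained launcher implementing the setup/launch/tearDown lifecycle from the slide. The interfaces are restated here with assumed method signatures and return types (the slide names ITestResult and ITestUnitResult without defining them), so the real TGrid SPI may differ.

```java
// Hedged sketch of a custom ITestLauncher plugging into the SPI above.
// All interface bodies below are assumptions made to keep the example
// self-contained and compilable.
public class SpiExample {

    public interface ITestUnit { String name(); }

    public interface ITestResult { boolean passed(); String unitName(); }

    public interface ITestLauncher {
        void setup(ITestUnit testUnit);
        ITestResult launch();
        void tearDown();
    }

    // A trivial launcher that "runs" a unit and reports success.
    public static class EchoLauncher implements ITestLauncher {
        private ITestUnit unit;

        public void setup(ITestUnit testUnit) { this.unit = testUnit; }

        public ITestResult launch() {
            final String name = unit.name();
            return new ITestResult() {
                public boolean passed() { return true; }
                public String unitName() { return name; }
            };
        }

        public void tearDown() { unit = null; }
    }

    // Drives one unit through the full lifecycle, guaranteeing tearDown
    // runs even if launch() throws.
    public static ITestResult runOne(String name) {
        ITestLauncher launcher = new EchoLauncher();
        launcher.setup(() -> name);
        try {
            return launcher.launch();
        } finally {
            launcher.tearDown();
        }
    }
}
```

The setup/launch/tearDown split is what lets the agent wrap launch() alone in the time-bound transaction while still cleaning up deterministically.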

  11. Test Routing capabilities
  • A test can be routed by providing a set of SLA criteria, such as a specific operating system, Java version, etc.
  • Criteria requirements are annotated on top of the test. One such annotation is @TargetPlatformCapability, for example:

  @TargetPlatformCapability(
      os = PlatformCapability.OS.WINDOWS,
      javaVersion = "1.5")
  public class HelloWorldTest {
      // will be routed to an agent running on Windows with JDK >= 1.5
  }

  • Static SLAs include: Java vendor, host (address or IP), number of processors, OS type.
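As a sketch of the matching step an agent might perform, the annotation can be redeclared with plain String attributes (the real @TargetPlatformCapability uses a PlatformCapability.OS enum, not shown in the slides) and read via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Illustrative reimplementation of SLA routing: an agent compares its own
// platform against the test's annotated requirements. Attribute names and
// types are assumptions, not TGrid's actual annotation.
public class RoutingDemo {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface TargetPlatformCapability {
        String os() default "";
        String javaVersion() default "";
    }

    @TargetPlatformCapability(os = "WINDOWS", javaVersion = "1.5")
    public static class HelloWorldTest { }

    // Returns true if an agent on (agentOs, agentJava) satisfies the
    // test's SLA; tests without the annotation may run anywhere.
    public static boolean matches(Class<?> test, String agentOs,
                                  String agentJava) {
        TargetPlatformCapability sla =
                test.getAnnotation(TargetPlatformCapability.class);
        if (sla == null) return true;
        boolean osOk = sla.os().isEmpty()
                || sla.os().equalsIgnoreCase(agentOs);
        // Lexical comparison suffices for 1.x-style versions ("1.4".."1.9");
        // a real implementation would parse version components.
        boolean javaOk = sla.javaVersion().isEmpty()
                || agentJava.compareTo(sla.javaVersion()) >= 0;
        return osOk && javaOk;
    }
}
```

With RetentionPolicy.RUNTIME the SLA stays visible on the class at run time, so the grid can route the test without any side configuration file.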

  12. Divide into different groups – 1 LAN or 2 different LANs

  13. Partition for scalability

  14. Mixed environment • Unix, Windows • TGris can be mounted on any file-system like NFS or SMB • Scripts are available for both Windows and Unix

  15. Future Plans • Available on http://www.OpenSpaces.org • Common out-of-the-box integrations – e.g. JUnit, TestNG • Distributed tests – e.g. simulating multiple clients
