Presentation Transcript


  1. Scalability Testing Results and Conclusions OpenLMIS

  2. Scope of Testing • User Load • Concurrent connections • Transaction Rates • Scalability

  3. Approach • Establish performance benchmarks • Preparation of test environment and data • Progressive loading of the system and fine-tuning • Results • Conclusions

  4. Performance Goals • Performance Benchmarking • Use baseline data available from Tanzania and Zambia • Goal is to support a country on the scale of Nigeria • 4 times the workload of Tanzania: 4x population, 4x facilities, 4x system users, and 12x requisitions, based on modeling monthly rather than the quarterly replenishment cycles currently run in Tanzania (detailed test metrics are listed in the appendix) • Define an extreme-case hypothetical test scenario: • Requisitions submitted for all programs every month • 25% of all monthly user activity occurs on the last day of the month • Historical data preloaded in the DB
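A rough sketch of the scaling arithmetic behind these targets follows. Only the multipliers come from the slide (4x facilities and users, 12x requisitions from monthly instead of quarterly cycles at 4x the facilities, 25% of monthly activity on the last day); the baseline figures and the requests-per-requisition value are hypothetical placeholders, since the actual metrics are listed in the appendix.

```java
// Sketch of the workload-scaling arithmetic behind the test targets. Only the
// multipliers come from the slides; the baseline figures and the
// requests-per-requisition value are hypothetical placeholders.
public class WorkloadModel {

    record Baseline(long facilities, long users, long requisitionsPerYear) {}

    record Target(long facilities, long users, long requisitionsPerYear, long peakDayRequests) {}

    static Target scaleToNigeria(Baseline tz, long requestsPerRequisition) {
        long facilities = tz.facilities() * 4;      // 4x facilities (population also scales 4x)
        long users = tz.users() * 4;                // 4x system users
        // 4x facilities combined with monthly instead of quarterly cycles (3x) => 12x requisitions
        long requisitionsPerYear = tz.requisitionsPerYear() * 12;
        // Extreme case: 25% of a month's activity lands on the last day of the month
        long monthlyRequisitions = requisitionsPerYear / 12;
        long peakDayRequests = (long) (monthlyRequisitions * requestsPerRequisition * 0.25);
        return new Target(facilities, users, requisitionsPerYear, peakDayRequests);
    }

    public static void main(String[] args) {
        Baseline tanzania = new Baseline(5_000, 6_000, 60_000); // hypothetical baseline values
        System.out.println(scaleToNigeria(tanzania, 20));       // 20 HTTP requests per requisition, assumed
    }
}
```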

  5. Steps • Environment Setup • App Server and Web Server deployed on a shared VM • Production Database Engine deployed on a VM • Reporting Database Engine deployed on a separate VM • Nagios used for system monitoring • JMeter running on multiple machines to generate simulated users’ activity
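JMeter generated the actual simulated user activity; purely as an illustration of the idea (many worker threads firing requests at the web tier and counting failures), here is a minimal Java sketch using the JDK HTTP client. The endpoint URL, user count, and request count are placeholders, not the real test plan.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Minimal stand-in for the JMeter-driven load: N worker threads each fire
// repeated requests at the web server and count failures. The endpoint URL,
// thread count, and request count are placeholders, not the real test plan.
public class SimulatedUsers {
    public static void main(String[] args) throws Exception {
        int users = 500;                        // placeholder; later rounds scaled to 10,000 parallel users
        int requestsPerUser = 20;               // placeholder
        URI endpoint = URI.create("http://openlmis.example.org/requisitions"); // placeholder URL

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(java.time.Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder(endpoint).GET().build();

        AtomicLong failures = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {     // count any timeout or connection error
                        failures.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.MINUTES);
        System.out.println("Failed/timed-out requests: " + failures.get());
    }
}
```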

  6. Steps, continued • Preparation • Reference Data (used to populate new Requisitions, etc.) • Historical transaction data • JMeter Scripts to synthesize all users’ activities • Execution • Merge reference data into JMeter Scripts • Execution of JMeter scripts • Profile system with YourKit to identify any memory leaks • Analyze system logs to identify performance bottlenecks
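The "merge reference data into JMeter scripts" step can be pictured as parameterizing a request template from rows of reference data. The sketch below only illustrates that idea; the CSV columns and JSON fields are assumed, and inside an actual test plan JMeter's CSV Data Set Config element performs the equivalent substitution.

```java
import java.nio.file.*;
import java.util.List;

// Illustration of merging reference data into request payloads: each row of a
// reference-data CSV (the column meanings here are assumed) is substituted
// into a Save-Requisition JSON payload template.
public class MergeReferenceData {
    public static void main(String[] args) throws Exception {
        String template = """
                {"facilityCode": "%s", "programCode": "%s", "periodId": %d}""";

        List<String> rows = Files.readAllLines(Path.of("reference-data.csv")); // placeholder file
        for (String row : rows) {
            String[] cols = row.split(",");
            String payload = template.formatted(cols[0], cols[1], Integer.parseInt(cols[2]));
            System.out.println(payload); // in the real run this becomes the body of an HTTP sampler
        }
    }
}
```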

  7. First Round of Testing • Tests run with a three-VM system configuration (Production Database Server, Replication/Reporting Database Server, App+Web Server) *Percentage of cumulative timed-out requests

  8. Server Configuration, first round of tests

  9. Performance Tuning & System Refinements • Application Tuning • JSON payload optimization for the Save-Requisition and Approve-Requisition workflows • Non-full-supply product data selectively loaded while viewing a requisition • Database indexes created to improve query response time • Bug Fixing • Environment Modifications & Tuning • Added an additional VM to separate the App Server and the Web Server • Apache configuration optimized to support higher user load • c3p0 (connection pooling) tuned to maximize usage of the database connection pool • Tomcat configuration optimized to maximize the number of concurrent requests • Distributed JMeter instances across multiple workstations to generate a simulated load of 10,000 parallel users
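For context on the c3p0 tuning mentioned above, a pool is typically configured along these lines. The host, database name, pool sizes, and timeouts below are illustrative placeholders, not the values used in these tests.

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Example of the kind of c3p0 connection-pool tuning referred to on this
// slide. All values are illustrative placeholders.
public class PooledDataSourceFactory {
    public static ComboPooledDataSource create() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("org.postgresql.Driver");
        ds.setJdbcUrl("jdbc:postgresql://db-host:5432/open_lmis"); // placeholder host and database
        ds.setUser("app");
        ds.setPassword("secret");

        ds.setMinPoolSize(20);         // keep connections warm for steady load
        ds.setMaxPoolSize(200);        // sized to stay within the DB server's connection limit
        ds.setAcquireIncrement(5);     // grow the pool in small steps under bursts
        ds.setCheckoutTimeout(10_000); // fail fast (ms) instead of queueing indefinitely
        ds.setMaxIdleTime(300);        // reclaim idle connections after 5 minutes
        return ds;
    }
}
```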

  10. Second Round of Testing • Tests run with a four-VM system configuration (Production Database Server, Replication/Reporting Database Server, App Server, Web Server) *Percentage of cumulative timed-out requests

  11. Server Configuration, second round of tests

  12. Summary of Results • The projected workloads for Tanzania and Zambia were covered by the system running on a three-server environment. • The system scales to support substantially larger workloads by running the Application Server and the Web Server on individual dedicated machines.

  13. Conclusions • A three-server environment would be the minimum configuration for supporting the workloads of Tanzania or Zambia. • System performance can be improved by • Using individual dedicated machines for the Application Server and the Web Server. • Adding additional application servers and a load balancer.

  14. Typical Server Configurations

  15. Considerations • In retrospect, our worst-case testing scenario was excessive. No organization would allow all of its health centers to wait until the last minute to submit their requisitions. To maintain an even workload at the warehouses and for the delivery fleet, the organization would instead divide the health centers into groups and schedule their replenishment-cycle activities (including deadlines for submitting requisitions) uniformly throughout the month.

  16. Considerations, cont’d • Our testing tools (i.e., the set of computers running JMeter to simulate users’ activity) had a stable internet connection to the VMs hosted in the AWS cloud. The absence of a stable internet connection could render a cloud-hosted production system completely inaccessible at random times throughout the workday.

  17. Test criteria:

  18. Test criteria, cont’d:
