
LDAP TIO INDEX server performance test

General setup for the LDAP TIO Index-server performance test. Henny Bekker @ SURFnet.



Presentation Transcript


  1. LDAP TIO INDEX server performance test General setup for the LDAP TIO Index-server performance test Henny Bekker @ SURFnet

  2. The LDAP-Index performance test • The setup of the test lab • What software is used in the test • The general setup for the performance test • The benchmark setup • The dataset, objectClasses and attributes • What tokenization is used • What is tested and measured • The filter settings • Latency and number of requests per second

  3. The setup of the test lab • The TIO index client and server systems • One dedicated server running out-of-the-box RedHat v7.1 • 400-MHz Intel Pentium II • 512 Mbyte RAM • 100 Mbps/Full-Duplex Ethernet interface • Maxtor 10-Gbyte IDE disk • 4 dedicated client systems running RedHat v7.1 • 266-MHz Intel Pentium II (or better) • 256 Mbyte RAM • 100 Mbps/Full-Duplex Ethernet interface • Maxtor 4-Gbyte IDE disk • Consoles connected using a ‘starview’ console switch • The network setup • A dedicated VLAN on a Cisco Catalyst 5000 switch • 100-Mbps Fast Ethernet Full-Duplex

  4. What software is used in the test • LDAP/TIO index servers • LIMS V 1.01 (Catalogix) • Desire/LDAP-TIO index v1.0 (Desire/DAASI) • MySQL v3.23.36 • OpenLDAP v1.2.10 • Apache v1.3.2 • IDDS v4.5 • TIO converter software • Tags v1.0 • Ldif2tio & tiocollapse • LDAP test software • Ldapgun v0.25 • Ldapsearch v2.0.7-14

  5. The general setup for the test • The benchmark setup • Plan the test (estimate load requirements) • The index(es) must fit in main memory • Designate PCs for server and clients • Verify network performance • tcpblast and iperf • Execute verification test • Using IDDS v4.5 • Ldapgun and ldapsearch • Use designated dataset • Loading TIOs into index servers • Run LDAP clients for 10 minutes to ‘warm up’ the index and system disk cache

  6. The general setup for the test (cont.) • The dataset to be used • A test-set of approximately 450-K entries • SURFnet ±170-K entries • DFN ±120-K entries • Needs ±150-K entries (NO, SE) • Used tokenization • DNS: cn, o, ou, l, c, gn • RFC822: mail • Full: sn, objectclass • Only objectClass=person entries • Using anonymous bind • LDAPv3 subtree searches without chasing of referrals
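The three tokenization styles named above can be sketched in a few lines. The actual splitting rules used by the Tags/ldif2tio converters are not given in the slides, so the rules below (DNS values split on dots, RFC822 mail addresses split on ‘@’ and the domain dots, ‘full’ values kept whole) are assumptions for illustration only.

```python
# Hedged sketch of the three tokenization styles (DNS, RFC822, full).
# The real Tags/ldif2tio tokenization rules may differ; the splitting
# rules below are assumptions for illustration, not the TIO software.

def tokenize(value, style):
    value = value.lower()
    if style == "dns":        # assumed: split domain-style values on dots
        return [t for t in value.split(".") if t]
    if style == "rfc822":     # assumed: split mail address on '@' and domain dots
        local, _, domain = value.partition("@")
        return [t for t in [local] + domain.split(".") if t]
    if style == "full":       # whole value kept as a single token
        return [value]
    raise ValueError(f"unknown tokenization style: {style}")

print(tokenize("surfnet.nl", "dns"))              # ['surfnet', 'nl']
print(tokenize("h.bekker@surfnet.nl", "rfc822"))  # ['h.bekker', 'surfnet', 'nl']
print(tokenize("person", "full"))                 # ['person']
```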

  7. The general setup for the test (cont.) • Exemplary search requests on the FL-DSA & CAB • The Dutch FL-DSA (1500K requests) • 1300K baseObject search on objectClass • 60K onelevel search on objectClass • 20K baseObject search on cn • 10K onelevel search on ou • 8K onelevel search on cn • 5K onelevel search on o~ • 4K onelevel search on o or ou • 4K subtree search on uid • 4K subtree search on cn • 3K subtree search on cn~ or sn~ • 3K subtree search on ou • 3K subtree search on objectClass and cn • 2K subtree search on cn or sn or uid • 2K subtree search on sn • 72K + 60 other different searches

  8. The general setup for the test (cont.) • The end-user LDAP server (750K requests) • 640K baseObject search on objectClass • 53K onelevel search on objectClass • 20K onelevel search on o • 9K subtree search on cn or mail or sn • 3K subtree search on cn or mail or sn or givenname • 2K subtree search on cn • 23K + 60 other different searches • Very exotic filter settings for subtree searches • (|(cn=kluck)(sn=kluck)(uid=kluck)(ou=kluck)) • (|(cn~=polak)(sn~=polak)(ou~=polak)) • (|(mail=nie*)(|(cn=nie*)(|(sn=nie*)(givenname=nie*))))
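The ‘exotic’ subtree filters above all follow one pattern: a single search term fanned out over several attributes with an LDAP OR. A small helper, purely hypothetical and not part of ldapgun or the TIO software, makes the pattern explicit:

```python
def or_filter(attrs, term, approx=False):
    """Fan one search term out over several attributes with LDAP OR.
    Hypothetical helper for illustration; not part of the test software."""
    op = "~=" if approx else "="
    parts = "".join(f"({a}{op}{term})" for a in attrs)
    return f"(|{parts})" if len(attrs) > 1 else parts

print(or_filter(["cn", "sn", "uid", "ou"], "kluck"))
# (|(cn=kluck)(sn=kluck)(uid=kluck)(ou=kluck))
print(or_filter(["cn", "sn", "ou"], "polak", approx=True))
# (|(cn~=polak)(sn~=polak)(ou~=polak))
```

The two printed filters reproduce the first two example filters from the slide.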

  9. What is tested and measured • Defining the LDAP search filters • From ‘simple’ to ‘extravagant’ (with lots of booleans). • Must be dealt with by both LDAP/TIO servers • The filter settings: • (cn=foo) • (cn=*foo*) • (&(cn=foo)(sn=bar)) • (&(cn=*foo*)(sn=*bar*)) • (|(cn=foo)(sn=bar)) • (|(cn=*foo*)(sn=*bar*)) • (&(!(cn=foo))(sn=bar)) • (&(!(cn=foo))(sn=*bar*)) • (&(!(cn=not_a_foo_name))(sn=*bar*)) • (&(cn=foo)(sn=*bar*)(mail=*foo*)(givenname=*bar*))
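How filters like those above behave against a directory entry can be sketched with a minimal evaluator. This is not the servers' filter machinery: filters are written as nested tuples instead of LDAP filter strings to avoid a parser, and the ‘*’ wildcards are approximated with shell-style matching.

```python
from fnmatch import fnmatch

# Minimal evaluator for AND/OR/NOT filters of the kind listed above.
# Illustration only: filters are nested tuples, not LDAP filter strings,
# and '*' wildcards use shell-style matching via fnmatch.

def matches(entry, flt):
    op = flt[0]
    if op == "&":
        return all(matches(entry, f) for f in flt[1:])
    if op == "|":
        return any(matches(entry, f) for f in flt[1:])
    if op == "!":
        return not matches(entry, flt[1])
    attr, pattern = flt                      # leaf, e.g. ("cn", "*foo*")
    return fnmatch(entry.get(attr, "").lower(), pattern.lower())

entry = {"cn": "foo", "sn": "barends", "mail": "foo@example.org"}
print(matches(entry, ("&", ("cn", "foo"), ("sn", "*bar*"))))      # True
print(matches(entry, ("&", ("!", ("cn", "foo")), ("sn", "bar")))) # False
```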

  10. What is tested and measured (cont.) • Failed subtree searches • Lims: • (&(!(cn~=*foo*))(sn~=*bar*)) • (&(!(cn=*foo*))(sn=*bar*)(mail=*bar*)) • (&(!(cn=*foo*))(&(|(cn=*bar*)(cn=*fiets*))(|(sn=*auto*)(sn=*trein*)))) • (|(cn~=foo)(sn~=bar)(ou~=bar)) • Desire: • (&(cn=*foo*)(|(sn=*bar*)(sn=*fiets*))) • (&(|(sn=*foo*)(sn=*bar*))(cn=*fiets*)) • (&(!(cn=*foo*))(&(|(cn=*bar*)(cn=*fiets*))(|(sn=*auto*)(sn=*trein*)))) • (&(!(cn~=*foo*))(sn~=*bar*)) • No failures for IDDS

  11. What is tested and measured (cont.) • Performance test results • Number of requests per second for all 10 filters • One single client (test run) • Four clients with multiple threads (max 128) • Range: 4, 8, 16, 32, 64, 128, 256 and 512 sessions • Latency tests • Measured during test run • Hits and misses • Must every query give one or more ‘hits’? • Both implementations are indifferent to hits or misses • Measuring load on the index server • Using ‘vmstat’ running at a 10-second interval
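The requests-per-second measurement can be sketched as follows. In the lab, ldapgun drives the real LDAP searches; here the search is replaced by a stub so that only the multi-threaded counting loop is shown.

```python
import threading
import time

# Sketch of a multi-threaded throughput run in the style described above:
# N client threads issue requests for a fixed duration, completed requests
# are counted. do_query is a stub standing in for a real LDAP search
# (ldapgun performs the real searches in the test setup).

def do_query():
    time.sleep(0.001)              # stand-in for one LDAP search round trip

def run_clients(n_threads, duration):
    counts = [0] * n_threads       # one counter per thread, summed at the end
    deadline = time.monotonic() + duration

    def worker(i):
        while time.monotonic() < deadline:
            do_query()
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)

total = run_clients(4, 0.5)
print(f"{total} requests in 0.5 s -> {total / 0.5:.0f} req/s")
```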

  12. Desire LDAP/TIO-Index: Performance measurement

  13. Desire: CPU load 4 to 64 simultaneous sessions

  14. Lims LDAP/TIO-Index: Performance measurement

  15. Lims: CPU load 4 to 512 simultaneous sessions

  16. IDDS: Performance measurement

  17. IDDS: CPU load 4 to 512 simultaneous sessions

  18. Lims: Heavy load performance measurement

  19. Desire: CPU load 4 to 1024 simultaneous sessions

  20. Latency tests • Unloaded server with filter (&(cn=foo)(sn=bar)) • Desire: 1000 queries in 630 seconds • Lims: 1000 queries in 8.95 seconds • IDDS: 1000 queries in 8.92 seconds • Unloaded server with filter (&(cn=*foo*)(sn=*bar*)) • Desire: 1000 queries in 3500 seconds • Lims: 1000 queries in 11.5 seconds • IDDS: 1000 queries in 11.0 seconds
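Converted to average per-query latency and the implied single-client query rate, the unloaded-server figures above work out as follows (simple arithmetic over the numbers on this slide, 1000 queries per run):

```python
# Average latency and implied single-client rate, computed from the
# unloaded-server runs above (each run is 1000 queries).
runs = {
    "Desire (&(cn=foo)(sn=bar))":      630.0,
    "Lims   (&(cn=foo)(sn=bar))":        8.95,
    "IDDS   (&(cn=foo)(sn=bar))":        8.92,
    "Desire (&(cn=*foo*)(sn=*bar*))": 3500.0,
    "Lims   (&(cn=*foo*)(sn=*bar*))":   11.5,
    "IDDS   (&(cn=*foo*)(sn=*bar*))":   11.0,
}
n_queries = 1000
for name, seconds in runs.items():
    latency_ms = seconds / n_queries * 1000   # avg ms per query
    rate = n_queries / seconds                # queries per second
    print(f"{name}: {latency_ms:.2f} ms/query, {rate:.1f} queries/s")
```

The spread is striking: for the exact-match filter, Lims and IDDS average about 9 ms per query while Desire averages 630 ms, consistent with the factor-of-100 gap stated in the conclusion.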

  21. Latency tests (cont.) • Heavily loaded server with filter (&(cn=foo)(sn=bar)) • Desire: 1000 queries in 1976 seconds • Lims: 1000 queries in 17 seconds • IDDS: 1000 queries in 61 seconds • Heavily loaded server with filter (&(cn=*foo*)(sn=*bar*)) • Desire: 1000 queries in >7200 seconds • Lims: 1000 queries in 24 seconds • IDDS: 1000 queries in 69 seconds

  22. Latency tests (cont.) • System status of the index server • In all cases enough free memory, without swapping • CPU 100% occupied • 80% user time; 20% system time • Lots of system interrupts and context switches • 5000 interrupts per second • 4000 context switches per second • IDDS and Lims use threading • Desire forks sub-processes

  23. Exchanging TIOs between Desire and Lims • To be done …

  24. Conclusion • The study of exchanging TIOs between Desire and Lims needs to be finalized • The Lims LDAP/TIO-Index server outperforms the Desire LDAP/TIO-Index server by a factor of 100 • The Lims LDAP/TIO-Index server performs as well as Innosoft IDDS v4.51 • Complex LDAP filters with boolean operators or filter types like ‘approx’ are not supported by the Lims and Desire LDAP/TIO-Index servers
