
Experience with NetApp at CERN IT/DB



  1. Experience with NetApp at CERN IT/DB. Giacomo Tenaglia, on behalf of Eric Grancher and Ruben Gaspar Aparicio

  2. Outline • NAS-based usage at CERN • Key features • Future plans

  3. Storage for Oracle at CERN • 1982: Oracle at CERN, PDP-11, mainframe, VAX VMS, Solaris SPARC 32 and 64 • 1996: Solaris SPARC with OPS, then RAC • 2000: Linux x86 on single node, DAS • 2005: Linux x86_64 / RAC / SAN • Experiment and part of WLCG on SAN until 2012 • 2006: Linux x86_64 / RAC / NFS (IBM/NetApp) • 2012: all production primary Oracle databases (*) on NFS (*) apart from ALICE and LHCb online

  4. Network topology • All 10 Gb/s Ethernet • Same network for storage and cluster interconnect [Diagram: filers filer1-filer4 in HA pairs with an internal HA interconnect; servers serverA-serverE; two private Ethernet switches carrying both CRS (cluster interconnect) and storage traffic; one public Ethernet switch for the public network]

  5. Domains: space/filers

  6. Typical setup

  7. Impact of storage architecture on Oracle stability at CERN

  8. Key features • Flash Cache • RAID-DP • Snapshots • Compression

  9. Flash Cache • Helps increase random IOPS on top of what the disks deliver • Very good for OLTP-like workloads • Cache is not wiped when database servers reboot • For databases, decide which volumes to cache: fas3240> priority on, then fas3240> priority set volume volname cache=[reuse|keep] (a command sketch follows below) • 512 GB modules • 1 per controller
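  A minimal sketch of the volume-priority commands quoted above, using the Data ONTAP 7-mode priority command shown on the slide; the volume names are hypothetical:

    fas3240> priority on
    fas3240> priority set volume dbdata01 cache=keep    # hypothetical data volume: prefer keeping its blocks in Flash Cache
    fas3240> priority set volume scratch01 cache=reuse  # hypothetical scratch volume: let its blocks be evicted first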

  10. IOPS and Flash Cache

  11. IOPS and Flash Cache

  12. Key features • Flash Cache • RAID-DP • Snapshots • Compression

  13. Disk and redundancy (1/2) • Disks are getting larger and larger • speed stays ~constant → growing performance issue • bit error rate stays constant (10^-14 to 10^-16 per bit read) → growing availability issue • With x as the size (in bits read) and α the bit error rate, the data loss probability grows quickly with x (see the sketch below)
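  A sketch of the data-loss probability this slide refers to, assuming x is the number of bits read (for example during a RAID rebuild) and α the unrecoverable bit error rate:

    P(\text{at least one unrecoverable error}) = 1 - (1 - \alpha)^{x} \approx \alpha x \quad (\alpha x \ll 1)

  Illustrative numbers: a 3 TB disk is x ≈ 2.4 × 10^13 bits, so with α = 10^-14 the chance of hitting at least one unrecoverable error while reading the whole disk is roughly 1 - e^{-0.24} ≈ 0.2, which is why single-parity protection becomes risky and double parity (RAID-DP) matters.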

  14. Disks, redundancy comparison (2/2) Data loss probability for different disk types and groups

  15. Key features • Flash Cache • RAID-DP • Snapshots • Compression

  16. Snapshots • T0: take snapshot 1

  17. Snapshots • T0: take snapshot 1 • T1: file changed

  18. Snapshots • T0: take snapshot 1 • T1: file changed • T2: take snapshot 2 (a command-level sketch follows below)
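  A minimal sketch of this sequence using Data ONTAP 7-mode commands; the volume and snapshot names are hypothetical, and SnapRestore (used later for the Real Application Testing replays) is shown for completeness:

    fas3240> snap create dbdata01 snap1              # T0: take snapshot 1
    # ... clients modify files on dbdata01 ...         T1: file changed
    fas3240> snap create dbdata01 snap2              # T2: take snapshot 2
    fas3240> snap restore -t vol -s snap1 dbdata01   # roll the whole volume back to snapshot 1 (SnapRestore)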

  19. Snapshots for backups • With data growth, restoring databases in a reasonable amount of time is impossible using “traditional” backup/restore techniques • 100 TB, 10 GbE, 4 tape drives • Tape drive restore performance ~120 MB/s • Restore ~58 hours, and in practice it can be much longer (worked arithmetic below)
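  The ~58 hour figure follows directly from the numbers on the slide, ignoring tape mounts, RMAN overhead and any bottleneck on the 10 GbE link (all of which only make it longer):

    \frac{100\,\text{TB}}{4 \times 120\,\text{MB/s}} = \frac{100 \times 10^{12}\,\text{B}}{4.8 \times 10^{8}\,\text{B/s}} \approx 2.1 \times 10^{5}\,\text{s} \approx 58\,\text{hours}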

  20. Snapshots and Real Application Testing [Diagram: the production workload (inserts, updates, deletes, PL/SQL) is captured on the original 10.2 database; a clone is taken, upgraded to 11.2, and the captured workload is replayed against it]

  21. Snapshots and Real Application Testing [Diagram: the captured workload is replayed repeatedly against the upgraded 11.2 clone, with SnapRestore® bringing the clone back to its pre-replay state between replays]

  22. Key features • Flash Cache • RAID-DP • Snapshots • Compression

  23. NetApp compression factor

  24. Compression: backup on disk + 1x tape copy [Diagram: RMAN file backups land on a compressed disk buffer. Raw capacity: ~1700 TiB (576 × 3 TB disks); usable: ~1000 TiB, holding ~2 PiB of uncompressed data] (a compression-enablement sketch follows below)
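  A sketch of how compression could be enabled on such a backup volume in Data ONTAP 7-mode; the volume name is hypothetical and the exact sis options vary between ONTAP releases:

    fas3240> sis on /vol/backup01               # enable storage efficiency on the volume
    fas3240> sis config -C true /vol/backup01   # turn on compression for newly written data
    fas3240> sis status /vol/backup01           # check efficiency status and savings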

  25. Future: ONTAP Cluster Mode • Non-disruptive upgrades/operations: the “immortal cluster” • Interesting new features • Internal DNS load balancing • Export policies: fine-grained access control for NFS exports (sketch below) • Encryption and compression at storage level • NFS 4.1 implementation, parallel NFS • Scale-out architecture: up to 24 nodes (512 theoretical) • Seamless data moves for capacity or performance rebalancing, or hardware replacement
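  A hedged sketch of the export-policy feature mentioned above, using clustered Data ONTAP commands; the Vserver, policy, volume and subnet names are hypothetical and option syntax may differ between releases:

    cluster1::> vserver export-policy create -vserver vs_db -policyname oracle_nfs
    cluster1::> vserver export-policy rule create -vserver vs_db -policyname oracle_nfs -ruleindex 1 -protocol nfs -clientmatch 10.1.2.0/24 -rorule sys -rwrule sys
    cluster1::> volume modify -vserver vs_db -volume dbdata01 -policy oracle_nfs   # attach the policy to one volume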

  26. Architecture view – ONTAP Cluster Mode

  27. Possible implementation

  28. Logical components

  29. pNFS • Part of the NFS 4.1 standard (client caching, Kerberos, ACLs) • Coming with ONTAP 8.1RC2 • Not yet natively supported by Oracle • Client support in RHEL 6.2 • Control protocol: provides synchronization between the data servers and the metadata server (MDS) • pNFS runs between the client and the MDS to find out where the data is stored • Storage access protocols: file-based, block-based and object-based (a mount sketch follows below)
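  A minimal sketch of mounting a filer export over NFS 4.1 (and hence pNFS) from a RHEL 6.2 client; the hostname, export path and mount point are hypothetical, and mount option spellings vary with kernel version:

    # mount with NFS 4.1 so the client can use pNFS layouts
    mount -t nfs4 -o minorversion=1 filer1:/vol/dbdata01 /ORA/dbs03
    # confirm the NFS version actually negotiated
    nfsstat -m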

  30. Summary • Good reliability • Six years of operations with minimal downtime • Good flexibility • Same setup for different uses/workloads • Scales to our needs

  31. Q&A Thanks! Eric.Grancher@cern.ch Ruben.Gaspar.Aparicio@cern.ch
