BACKUP/ GETTING STARTED: Backup School

Presentation Transcript


  1. BACKUP/GETTING STARTED: Backup School • Presented by: W. Curtis Preston, VP, Services Development, GlassHouse Technologies

  2. Making good on your investment • Many SANs are built in order to simplify backup yet often fail for lack of good design, processes and procedures • There are several common mistakes that people make when building a backup system • Avoiding these mistakes and taking proper action can create a backup system that is reliable and restorable

  3. What will we cover? • Common backup configuration mistakes • How to avoid them • Sizing your backup system • Configuration examples for NetBackup • Configuration examples for NetWorker

  4. Common backup configuration mistakes

  5. Where do these lessons come from? • Audits of real backup and recovery systems • Lessons learned from real horror stories • Many, many sleepless nights

  6. Too little power • Not enough tape drives • Tape drives that aren’t fast enough • Not enough slots in the tape library • Not enough bandwidth to the server

  7. Too much power • Streaming tape drives must be streamed • If you don’t, you will wear out your tape drives and decrease aggregate performance • Must match the speed of the pipe to the speed of the tape • You can actually increase your throughput by using fewer tape drives
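  (A quick back-of-the-envelope check of the pipe-versus-tape point, sketched in shell/awk; the drive and network speeds below are illustrative assumptions, not figures from the slides.)

      # Can the network feed the drive fast enough to keep it streaming?
      awk 'BEGIN {
          drive_mbs = 30                    # tape drive native speed, MB/s (assumed)
          net_mbits = 100                   # network feed to the media server, Mb/s (assumed)
          feed_mbs  = net_mbits / 8 * 0.8   # roughly 80% usable after protocol overhead
          printf "drive wants %d MB/s, network can feed ~%.1f MB/s\n", drive_mbs, feed_mbs
          if (feed_mbs < drive_mbs)
              print "drive will shoe-shine: multiplex more clients or use fewer drives"
      }'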

  8. Not using multiplexing • Defined: Sending multiple backup jobs to the same drive simultaneously • Again, drives must be streamed • Multiplexing will impact restore performance but not as much as you might think • Multiplexing can actually help your restore just as it can help your backups • Using multiplexing can greatly increase the utilization of your backup hardware

  9. Not using multistreaming • Defined: Sending multiple simultaneous backup jobs from a single client • Large systems cannot be backed up serially • Multistreaming creates a different job for each file system

  10. Using include lists • Most major backup software supports file system discovery • Still, many administrators use manually created include lists • Any perceived value is significantly outweighed by the risk it creates

  11. Too many full backups • If you are using a commercial backup and recovery product with automated media management and multiple levels, weekly full backups are a waste of tape, time and money • Monthly full backups, weekly cumulative incrementals (level 1), and daily incrementals (level 9) work just as well and use ¼ as much tape • Depending on the level of incremental activity, quarterly full backups can work just as well
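  (A rough worked example behind the tape-savings claim, sketched in shell/awk; the full and incremental sizes are assumed, not from the slides.)

      # Tape consumed per month by fulls vs. by incrementals (assumed sizes)
      awk 'BEGIN {
          full  = 1000              # full backup size, GB (assumed)
          daily = full * 0.02       # ~2% of data changes per day (assumed)
          cumul = full * 0.10       # weekly cumulative incremental, ~10% of a full (assumed)

          # Fulls dominate tape use: 4 weekly fulls vs. 1 monthly full is the "1/4" on the slide
          printf "fulls per month:        weekly scheme %d GB, monthly scheme %d GB\n", 4*full, full
          # Incrementals stay comparatively small under either scheme
          printf "incrementals per month: %d GB of dailies + %d GB of weekly cumulatives\n", 26*daily, 4*cumul
      }'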

  12. Not standardizing • Creating custom configurations for each client is easier, but much riskier • Creating a standard backup client configuration can significantly decrease risk • Create a standard exclude list, etc. and push it out to each client

  13. Not even noticing! • Backups go ignored so often; it’s like they’re the bill collector nobody wants to talk to • Backup reporting products can really help automate easy reporting • Don’t ignore backups. They will bite you.

  14. It’s just backups, right? • “I’m an experienced, seasoned systems administrator. This is just backups. How hard can they be?” • The data being backed up has become very complex, and backup systems have had to match that complexity with equally complex functionality.

  15. Not thinking about disk • Tape is not as cheap as you thought • Let’s examine a 4 TB library: 20 slots and 2 drives = $17K; 200 tapes at $70 apiece = $14K; robotic license = $10K; total = $41K (does not include labor costs) • That’s about $10/GB

  16. Disk is cheaper than you thought • ATA-based storage arrays as low as $5/GB (disk only, needs file system) • Special function arrays • Quantum DX-30 looks and behaves like a Quantum P1000. Can be used as a target for “tape-based” backups (3 usable TB, $55K list, or $18/GB) • NetApp R100 looks like any other NetApp filer. Target for SnapVault and disk-based backups, source for SnapMirror (9+ usable TB, $175K list, or $18/GB) • ATA disks are not suited for heavy random access, but are perfect for large-block I/O (e.g., backups!)

  17. You can do neat things with disk • Incremental backups are one of the greatest backup performance challenges • Use as a target for all incremental backups. (Full, too, if you can afford it.) • For offsite storage, duplicate all disk-based backups to tape • Leave disk-based backups on disk

  18. Now that I know… Building a reliable and restorable backup system

  19. Sizing the backup system

  20. Server size/Power • I/O performance more important than CPU power • CPU, memory, I/O expandability paramount • Avoid overbuying by testing prospective server under load • If you use Suns, you’ve got snoop and truss

  21. Catalog/Database size • Determine number of files (n) • Determine number of days in cycle (d) • (A cycle is a full backup and its associated incremental backups.) • Determine daily incremental size (i = n * .02) • Determine number of cycles online (c) • 150-250 bytes per file, per backup • Use a 1.5 multiplier for growth and error • Index Size = (n + (i*d)) * c * 250 * 1.5
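  (A worked example of the index-sizing formula as a small shell calculation; the file counts and retention values are made-up assumptions, not figures from the slides.)

      #!/bin/sh
      # Index Size = (n + (i*d)) * c * 250 * 1.5, with assumed inputs
      n=5000000            # files across all clients (assumed)
      d=30                 # days per cycle: monthly full + daily incrementals (assumed)
      c=3                  # cycles kept online (assumed)
      i=$(( n / 50 ))      # daily incremental file count, i = n * .02

      awk -v n="$n" -v i="$i" -v d="$d" -v c="$c" 'BEGIN {
          bytes = (n + i*d) * c * 250 * 1.5
          printf "estimated catalog/index size: %.1f GB\n", bytes / (1024*1024*1024)
      }'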

  22. Library size: Drives • Network backup • Buy twice as many backup drives as your network will support • Use only as many drives as the network will support. (You will get more with less.) • Use the other half of the drives for duplicating

  23. Library size: Drives (2) • Local backup • Most large servers have enough I/O bandwidth to back themselves up within a reasonable time if you’re using NetBackup • Usually a simple matter of mathematics: an 8 hr window for 8 TB = 1 TB/hr ≈ 277 MB/s, which means thirty 10 MB/s drives or fifteen 20 MB/s drives • Must have sufficient bandwidth to the tape drives • Filesystem vs. raw recoveries • Allow drives and time for duplicating
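  (The same window math as a shell/awk sketch, so it can be rerun with other numbers; the per-drive throughput is an assumption.)

      #!/bin/sh
      # How many drives does the backup window require?
      data_tb=8          # data to back up, in TB
      window_hr=8        # backup window, in hours
      drive_mbs=20       # achievable per-drive throughput, MB/s (assumed)

      awk -v d="$data_tb" -v w="$window_hr" -v s="$drive_mbs" 'BEGIN {
          need   = d * 1000000 / (w * 3600)       # required MB/s (decimal TB, as on the slide)
          drives = int(need / s)
          if (drives * s < need) drives++         # round up to whole drives
          printf "required throughput: %.0f MB/s; minimum %d MB/s drives: %d\n", need, s, drives
          print  "leave headroom on top of that for duplication and drive outages"
      }'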

  24. Library size: Slots (all-tape environment) • Should hold all onsite tapes • Onsite tapes automatically expire and get reused • Only offsite tapes require physical mgmt. • Should monitor library via a script to ensure that each pool has enough free tapes before you go home • Watch for those downed drive messages
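  (A minimal sketch of such a nightly check for a NetBackup library; the pool names, threshold, mail address, and the rough vmquery filtering are assumptions to adapt, and the bundled available_media goodies script is another starting point.)

      #!/bin/sh
      # Warn before going home if any pool is running out of tapes
      MIN_FREE=5                                          # minimum tapes per pool (assumed)
      for pool in DailyPool WeeklyPool OffsitePool; do    # pool names are placeholders
          total=$(/usr/openv/volmgr/bin/vmquery -pn "$pool" -b 2>/dev/null | wc -l)
          # TODO: refine the filter to count only unassigned (scratch) volumes;
          # vmquery output columns differ between NetBackup versions
          if [ "$total" -lt "$MIN_FREE" ]; then
              echo "WARNING: pool $pool is down to $total tapes" |
                  mailx -s "library tape check" backup-admins@example.com
          fi
      done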

  25. Library size: Slots (disk/tape environment) • Do incremental backups to disk • Library needs only to hold onsite full tapes and the latest set of copies • On-site tapes and disk-based backups automatically expire and get reused • Only offsite tapes require physical mgmt. • Should monitor library and disk via a script to ensure that each pool has enough free tapes before you go home • Watch for those downed drive messages

  26. Local or remote backup? • Throughput (in 8 hrs), if you “own the wire:” • 10 Mb = 20 GB, 100 Mb = 200 GB • GbE = 500 GB – 1 TB (Also must “own the box.”) • Greater than 500 GB should be “local” • LAN-free backups allow you to share a large tape library by performing “local” backups to a “remote, shared” device • More than one 500+ GB server, buy a SAN! • Only one 500+ GB server, plan for a SAN! • (NetBackup = SSO, NetWorker = DDS)

  27. BACKUP/GETTING STARTED: Backup School, Part 2 • W. Curtis Preston, VP, Services Development, GlassHouse Technologies

  28. Multistreaming: NetBackup • Defined: Starting multiple simultaneous backup jobs from a single client • Maximum jobs per client > 1 • Check “Allow multiple data streams.” • ALL_LOCAL_DRIVES, or multiple entries in file list • Maximum jobs per policy > 1 or unchecked • Need storage unit with more than one drive, or one drive with multiplexing enabled • Can change max jobs per client using the Server Properties -> Clients tab (4.5) • By default, will not exceed one job per filesystem, but can bypass this if you make your own file list
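  (If you build the file list by hand rather than using ALL_LOCAL_DRIVES, stream boundaries can be forced with NEW_STREAM directives in the policy’s backup selections list. A minimal sketch, assuming “Allow multiple data streams” is enabled on the policy; the paths are placeholders.)

      NEW_STREAM
      /export/home
      NEW_STREAM
      /var
      NEW_STREAM
      /opt/app/data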

  29. Multistreaming (Parallelism): NetWorker • Use “All” saveset or multiple entries in the saveset list • Set the parallelism setting for server and, if necessary, the storage node • Set client parallelism value in client attributes • Must have multiple drives available, or one drive with target sessions set higher than one • Will not exceed number of disks or logical volumes on the client (see maximum-sessions in manual)
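  (A sketch of setting those parallelism values from the command line with nsradmin; the resource and attribute names are the usual ones, but verify the syntax against your NetWorker release, since the same settings can also be made in the GUI.)

      # Inside an nsradmin session against the NetWorker server (names are placeholders)
      nsradmin> . type: NSR; name: backupserver
      nsradmin> update parallelism: 16

      nsradmin> . type: NSR client; name: bigclient
      nsradmin> update parallelism: 4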

  30. Multiplexing: NetWorker • Set target sessions per device, allocating how many sessions may be sent to that device • Global setting for all backups that go to that device

  31. Multiplexing: NetBackup • Max multiplexing per drive in storage unit configuration > 1 • Media multiplexing in schedule > 1 • Use higher multiplexing for incremental backups if going to tape (6-8) • Use lower multiplexing for local backups (2) • No need to multiplex disk storage units • Multiple policies can multiplex to the same drive, but multiple media servers cannot

  32. Using Include lists -- not • NetBackup – ALL_LOCAL_DRIVES in file list • NetWorker – All in saveset field • Automatically excludes NFS/CIFS drives • Does not include dynamically mounted drives not in /etc/*fstab

  33. What about database clients? • Use scripts that parse lists of databases: • /var/opt/oracle/oratab for Oracle • MS-SQL list in registry • Master database in Sybase • Some backup products support “All” for databases • Remember to write a standardized script with parameters to back up databases
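  (A sketch of such a standardized wrapper for Oracle, discovering instances from oratab instead of a hand-kept include list; the backup_one_instance body is a placeholder for whatever your site actually runs, e.g. an RMAN script.)

      #!/bin/sh
      # Back up every Oracle instance listed in oratab (Solaris path, as on the slide)
      ORATAB=/var/opt/oracle/oratab

      backup_one_instance() {
          sid=$1
          echo "starting backup of instance $sid"
          # placeholder: call your standard RMAN / hot-backup script here, passing $sid
      }

      # oratab format: SID:ORACLE_HOME:startup_flag; skip comments and blank lines
      grep -v '^#' "$ORATAB" | awk -F: 'NF >= 2 && $1 != "" {print $1}' | while read sid; do
          backup_one_instance "$sid"
      done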

  34. Incremental backups: NetBackup • Create staggered monthly full backups using calendar-based scheduling • Create staggered weekly cumulative incrementals using CBS • Create daily incremental backups using frequency-based backups • (Check Allow after run day) • Delete window from previous day for CBS

  35. Incremental backups: NetWorker • Do not use the Default schedule! • Create 28 schedules with a monthly full, weekly level 1, and daily incremental; name them after the full day • Do not specify a schedule for the Group • Assign the 28 schedules evenly across all clients based on size

  36. Standardization: NetWorker • Use All saveset entry • To exclude files, use standard directives for all clients

  37. Standardization: NetBackup • Use ALL_LOCAL_DRIVES • Non-Windows clients: Use standard exclude list and push out from master using bpgp. • Windows clients: Use standard exclude list and push out from master using bpgetconfig -M and bpsetconfig -h.
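  (A sketch of that push for Windows clients; the client names, template path, and exact bpgetconfig/bpsetconfig arguments are assumptions to check against your NetBackup version.)

      #!/bin/sh
      # Push one standard exclude list from the master to each Windows client
      ADMINCMD=/usr/openv/netbackup/bin/admincmd
      TEMPLATE=/usr/local/etc/standard_excludes.txt      # EXCLUDE = ... entries (assumed path)

      for client in winsrv01 winsrv02 winsrv03; do       # client names are placeholders
          # keep a copy of the current settings for rollback/auditing
          "$ADMINCMD"/bpgetconfig -M "$client" > /tmp/"$client".config.before
          # push the standard entries to the client
          "$ADMINCMD"/bpsetconfig -h "$client" "$TEMPLATE"
      done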

  38. Backup reporting: NetBackup • Watch activity and device monitors • bperror • bpdbjobs -report • bpdbjobs –report –all_columns • /usr/openv/netbackup/logs • /usr/openv/logs • /usr/openv/volmgr/logs
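  (A minimal morning-report sketch built from those commands; the flags are standard, but output columns differ between NetBackup versions, so any parsing you bolt on is an adaptation.)

      #!/bin/sh
      # Quick daily status: errors from the last 24 hours, then recent job detail
      ADMINCMD=/usr/openv/netbackup/bin/admincmd

      echo "=== Backup status, last 24 hours ==="
      "$ADMINCMD"/bperror -U -backstat -hoursago 24

      echo "=== Recent jobs ==="
      "$ADMINCMD"/bpdbjobs -report -most_columns
      # pipe the above through grep/awk to flag non-zero status codes for your version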

  39. Backup reporting: NetWorker • Watch nwadmin screens • mminfo • nsrinfo • mmlocate • nsrmm • /nsr/logs
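  (A comparable NetWorker sketch using mminfo; the query and report fields are common ones but version-dependent, and the server name is a placeholder.)

      #!/bin/sh
      # Save sets written in the last day, then a media overview
      SERVER=backupserver                       # placeholder NetWorker server name

      mminfo -s "$SERVER" -q 'savetime>=last day' -r 'client,name,level,totalsize'
      mminfo -s "$SERVER" -m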

  40. Disk-to-disk backup: NetWorker • If using regular disk, use a file type device • Disk backup is an extra-cost option • If using a virtual tape library, treat it like a tape library • Use cloning to duplicate disk-based backups to tape and send them offsite

  41. Disk-to-disk backup: NetBackup • If using regular disk, use disk-based storage unit • (No extra cost for disk storage units!) • If using virtual tape library, treat it like a tape library • Use vault to duplicate disk-based backups to tape and send them offsite

  42. What about my SAN and NAS?

  43. SAN: LAN-free, Client-free, and Server-free backup • NAS: NDMP filer to self, filer to filer, filer to server, & server to filer

  44. LAN-free backups: How does this work? • SCSI reserve/release • Third-party queuing system • Levels of drive sharing • Restores

  45. How client-free backups work • Back up transaction logs to disk • Establish backup mirror • Split backup mirror and back it up

  46. How client-free recoveries work • Restore backup mirror from tape • Restore primary mirror from backup mirror • Replay transaction logs from disk

  47. Server-free backups • Server directs client to take a copy-on-write snapshot • Client and server record block and file associations • Server sends XCOPY request to SAN

  48. Server-less restores • Changing block locations • Image-level restores • File-level restores

  49. NDMP configurations • Filer to self • Filer to filer • Filer to server • Server to filer

  50. Using NDMP • Level of functionality depends on the DMA you choose • Robotic support • Filer-to-library support • Filer-to-server support • Direct access restore support
