
Refreshing Test Databases Using Oracle Transportable Tablespaces and Checkpoints






Presentation Transcript


  1. Refreshing Test Databases Using Oracle Transportable Tablespaces and Checkpoints Alex Gorbachev The Pythian Group 2006 November Special thanks to Nicolae Hachi http://www.pythian.com/documents/UKOUG06-Refreshes.ppt

  2. Agenda • Who am I? • Refresh scenarios and challenges • Transportable Tablespaces and their role • Rename tablespace in pre-10g • Veritas Storage Checkpoints • Alternatives • Is Flashback an option? • ZFS Clones example

  3. Who am I? • Senior Oracle DBA at The Pythian Group • Working with Oracle since ‘97 • Pythian is a leading DBA outsourcing provider – remote DBA 24x7 • Offices worldwide – Canada, US, UK, India, Australia • I’m blogging at www.pythian.com/blogs

  4. Low Test Systems Capacity • Test systems do not earn money • Shortage of • Storage • Memory • CPU • Network capacity

  5. “Instant” Refresh • Minutes, not hours! • Hourly • Full production size

  6. Multiple Independent Clones • Each client needs its own test system • Temporary test environments (migrations) • Development teams don’t want to interfere with each other • QA tests • Massively parallel testing • What-if testing

  7. Transportable Tablespaces • Designed to facilitate fast data movement • Plug datafiles directly into another database • Export/Import from 8i onwards • Data Pump starting in 10g • TTS covers: • Datafiles and tablespaces • Tables and indexes • Constraints, grants, triggers • No sequences, synonyms, PL/SQL procedures, types, etc.

  8. TTS Limitations • Same character set • Non-supported objects • Materialized views • FBI (function-based indexes) • AQ 8.0 multi-consumer queues • Can’t rename a tablespace to import it twice • 10g can remap and rename • Same platform • 10g cross-platform with RMAN • Compatibility considerations between releases

  9. TTS Move Procedure • Generate the TTS set • Check that it is self-contained • Place tablespaces in read only (!) • Export metadata • Copy the TTS set (datafiles and metadata) • Once the copy is done, the tablespaces can be put back in read write in the source database • Import the TTS set into the target database • Set read write • 9i drawbacks called out on the slide: the read-only outage is too long; the same set can’t be imported twice (same tablespace names); no re-import after read write. A sketch of the sequence follows.
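A minimal sketch of that sequence with 9i-style exp/imp; the tablespace names (APP_DATA, APP_IDX), datafile paths and staging location are illustrative assumptions, not from the original deck:

SQL> -- check that the set is self-contained (expect no violations)
SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA,APP_IDX', TRUE);
SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;
SQL> -- read only on the source, then export the metadata
SQL> ALTER TABLESPACE APP_DATA READ ONLY;
SQL> ALTER TABLESPACE APP_IDX READ ONLY;
$ exp userid="'/ as sysdba'" transport_tablespace=y tablespaces=APP_DATA,APP_IDX file=tts.dmp
$ # copy the TTS set (datafiles + dump file) to the staging/target location
$ cp /u01/oradata/app_data01.dbf /u01/oradata/app_idx01.dbf tts.dmp /stage/
SQL> -- the source tablespaces can go back to read write once the copy is done
SQL> ALTER TABLESPACE APP_DATA READ WRITE;
SQL> ALTER TABLESPACE APP_IDX READ WRITE;
$ # plug the set into the target database, then open the tablespaces read write there
$ imp userid="'/ as sysdba'" transport_tablespace=y file=/stage/tts.dmp datafiles='/stage/app_data01.dbf,/stage/app_idx01.dbf'
SQL> ALTER TABLESPACE APP_DATA READ WRITE;
SQL> ALTER TABLESPACE APP_IDX READ WRITE;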

  10. Real-Life Scenario Requirements • Up to 1 TB production databases • Absolutely no production downtime • Weekly refreshes from the same capture • Re-capture periodically • 15 minutes of downtime per refresh • Up to 7 independent test environments • Limited test environment capacity

  11. Capture • Restore into auxiliary DB • Pre-processing • Customization • Compression • Remove sensitive info • Remove DB links • Disable FK • Prepare TTS set + capture other objects
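For the pre-processing step, a hedged sketch of generating the cleanup statements in the auxiliary database; the PYTHIAN schema and the masked table/column are hypothetical examples, not from the deck:

SQL> -- generate ALTER TABLE ... DISABLE CONSTRAINT statements for all foreign keys in the captured schema
SQL> SELECT 'ALTER TABLE "'||owner||'"."'||table_name||'" DISABLE CONSTRAINT "'||constraint_name||'";' FROM dba_constraints WHERE constraint_type = 'R' AND owner = 'PYTHIAN';
SQL> -- list database links to remove (private links must be dropped while connected as their owner)
SQL> SELECT owner, db_link FROM dba_db_links;
SQL> -- sensitive data can be masked with plain updates at this point (table/column names are hypothetical)
SQL> UPDATE pythian.customers SET card_no = NULL;
SQL> COMMIT;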

  12. Test System • Bundle test environments in one DB • Flip-flop mechanism to avoid downtime during copy • Plug tablespace in several times? • 10g rename or remap in Data Pump • 9i – prepare several copies during capture • 9i – use unsupported “remap” method

  13. Pre-10g “Remap” • Manually update the dump file • bvi, bvedit, bsed • Replace tablespace names with new ones of the same length (!) • Careful – don’t replace unwanted strings • Tablespace names in the datafile header are not checked on import (only DBID and file#) • After import the datafiles are read only, so the headers are ignored • When set read write, the headers are flushed with new info, including the new tablespace name • A hedged illustration follows.
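An illustration of the same-length substitution; the deck names bvi/bvedit/bsed, while a perl in-place edit is used here instead as an assumption, and the replacement tablespace names are examples only:

$ cp pythian.dmp pythian_copy2.dmp
$ # PYTHIAN_DATA -> PYTHIAN_DAT2 and PYTHIAN_IDX -> PYTHIAN_ID2: fixed strings of identical byte length
$ perl -pi -e 's/PYTHIAN_DATA/PYTHIAN_DAT2/g; s/PYTHIAN_IDX/PYTHIAN_ID2/g' pythian_copy2.dmp
$ # the dump file must not change size; compare before importing the second copy
$ ls -l pythian.dmp pythian_copy2.dmp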

  14. TTS Refresh • Preparations: copy the TTS set to the inactive destinations – long, but no production impact • Refresh – 15 min.: drop existing tablespaces, import the TTS set from the alternative location, create/import other objects, enable FKs, make the tablespaces read-write • (Slide diagram: several 600 GB datafile sets, a 300 GB set and the system tablespace; legend – datafiles read only / datafiles in the TTS set / datafiles that are part of the DB)

  15. Results So Far (TTS + Rename) • Zero production impact • One DB – many test environments • Reduced memory and CPU requirements while delivering several test environments • Refresh in 15 minutes • But the storage group is really unhappy: test systems use 7*2+1 = 15 copies of the production size, 15 * 300 GB = 4.5 TB • New requirements from the QA group: another 3 environments, refresh on demand – several times per day, every hour

  16. VxFS Storage Checkpoints • A filesystem technology • Comparable volume-based technology exists • A checkpoint is visible as an exact image of the base filesystem • Created in a matter of seconds • To the database it is a separate filesystem on a different mount point (see the sketch below)
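A minimal sketch of creating and mounting a Storage Checkpoint on Solaris; the disk group, volume, checkpoint name and mount points are illustrative, and exact options can differ between VxFS versions and platforms:

# fsckptadm create ckpt1 /base          # checkpoint of the filesystem mounted on /base
# fsckptadm list /base                  # show existing checkpoints
# mkdir /ckpt1
# mount -F vxfs -o ckpt=ckpt1,rw /dev/vx/dsk/oradg/oravol:ckpt1 /ckpt1   # mount it like a separate FS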

  17. Checkpoints Are Writable • Base filesystem is writable • No impact on checkpoints • Checkpoints are writable • No impact on base filesystem • No impact on other checkpoints • Changes to existing data consume additional space • Copy-on-write technology

  18. Checkpoints Are Persistent • Checkpoint information is stored on media • Not in memory • Checkpoints survive everything just like normal FS • Unmount / mount • Server reboot • System crashes • Do not protect from media corruptions (!)

  19. Storage Efficiency • Checkpoint consumes practically no space when created • Additional space consumed • Changes on the base filesystem • Changes on the checkpoint • Copy-on-write • Base filesystem and all checkpoints share the same free space pool

  20. Multiple Checkpoints • Unlimited number of checkpoints • Hierarchical checkpoints • (Slide diagram: Base → Ckpt1, Ckpt2, Ckpt3; Ckpt3 → Ckpt3-1, Ckpt3-2, Ckpt3-3; Ckpt3-2 → Ckpt3-2-1, …)

  21. Creating a Checkpoint • The master list contains an inode for every file • Each inode stores a list of descriptors pointing to the location of the file’s data – ordinary extent descriptors • A checkpoint is just another inode list, initially populated with overlay extent descriptors • (Slide diagram: master list, checkpoint, allocated extents and the free space pool)

  22. Copy-On-Write: Write to the Checkpoint • VxFS detects an overlay descriptor • Allocates a new extent • Writes the new content to the new extent • Updates the descriptor in the checkpoint to an ordinary one pointing to the new extent • Updating the same extent again doesn’t cause further metadata changes • (Slide diagram: master list, checkpoint, allocated extents and the free space pool)

  23. Copy-On-Write: Write to the Base Filesystem • A write to an extent already changed in the checkpoint is simple; a write to a “shared” extent isn’t straightforward: • Allocate a new extent • Copy the old image into it • Change the overlay descriptor in the checkpoint to an ordinary one referencing the new extent • Perform the write on the base filesystem • (Slide diagram: master list, checkpoint, allocated extents and the free space pool)

  24. Storage Checkpoints Recap • Checkpoints are writable • Multiple checkpoints • Storage efficiency • Persistency

  25. TTS + Checkpoints Refresh • (Slide diagram: the same layout as slide 14 – several 600 GB datafile sets, a 300 GB set and the system tablespace; legend – datafiles read only / datafiles in the TTS set / datafiles that are part of the DB)

  26. TTS + Checkpoints Refresh • Only one “big” base filesystem • Create a checkpoint • Import the TTS set + other objects + enable FKs • Set read-write • Repeat for the other environments • Checkpoint changes gradually consume space • (Slide diagram: /base plus checkpoints /ckpt1 … /ckpt7 and the system tablespace; legend – datafiles read only / datafiles in checkpoint read only / datafiles in the TTS set / datafiles part of the DB / datafiles in checkpoint part of the DB)

  27. How Much Space? • 300 GB – the original base • 10% changes (30 GB) x 7 environments = 210 GB • What about a fresh capture? +300 GB for overwriting the base, 810 GB in total • Overwriting is too costly: first delete the base files, create the new base, then recreate the checkpoints during the next refresh • (Slide diagram: /base plus checkpoints /ckpt1 … /ckpt7; legend as on the previous slide)

  28. TTS + Checkpoints – QA Request • What about the 3 QA test systems? • Short-living – 1% changes (3 GB) + 40 MB • + 3 test environments, + another 3 test environments • (Slide diagram: /base, checkpoints /ckpt1 … /ckpt7, QA checkpoints /cqa1 … /cqa6 and the system tablespace; legend as before)

  29. Refresh with Checkpoints Recap • Drop old tablespaces on checkpoint • Remove checkpoint • Careful with umount – check that no critical processes have inodes open on checkpoint • Create fresh checkpoint • Import tablespaces + others + FK + RW
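A hedged sketch of one refresh cycle on the Solaris/VxFS layout sketched earlier; tablespace, checkpoint, parfile and device names are illustrative and the impdp step assumes 10g:

SQL> DROP TABLESPACE PYTHIAN_DATA01 INCLUDING CONTENTS;
SQL> DROP TABLESPACE PYTHIAN_IDX01 INCLUDING CONTENTS;
$ fuser -c /ckpt1                        # nothing should still hold inodes open on the checkpoint
# umount /ckpt1
# fsckptadm remove ckpt1 /base           # drop the old checkpoint
# fsckptadm create ckpt1 /base           # fresh checkpoint of the prepared base
# mount -F vxfs -o ckpt=ckpt1,rw /dev/vx/dsk/oradg/oravol:ckpt1 /ckpt1
$ impdp parfile=imp_ckpt1.par            # plug the TTS set, then create other objects and enable FKs
SQL> ALTER TABLESPACE PYTHIAN_DATA01 READ WRITE;
SQL> ALTER TABLESPACE PYTHIAN_IDX01 READ WRITE;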

  30. Hints • Removing the last checkpoint from a “cleaned up” base can take a long time – be patient • Solaris, AIX, and Linux – VxFS 3.4+ • HP-UX – VxFS 3.5+ • Recommend at least VxFS 4.0+ • Use ODM or at least direct I/O (see below) • Test, test, test… • Several bugs on HP-UX: • Checkpoint corruption under heavy load with ODM • Autoextend problem without ODM but with direct I/O • Read-write bug
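For the direct I/O hint, one way (an assumption, not from the deck) is to mount the base filesystem with the VxFS direct I/O options when ODM is not in use; device and mount point are illustrative:

# mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/oradg/oravol /base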

  31. Alternatives to Veritas? • Flashback Database: every test environment requires a separate DB, there is no flashback tablespace, it is storage “inefficient”, and the flashback itself might take longer (a rough sketch follows) • NetApp FlexClone technology (also OEM’ed by IBM) • ZFS: Solaris 10 Update 2, OpenSolaris; being ported to Linux and FreeBSD
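If Flashback Database were used instead, each refresh would look roughly like this (10gR2 syntax with a guaranteed restore point; the restore point name is illustrative). The whole instance flashes back, which is why one database per test environment would be needed:

SQL> CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
SQL> -- ... run the destructive tests ...
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT before_test;
SQL> ALTER DATABASE OPEN RESETLOGS;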

  32. ZFS Experience (1) • Create ZFS base # zpool create pythian c1t1d0 c1t2d0 # zfs create pythian/orabase # zfs set recordsize=8k pythian/orabase # zfs set atime=off pythian/orabase # mkdir /z1 # zfs set mountpoint=/z1 pythian/orabase # chown oracle:dba /z1

  33. ZFS Experience (2) • Create base tablespaces SQL> create user pythian identified by "$E%&V#$V%$"; SQL> grant resource to pythian; SQL> create tablespace pythian_data datafile '/z1/pythian_data01.dbf' size 128M, '/z1/pythian_data02.dbf' size 128M, '/z1/pythian_data03.dbf' size 128M; SQL> create tablespace pythian_idx datafile '/z1/pythian_idx01.dbf' size 128M, '/z1/pythian_idx02.dbf' size 128M;

  34. ZFS Experience (3) • Create base schema SQL> create table pythian.obj tablespace pythian_data as select oo.object_id o1, o.* from dba_objects o, dba_objects oo where rownum <= 4000000; SQL> alter table pythian.obj add primary key (o1,object_id) using index tablespace pythian_idx; SQL> create index pythian.obj_idx1 on pythian.obj (owner,object_id) tablespace pythian_idx;

  35. ZFS Experience (4) • Export TTS SQL> BEGIN SYS.DBMS_TTS.TRANSPORT_SET_CHECK( ts_list => 'PYTHIAN_DATA,PYTHIAN_IDX' ); END; / SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS; SQL> ALTER TABLESPACE PYTHIAN_DATA READ ONLY; SQL> ALTER TABLESPACE PYTHIAN_IDX READ ONLY; $ expdp dumpfile=pythian.dmp transport_tablespaces=PYTHIAN_DATA,PYTHIAN_IDX

  36. ZFS Experience (5) • Prepare test schemas SQL> create user pythian01 identified by "e4!d6^hsk"; SQL> grant resource to pythian01; SQL> create user pythian02 identified by "0d(hs<6]d"; SQL> grant resource to pythian02; SQL> create user pythian03 identified by "iUh^lmhd3"; SQL> grant resource to pythian03; • And mount points # mkdir /z1c1 /z1c2 /z1c3

  37. ZFS Experience (6) • Clone it # zfs snapshot pythian/orabase@snap # zfs clone pythian/orabase@snap pythian/oraclone1 # zfs clone pythian/orabase@snap pythian/oraclone2 # zfs clone pythian/orabase@snap pythian/oraclone3 # zfs set mountpoint=/z1c1 pythian/oraclone1 # zfs set mountpoint=/z1c2 pythian/oraclone2 # zfs set mountpoint=/z1c3 pythian/oraclone3

  38. ZFS Experience (7) • Space usage right after cloning # zfs list NAME USED AVAIL REFER MOUNTPOINT pythian 647M 1.31G 24.5K /pythian pythian/orabase 647M 1.31G 647M /z1 pythian/orabase@s1 0 - 647M - pythian/oraclone1 0 1.31G 647M /z1c1 pythian/oraclone2 0 1.31G 647M /z1c2 pythian/oraclone3 0 1.31G 647M /z1c3

  39. ZFS Experience (8) • Create import par file USERID='/ as sysdba' DUMPFILE=pythian.dmp TRANSPORT_DATAFILES = ( /z1c1/pythian_data01.dbf ,/z1c1/pythian_data02.dbf ,/z1c1/pythian_data03.dbf ,/z1c1/pythian_idx01.dbf ,/z1c1/pythian_idx02.dbf ) REMAP_SCHEMA=PYTHIAN:PYTHIAN01 REMAP_TABLESPACE=PYTHIAN_DATA:PYTHIAN_DATA01 REMAP_TABLESPACE=PYTHIAN_IDX:PYTHIAN_IDX01

  40. ZFS Experience (9) • Import TTS impdp parfile=imp_z1c1.par • Set tablespaces read-write ALTER TABLESPACE PYTHIAN_DATA01 READ WRITE; ALTER TABLESPACE PYTHIAN_IDX01 READ WRITE;

  41. ZFS Experience (10) • Space usage after the first import # zfs list NAME USED AVAIL REFER MOUNTPOINT pythian 647M 1.31G 24.5K /pythian pythian/orabase 647M 1.31G 647M /z1 pythian/orabase@s1 0 - 647M - pythian/oraclone1 58.5K 1.31G 647M /z1c1 pythian/oraclone2 0 1.31G 647M /z1c2 pythian/oraclone3 0 1.31G 647M /z1c3

  42. ZFS Experience (11) • Repeat import for other clones # zfs list NAME USED AVAIL REFER MOUNTPOINT pythian 648M 1.30G 24.5K /pythian pythian/orabase 647M 1.30G 647M /z1 pythian/orabase@s1 0 - 647M - pythian/oraclone1 204K 1.30G 647M /z1c1 pythian/oraclone2 204K 1.30G 647M /z1c2 pythian/oraclone3 204K 1.30G 647M /z1c3

  43. ZFS Experience (12) • Delete the first 400 thousand rows # zfs list NAME USED AVAIL REFER MOUNTPOINT pythian 812M 1.14G 24.5K /pythian pythian/orabase 647M 1.14G 647M /z1 pythian/orabase@s1 0 - 647M - pythian/oraclone1 164M 1.14G 647M /z1c1 pythian/oraclone2 204K 1.14G 647M /z1c2 pythian/oraclone3 204K 1.14G 647M /z1c3 • 10% of the data is only 64.7 MB, so why 164 MB? • Because the deletes touch index leaf blocks all over the place

  44. ZFS Experience (13) • Let’s drop and recreate the table SQL> drop table pythian01.obj; SQL> create table pythian01.obj … as select …; • Now the bad news  • Solaris virtual machine crashed on CTAS • … but the good news …  • Clones survived just fine • Oracle started after reset and CTAS worked

  45. What’s Next? • Doesn’t work with ASM • Comparable volume technology exists, but… • Cloned disks conflict with the original disks • There is no rename-disk-group operation • Binary patching of the disk header is not trivial • Play with hierarchical checkpoints • Promoting checkpoints • Merging changes back to the parent

  46. References • Symantec Yellow Book “Using Local Copy Services” • VERITAS Storage Checkpoints “An Undo Button for Users and Administrators” • NetApp FlexClone Homepage • IBM Redbooks Paper “An Introduction to FlexClone Volumes” • OpenSolaris ZFS Community

  47. Q & A Thank you! gorbachev@pythian.com Alex Gorbachev The Pythian Group 2006 November http://www.pythian.com/blogs
