

  1. Chapter 13 Data Pump

  2. Data Pump • Tool that replaces the legacy exp/imp utilities • Data Pump is a scalable, feature-rich utility that allows you to extract objects and data from a database • You can use the export file to: • Take a point-in-time backup of your database • Efficiently copy large amounts of objects/data from one database environment to another • Extract SQL from the export file • Partially extract specific objects or data from the export file • Easily stop/suspend/restart operations

  3. Data Pump Architectural Components • expdp (Data Pump export utility) • impdp (Data Pump import utility) • DBMS_DATAPUMP PL/SQL package (Data Pump API) • DBMS_METADATA PL/SQL package (Data Pump Metadata API)

  4. Data Pump Architecture

  5. Steps Required to Export Data • Create a database-directory object that points to an operating-system directory that you want to write/read Data Pump files to/from. • Grant read, write on the directory to the database user running the export. • From the operating-system prompt, run the expdp utility.

  6. Creating a Database Directory • Use CREATE DIRECTORY statement to create a directory: SQL> create directory dp_dir as '/oradump'; • View directory details: SQL> select owner, directory_name, directory_path from dba_directories;

  7. Granting Access to the Directory • As a privileged user, grant access to the directory to the user who will be invoking expdp: SQL> grant read, write on directory dp_dir to <user>;

  8. Taking an Export • From the operating system, invoke expdp: $ expdp darl/foo directory=dp_dir tables=inv dumpfile=exp.dmp logfile=exp.log

  9. Importing Using Data Pump • Similar to export, but using the import utility. • You need to set up the directory and grant access to it: • Create a database-directory object that points to an operating-system directory that you want to write/read Data Pump files to/from. • Grant read, write on the directory to the database user running the export or import. • From the operating-system prompt, run the impdp command.
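
A minimal import sketch, assuming the dp_dir directory object and the exp.dmp dump file produced by the export example on slide 8 (the imp.log log-file name is arbitrary):
$ impdp darl/foo directory=dp_dir dumpfile=exp.dmp logfile=imp.log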

  10. Using Data Pump Interactive Command Mode • Allows you to interact with a running Data Pump job from a command-line interface. • There are two ways to get to the interactive command-mode prompt: • Press Ctrl+C in a Data Pump job that you started via expdp or impdp. • Use the ATTACH parameter to attach to a currently running job. • Most useful commands: status, start_job, kill_job, parallel, stop_job.
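
A short sketch of an interactive session; the job name sys_export_schema_01 is a typical system-generated name and is assumed here:
$ expdp darl/foo attach=sys_export_schema_01
Export> status
Export> parallel=4
Export> continue_client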

  11. Attaching to a Running Job • You can attach to a currently running Data Pump job to view its status, stop/start it, and so on. • Determine the job name: SQL> select owner_name, operation, job_name, state from dba_datapump_jobs; • Use the ATTACH command line switch: $ impdp darl/engdev attach=sys_import_schema_02

  12. Stopping and Restarting a Job • You can stop and restart a Data Pump job • First attach to the job, then issue interactive commands: $ impdp darl/foo attach=sys_import_table_01 Import> stop_job Import> start_job

  13. Terminating a Data Pump Job • You might find yourself in a scenario where you have a long-running Data Pump job that appears to be hung. • You want to kill the job as gracefully as possible. • First attach to the job, then kill it: Import> kill_job

  14. Using a Parameter File • Parameter files are useful for repeatability and consistency. • Parameter files also make it easier to handle parameters that use single and double quotes. • Single and double quotes are sometimes misinterpreted by the OS when entered on the command line. • Example parameter file (pfile.ctl):
userid=darl/foo
directory=dp
dumpfile=invp.dmp
logfile=invp.log
tables=f_sales
• Invoke Data Pump with the parameter file: $ impdp parfile=pfile.ctl

  15. Estimating the Size of Export Jobs • Sometimes it’s useful to estimate the size of a job before starting it. • Helps you get an idea of how much disk space will be required. $ expdp dbauser/foo estimate_only=y full=y nologfile=y

  16. Listing the Contents of Dump Files • Allows you to translate the contents of a dump file created by expdp into SQL statements. • Very handy for generating a copy of the SQL required to re-create an environment. • Use the SQLFILE parameter: $ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp \ SQLFILE=dpump_dir2:expfull.sql • In this mode, impdp doesn’t import anything; it only creates a file that contains the SQL required to re-create the objects within the dump file.

  17. Exporting and Importing Directly Across the Network • Powerful technique for copying environments. • Efficient way to copy users and data from one database to another. • Create the users in the target environment. • Create a database link. • Create a directory object. • Run the impdp command using the NETWORK_LINK parameter, pointing it at the database link you created in the prior step (see the sketch below).
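
A minimal network-import sketch run on the target database; the link name source_db, the connect string SOURCE_DB, and the heera schema are assumptions for illustration:
SQL> create database link source_db connect to darl identified by foo using 'SOURCE_DB';
$ impdp darl/foo directory=dp_dir network_link=source_db schemas=heera logfile=net_imp.log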

  18. Transportable Tablespaces • Powerful feature for copying large amounts of data from one environment to another. • Ensure the tablespace is self-contained. • Make the tablespaces being transported read-only. • Use Data Pump to export the metadata for the tablespaces being transported. • Copy the Data Pump export dump file to the destination server. • Copy the datafile(s) to the destination database. • Import the metadata into the destination database (see the sketch below).
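
A minimal transportable-tablespace sketch, assuming a single tablespace named TBSP1; the dump-file name and datafile path are illustrative only:
SQL> exec dbms_tts.transport_set_check('TBSP1', true);
SQL> select * from transport_set_violations;
SQL> alter tablespace tbsp1 read only;
$ expdp darl/foo directory=dp_dir dumpfile=tts.dmp transport_tablespaces=TBSP1
(copy tts.dmp and the TBSP1 datafile(s) to the destination server, then:)
$ impdp darl/foo directory=dp_dir dumpfile=tts.dmp transport_datafiles='/ora02/dbfile/NEWDB/tbsp101.dbf'
SQL> alter tablespace tbsp1 read write;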

  19. Exporting Tablespace Metadata • Sometimes it’s useful to capture tablespace information and transfer it to a new database environment. • Use a combination of FULL and INCLUDE=TABLESPACE. $ expdp darl/foo directory=dp dumpfile=phredstg.dmp content=metadata_only full=y \ include=tablespace

  20. Specifying Different Datafile Paths and Names • When you’re copying database environments, oftentimes the source and target servers have different mount point names. • You can rename the datafiles on the import using REMAP_DATAFILE, for example via a parameter file:
userid=darl/foo
directory=dp
dumpfile=phredstg.dmp
full=y
include=tablespace:"like 'TBSP%'"
remap_datafile="'/ora01/dbfile/O11R2/tbsp101.dbf':'/ora02/O11R2/tb1.dbf'"
remap_datafile="'/ora01/dbfile/O11R2/tbsp201.dbf':'/ora02/O11R2/tb2.dbf'"
remap_datafile="'/ora01/dbfile/O11R2/tbsp301.dbf':'/ora02/O11R2/tb3.dbf'"

  21. Changing Segment and Storage Attributes • You may have very different segment and storage requirements when copying database environments. • Use the TRANSFORM parameter to omit segment and storage attributes from the DDL generated on import. $ impdp darl/foo directory=dp dumpfile=inv.dmp transform=segment_attributes:n $ impdp darl/foo directory=dp dumpfile=inv.dmp transform=storage:n

  22. Importing into a Different Tablespace from the Original • You may want to export a user and, on import, map the old username to a new username and map the tablespaces associated with the user’s objects to new tablespace names. • Use REMAP_SCHEMA and REMAP_TABLESPACE: $ impdp darl/foo directory=dp dumpfile=rm.dmp remap_schema=HEERA:CHAYA \ remap_tablespace=TBSP1:V_DATA tables=heera.ticket

  23. Sophisticated Filtering Mechanisms • Use the QUERY parameter to export or import subsets of data. • Use the SAMPLE parameter to export a percentage of the rows in a table. • Use the CONTENT parameter to exclude or include data and metadata. • Use the EXCLUDE parameter to specifically name items to be excluded. • Use the INCLUDE parameter to name the items to be included (thereby excluding other non-dependent items not included in the list). • Use the SCHEMAS parameter to specify that you only want a subset of the database’s objects (those that belong to the specified user or users).

  24. Filtering via a Query • Use this syntax to filter via a query: QUERY = [schema.][table_name:] query_clause • Example using a parameter file:
userid=darl/foo
directory=dp
dumpfile=inv.dmp
tables=inv,reg
query=inv:"WHERE inv_desc='Book'"
query=reg:"WHERE reg_id <= 20"

  25. Exporting a Percentage of the Data • Doesn’t work well when exporting combinations of parent/child tables, because the percentage doesn’t enforce that parent rows match corresponding child rows. • General syntax for exporting a percentage of data in a table: SAMPLE=[[schema_name.]table_name:]sample_percent $ expdp darl/foo directory=dp tables=inv sample=10 dumpfile=inv.dmp

  26. Excluding Objects from the Export File • Use this general syntax to exclude objects from being exported: EXCLUDE=object_type[:name_clause] [, ...] • Example using a parameter file:
userid=darl/foo
directory=dp
dumpfile=inv.dmp
tables=inv
exclude=index:"LIKE 'INV%'"

  27. Including Only Specific Objects in an Export File • Use INCLUDE to specify the objects to be included. • Example parameter-file entries:
directory=datapump
dumpfile=pl.dmp
include=function:"='ISDATE'",procedure:"='DEPTREE_FILL'"

  28. Excluding Objects from Import • Use EXCLUDE to exclude objects on import: $ impdp darl/foo dumpfile=d.dmp directory=dp exclude=TRIGGER,PROCEDURE

  29. Including Objects in Import • To include only specific objects on import, use INCLUDE. • Example parameter file:
userid=darl/foo
directory=dp
dumpfile=h.dmp
schemas=HEERA
include=table:"like 'A%'"

  30. Creating a Consistent Export • No consistent=y switch (like the old exp utility) • Use flashback_scn or flashback_time SQL> select current_scn from v$database; $ expdp darl/foo directory=dp flashback_scn=8400741902387
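
A hedged sketch of the flashback_time alternative; this entry is typically placed in a parameter file so the quotes aren’t mangled by the shell, and the timestamp value is illustrative only:
flashback_time="to_timestamp('15-01-2024 10:00:00','dd-mm-yyyy hh24:mi:ss')"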

  31. Importing When Objects Already Exist • Use TABLE_EXISTS_ACTION to specify behavior if the object already exists • The TABLE_EXISTS_ACTION parameter takes the following options: • SKIP (default if not combined with CONTENT=DATA_ONLY) • APPEND (default if combined with CONTENT=DATA_ONLY) • REPLACE • TRUNCATE
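
A minimal sketch, reusing the inv table and dump file from earlier examples; the choice of APPEND is illustrative:
$ impdp darl/foo directory=dp dumpfile=inv.dmp tables=inv table_exists_action=append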

  32. Renaming a Table • Rename a table on import • General syntax: REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename or REMAP_TABLE=[schema.]old_tablename[:partition]:new_tablename
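
A minimal sketch using names from earlier examples; the new table name inv_bkup is an assumption for illustration:
$ impdp darl/foo directory=dp dumpfile=inv.dmp tables=inv remap_table=inv:inv_bkup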

  33. Remapping Data • Allows you to obfuscate data on import. • Some production data you may not want imported with easily readable values. • Create a PL/SQL function to obfuscate the data $ impdp darl/foo directory=dp dumpfile=cust.dmp tables=customers remap_data=customers.last_name:obfus.obf
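
A minimal sketch of the obfuscation package referenced above as obfus.obf; the package name and the translate-based scrambling are assumptions, and the function must accept and return the same datatype as the remapped column:
SQL> create or replace package obfus as
       function obf(p_in varchar2) return varchar2;
     end obfus;
     /
SQL> create or replace package body obfus as
       function obf(p_in varchar2) return varchar2 is
       begin
         -- simple letter-substitution scramble; replace with whatever masking you need
         return translate(upper(p_in),
                          'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                          'MNBVCXZLKJHGFDSAPOIUYTREWQ');
       end obf;
     end obfus;
     /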

  34. Cloning a User • Use REMAP_SCHEMA to rename a user during import $ impdpdarl/foo directory=dpremap_schema=inv:inv_dwdumpfile=inv.dmp

  35. Miscellaneous Features • Suppressing a Log File • Using Parallelism • Specifying Additional Dump Files • Reusing Output File Names • Creating a Daily DDL File • Compressing Output • Encrypting Data
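
A few hedged command-line sketches of these features; the file names are illustrative, and COMPRESSION=ALL and ENCRYPTION_PASSWORD may require separately licensed options depending on your edition and version:
• Using parallelism and multiple dump files: $ expdp darl/foo directory=dp dumpfile=inv%U.dmp parallel=4
• Reusing output file names: $ expdp darl/foo directory=dp dumpfile=inv.dmp reuse_dumpfiles=y
• Suppressing a log file: $ expdp darl/foo directory=dp dumpfile=inv.dmp nologfile=y
• Compressing output: $ expdp darl/foo directory=dp dumpfile=inv.dmp compression=all
• Encrypting data: $ expdp darl/foo directory=dp dumpfile=inv.dmp encryption_password=foobar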

  36. Monitoring Data Pump Jobs • Screen output • Data Pump log file • Database alert log • Querying the status table • Querying data-dictionary views • Interactive command-mode status • Operating-system utilities such as the process-status command (ps)
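
A couple of data-dictionary sketches for monitoring; the OPNAME filter on v$session_longops is an assumption that matches typical system-generated Data Pump job names:
SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
SQL> select sid, serial#, opname, sofar, totalwork
     from v$session_longops
     where opname like 'SYS_%' and sofar <> totalwork;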

  37. Data Pump Legacy Mode • Enter old exp or imp parameters at the command line when invoking Data Pump • Data Pump automatically translates them into parameters it understands • Available from 11g R2 • Very handy feature if you’re familiar with the old exp/imp utilities
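
A hedged sketch of legacy mode: the old-style FILE and LOG parameters below trigger legacy mode, and Data Pump translates them into DUMPFILE and LOGFILE (writing to a default directory such as DATA_PUMP_DIR unless one is specified):
$ expdp darl/foo tables=inv file=inv.dmp log=inv.log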

  38. Summary • Data Pump is a flexible and powerful utility for taking point-in-time backups and copying database environments. • As a DBA, you should be proficient with this utility. • It provides sophisticated filtering parameters for transferring subsets of objects and data. • Use Data Pump in legacy mode to see how the old exp/imp parameters are translated into Data Pump parameters.
