Once the job is initiated, it can be seen under the "Import Jobs" node of the tree, where it can be monitored.Īs normal, the log file is located in the specified directory on the database server. The default is to run the job immediately. If you want to schedule the import to run at a later time, or on regular intervals, enter the details here. When you have selected your specific options, click the "Next" button. The "Options" screen allows you to increase the parallelism of the import, name the logfile and control the action if tables or unusable indexes exist. Once this is done, click the "Next" button. To load the data into a new schema, we need to add a REMAP_SCHEMA entry. If you need any specific include/exclude filters, they can be added in the "Include Exclude Filter" tab. Wait for the utility to interrogate the file, then select the schema of choice. The screens that follow will vary depending on the type of import you perform. Right-click on either the "Data Pump" or "Import Jobs" tree node and select the "Data Pump Import Wizard." menu option.Įnter the type of import you want to do and the name of the dump file that is the source of the data, then click the "Next" button. GRANT CREATE SESSION, CREATE TABLE to scott_copy The new user was created as follows.ĬREATE USER scott_copy IDENTIFIED BY scott_copy In this section we will import the SCOTT schema, exported in the previous section, into a new user. Once the job is initiated, it can be seen under the "Export Jobs" node of the tree, where it can be monitored.Īs normal, the dump file and log file are located in the specified directory on the database server. When you are ready, click the "Finish" button. If you need to keep a copy of the job you have just defined, click on the "PL/SQL" tab to see the code. Click the "Next" button.Ĭheck the summary information is correct. If you want to schedule the export to run at a later time, or on regular intervals, enter the details here. When you have selected your specific options, click the "Next" button.Įnter a suitable dump file name by double-clicking on the default name and choose the appropriate action should the file already exist, then click the "Next" button. The "Options" screen allows you to increase the parallelism of the export, name the logfile and control the read-consistent point in time if necessary. If you want to apply a WHERE clause to any or all of the tables, enter the details in the "Table Data" screen, then click the "Next" button. If you have any specific include/exclude filters, add them and click the "Next" button. When you are happy with your selection, click the "Next" button. To do this, highlight the schema of interest in the left-hand "Available" pane, then click the ">" button to move it to the right-hand "Selected" pane. For the schema export, we must select the schema to be exported. The screens that follow will vary depending on the type of export you perform. In this case I will do a simple schema export. Right-click on either the "Data Pump" or "Export Jobs" tree node and select the "Data Pump Export Wizard." menu option.Ĭheck the connection details are correct and select the type of export you want to perform, then click the "Next" button. This tree will be the starting point for the operations listed in the following sections. Expanding the "Data Pump" node displays "Export Jobs" and "Import Jobs" nodes, which can be used to monitor running data pump jobs. 
In this case I will be using the "system" connection.Įxpanding the connection node in the tree lists a number of functions, including "Data Pump". If no connections are available, click the "+" icon and select the appropriate connection from the drop-down list and click the "OK" button. The data pump wizards are accessible from the DBA browser (View > DBA).