gDBClone Powerful Database Clone/Snapshot Management Tool (Doc ID 2099214.1)
Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.4 and later
Oracle Database Cloud Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Linux x86-64
Oracle Solaris on x86-64 (64-bit)
Oracle Solaris on SPARC (64-bit)

Abstract
gDBClone is a tool that simplifies the management of database test and development environments by leveraging Oracle ACFS snapshots and/or Oracle Exascale sparse clones.

History
Author: Ruggero Citton - RAC Pack, Cloud Innovation and Solution Engineering Team

gDBClone Version 3.0.6:
- For a snapshot from a standby DB on Oracle Exascale, fix for bug:34301681 is required
- For a snapshot on Exascale with 23c, the fix for bug 35856329 is required

gDBClone Version 3.0.5:
What's new
Details

### -------------------------------------------------------------------
### Disclaimer:
###
### EXCEPT WHERE EXPRESSLY PROVIDED OTHERWISE, THE INFORMATION, SOFTWARE,
### PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS. ORACLE EXPRESSLY DISCLAIMS
### ALL WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT
### LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
### PURPOSE AND NON-INFRINGEMENT. ORACLE MAKES NO WARRANTY THAT: (A) THE RESULTS
### THAT MAY BE OBTAINED FROM THE USE OF THE SOFTWARE WILL BE ACCURATE OR
### RELIABLE; OR (B) THE INFORMATION, OR OTHER MATERIAL OBTAINED WILL MEET YOUR
### EXPECTATIONS. ANY CONTENT, MATERIALS, INFORMATION OR SOFTWARE DOWNLOADED OR
### OTHERWISE OBTAINED IS DONE AT YOUR OWN DISCRETION AND RISK. ORACLE SHALL HAVE
### NO RESPONSIBILITY FOR ANY DAMAGE TO YOUR COMPUTER SYSTEM OR LOSS OF DATA THAT
### RESULTS FROM THE DOWNLOAD OF ANY CONTENT, MATERIALS, INFORMATION OR SOFTWARE.
###
### ORACLE RESERVES THE RIGHT TO MAKE CHANGES OR UPDATES TO THE SOFTWARE AT ANY
### TIME WITHOUT NOTICE.
###
### Limitation of Liability:
###
### IN NO EVENT SHALL ORACLE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
### SPECIAL OR CONSEQUENTIAL DAMAGES, OR DAMAGES FOR LOSS OF PROFITS, REVENUE,
### DATA OR USE, INCURRED BY YOU OR ANY THIRD PARTY, WHETHER IN AN ACTION IN
### CONTRACT OR TORT, ARISING FROM YOUR ACCESS TO, OR USE OF, THE SOFTWARE.
### -------------------------------------------------------------------

Executive Overview

As database-driven applications grow rapidly, maximizing agility,
reducing management overhead and cost savings are top priority for IT
organizations today. Customers must have solutions to contain data
redundancy that is sprawling out of control. On average, more than
10 full copies of a production database are created for test,
development and reporting purposes.

Database Provisioning Lifecycle and Challenges

Managing database test and development (test & dev) environments can be challenging and costly in time and resources. Production databases often require 8-10 or more copies for various types of test and development purposes. Each copy of a database consumes significant storage space. Database copies are typically recycled (created, deleted or refreshed) often. Conventional ways of manually managing test & dev environments can be complex, costly and time consuming. The test & dev life cycle can be defined at a high level as follows:
- An initial copy of a production database is created on a test & dev cluster as a master copy.
- Data scrubbing may be required, either on the production server or on the test cluster. Database or data scrubbing could mean data filtering, redaction or any other technique the user chooses in order to provide only the data set that is needed or authorized for test and development purposes.
- A database home should be identified or provisioned in preparation for deploying a test database.
- Multiple copies of databases may be provisioned off the master copy on the test cluster.
- Database administrators manage the environments and clean up when a database copy is no longer needed.

Managing Test & Dev Environments Does Not Have to be Complex

gDBClone is a tool that was developed to provide a simple and
efficient method for cloning a database for test and dev environments.
gDBClone leverages ASM Cluster File System (ACFS) snapshot
functionality to create space-efficient copies of databases and manage a
test and dev database life cycle. Starting with gDBClone 3.0.6, clone and snap operations are supported on Exascale as well.
Purpose of Database Duplication

A duplicate database is useful for a variety of purposes, most of which involve testing and upgrades. You can perform the following tasks in a duplicate database:

» Test backup and recovery procedures

For example, you can duplicate the production database on host1 to host2, and then use the duplicate database on host2 to practice restoring and recovering this database while the production database on host1 operates as usual.

Test and Dev environment

The following diagram illustrates a typical test and dev environment that can be created and managed with the gDBClone command. In this example, a copy of a production database is cloned into the test & dev cluster with a single gDBClone command ("gDBClone clone"). The source database may be on an Exadata Database Machine or any other legacy server and any type of file system, including Oracle ASM and Oracle Exascale. gDBClone is used to provision sparse, space-efficient copies of databases ("gDBClone snap"). These copies may be deployed for test and dev purposes. Only a small amount of incremental storage is required by the snapshots after the initial creation of the master copy, as illustrated by the storage capacity illustration on the right of figure 2 above.

An Oracle ACFS snapshot is an online, read only
or read write, point in time copy of an Oracle ACFS file system. The
snapshot copy is space efficient and uses Copy-on-Write (ACFS COW)
functionality. Before an Oracle ACFS file extent is modified or deleted,
its current value is preserved in the snapshot to maintain the point in
time view of the file system. Oracle ACFS supports 1023 snapshots per
file system.
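The storage benefit of copy-on-write snapshots described above can be illustrated with a rough back-of-the-envelope model. This is not part of gDBClone; the 5% change rate is an illustrative assumption, not a measured figure.

```python
# Rough model: storage for N conventional full copies of a database vs.
# N ACFS-style copy-on-write snapshots, where only changed extents
# consume new space after the master copy is created.

def full_copies_gb(db_size_gb, n_copies):
    """Each conventional copy duplicates the entire database."""
    return db_size_gb * n_copies

def cow_snapshots_gb(db_size_gb, n_snaps, change_rate=0.05):
    """One master copy plus, per snapshot, only the changed extents
    (the 5% change rate is an illustrative assumption)."""
    return db_size_gb + db_size_gb * change_rate * n_snaps

# 10 copies of a 1 TB database, as in the overview above:
print(full_copies_gb(1000, 10))    # 10000 GB with full copies
print(cow_snapshots_gb(1000, 10))  # 1500.0 GB with COW snapshots
```

Under these assumptions, the snapshot approach needs roughly 15% of the storage that ten full copies would consume.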
Database Clone vs Database Snapshot creation time

The database clone creation time depends on the database size and on the network throughput. In the case of a database snapshot, the cloning operation is very fast, as it's independent of the database size and/or network speed. In the following example, we compare a clone/snap of a 3 GB vs a 60 GB database:

Supported Configurations and Features

gDBClone Clone

Creates a clone database (as Primary or as Physical Standby) from a production database, duplicating (physical copy) the DB to the Test & Dev cluster using "RMAN Duplicate from Active Database" (by default gDBClone allocates 3 RMAN channels; you may override this using the "-channels <RMAN channels number>" command option). The source database may be on an Exadata Database Machine or any other legacy server and any type of file system, including Oracle ASM. gDBClone needs to connect to the remote database, normally through the SCAN (Single Client Access Name) listener. It's also possible to clone a production database from a given RMAN full
backup location (the full backup can be on Filesystem/NFS, on Oracle
Database Backup Cloud Service or on a tape) and, if needed, perform the
upgrade within one command. The source database can be SI (Single
Instance), RAC OneNode or RAC, primary or standby. The target clone
database can be SI (Single Instance), RAC OneNode or RAC (by default it
will be SI). 12c Multitenant container databases are supported. "gDBClone clone" runs without special impact on the source production database. "gDBClone clone" can also duplicate from a source database full backup, which has less I/O impact on the production source database. In this case it's also possible to limit the clone ("duplicate until") to a given SCN ('-scn') or time ('-time').
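The clone-vs-snapshot timing difference noted earlier can be estimated with simple arithmetic, assuming RMAN active duplication is network-bound. The 100 MB/s link speed below is a hypothetical figure for illustration.

```python
# Rough estimate of clone time for a network-bound RMAN duplication.

def clone_minutes(db_size_gb, throughput_mb_s):
    """Time to copy every database block over the network."""
    return db_size_gb * 1024 / throughput_mb_s / 60

# The 3 GB vs 60 GB comparison above, over a hypothetical 100 MB/s link:
print(round(clone_minutes(3, 100), 2))   # ~0.51 minutes
print(round(clone_minutes(60, 100), 2))  # ~10.24 minutes
# A snapshot's creation time, by contrast, is roughly constant
# regardless of database size, since no data blocks are copied.
```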
On cloning a remote or local database, 3 different ACFS mount points are possible:

-dataacfs <acfs mount point> --> Database datafiles target ACFS storage
-redoacfs <acfs mount point> --> Database redologs target ACFS storage (default dataacfs)
-recoacfs <acfs mount point> --> Database recovery target ACFS storage (default dataacfs)

gDBClone can be used to clone a database to ASM; in that case, 3 different disk groups are possible:

-datadg <ASM diskgroup> --> Database datafiles target ASM disk group
-redodg <ASM diskgroup> --> Database redologs target ASM disk group (default +DATA)
-recodg <ASM diskgroup> --> Database recovery target ASM disk group (default +DATA)

Note: when cloning a database to ASM you cannot leverage the gDBClone database snapshot feature later on.
Starting with gDBClone 3.0.6, it's possible to clone a database to an Exascale vault. On Exascale it's possible to get a database snapshot (sparse clone):

-dataxdg <Exascale vault> --> Database datafiles target Exascale vault

gDBClone Snap

Creates sparse snapshots of the DB to be used for test and development. The source database must be stored on a local Oracle ACFS filesystem or on an Exascale vault. The source database can be SI (Single Instance), RAC OneNode or RAC, primary or standby. The target snapshot database can be SI (Single Instance), RAC OneNode or RAC (by default it will be SI), primary or standby.

gDBClone introduces the "Hot Database Snapshot as Standby" capability. Without impact on the source database and without storage duplication, leveraging the ACFS snapshot copy-on-write (ACFS COW) feature, gDBClone takes a snapshot of a running database; if the "-standby" option is used, the result will be a standby database.

Note: the possibility to get a physical standby
from a running database without downtime is the key to performing a database
upgrade leveraging TLS (Transient Logical Standby).
gDBClone supports snapshots of a running standby database without production impact, leveraging the "Snapshot Standby" database feature. gDBClone also supports "Multi ACFS database file locations" and "database snapshot for different ORACLE_HOMEs".

gDBClone nzdru (near Zero Downtime Rolling Upgrade) (beta)

"Near Zero Downtime Rolling Upgrade" is a feature able to perform a database upgrade with very limited downtime, leveraging TLS (Transient Logical Standby). This feature is still in beta.
- nzdru has resumable capability. Using the "-prompt" command option, you can stop the tool at each macrostep and re-execute the tool if needed.

» MacroStep3 - Temporary database physical standby to transient logical standby

- In order to leverage nzdru you must have a Database Service in place. (Here is an example of an AC (Application Continuity) service for databases PROD and TEMP, where PROD is the target database to migrate and TEMP is the temporary standby, created as "clone" or as "snap", used by gDBClone nzdru to perform the upgrade.)

/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl add service -d PROD -s NZDRU_AC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -e SESSION -P BASIC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -w 5
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -z 60
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -j SHORT
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl start service -d PROD -s NZDRU_AC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl add service -d TEMP -s NZDRU_AC

and the connect string the application may use could be, for example:

(DESCRIPTION=
  (RETRY_COUNT=6)(FAILOVER=ON)
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=exa-scan)(PORT=1521)))
  (CONNECT_DATA=
    (SERVICE_NAME=NZDRU_AC)
    (SERVER=DEDICATED)
  )
)
Note: as part of the nzdru process a restore
point "NZDRU_0000_0001" and a backup control file
(ORACLE_HOME/dbs/NZDRU_0000_<temp standby>_f.f) will be created.
You can use them to restore temp standby back to its original state as
a physical standby, in case the rolling upgrade operation needs to be aborted prior to the first switchover done in MacroStep6.

- "Near Zero Downtime Rolling Upgrade" uses the autoupgrade tool by default. You can specify "-dbua" to use DBUA instead.

Note: autoupgrade will write related logs under "/opt/gDBClone/out/log". You can check the upgrade status by issuing:

cd /opt/gDBClone/out/log/cfgtoollogs/upgrade/auto
python -m SimpleHTTPServer 8000

(on Python 3: python3 -m http.server 8000)

and from a browser check the URL "http://<host>:8000/state.html".

gDBClone Convert

Besides the "clone" and "snap" features, gDBClone can be used to convert a database to a RAC or RAC OneNode database, and to convert a non-container database (non-CDB) to a Pluggable Database (PDB) of a given CDB.

gDBClone ListDBs & DelDB

gDBClone also provides single commands to verify your database environments, showing the parent/child relation in case of snap-of-snap databases, and to delete databases that are no longer in use.

gDBClone ListHomes

With the "listhomes" command option you can check which ORACLE_HOMEs are available. The oracle home name will be used later to "attach" the clone/snap database ("-tdbhome").

gDBClone ListSnaps & DelSnap

You can use gDBClone to list and remove ACFS snapshots.

gDBClone SYSPwF

Using "gDBClone syspwf", an encrypted password file is created. The password it stores is the SYS password of the remote source database. When doing a clone/snap with the "-syspwf <sys password file>" option, gDBClone will use the encrypted password; otherwise it will prompt for it at the command line. If a file named "SYSpasswd_file" is present under the gDBClone home ("/opt/gDBClone"), at clone/snap time gDBClone will read the password from that file; in that case you can omit the "-syspwf" option and no password is requested at the command line.

gDBClone Command Syntax

Usage:
gDBClone clone
    -sdbname <source DB name>
    -sdbscan <source DB Host SCAN name>
     | -sconstring '<source DB connect string>'
     | -sbckloc '<backup location path>' [-time <DD-MON-YYYY_HH24:MI:SS>] [-upgrade [-parallel <number of process>]]
     | -sopcbck -opclib '<opc_lib_path>' -opcpfile '<opc_pfile_path>' [-scn <scn>] [-dbid <DB id>] [-rmanpwf <rman password file>] [-upgrade [-parallel <number of process>]]
     | -catuser <catalog user> [-catpwf <rman catalog password file>] -cstring <connect string> [-scn <scn>] [-dbid <DB ID>] [-sbt1 <sbt params>] [-sbt2 <sbt params>]
    -tdbname <Target Database Name> [-tdomain <Target Database Domain Name>]
    -tdbhome <Target Database Home Name>
    -dataacfs <acfs mount point> [-redoacfs <acfs mount point>] [-recoacfs <acfs mount point>]
     | -datadg <asm data diskgroup> [-redodg <asm redo diskgroup>] [-recodg <asm reco diskgroup>]
     | -dataxdg <exascale data diskgroup> [-redoxdg <exascale redo diskgroup>] [-recoxdg <exascale reco diskgroup>]
    [-sga_max_size <size Mb>] [-sga_target <size Mb>] | [-pfile <file path>]
    [-channels <RMAN channels number>] [-ssize <size Mb>] [-cbset]
    [-sdbport <Source DB SCAN Listener Port>] [-tdbport <Target DB SCAN Listener Port>]
    [-racmod <db type>]
    [-standby [-pmode maxperf|maxavail|maxprot] [-activedg] [-rtapply] [-dgbroker [-dgbpath1 <dgb config path>] [-dgbpath2 <dgb config path>]]]
    [-opc] [-noping] [-resume] [-syspwf <sys password file>]

gDBClone snap -sdbname <source DB name> -tdbname <Target Database Name>

gDBClone nzdru -sdbname <Source DB name>

gDBClone convert -sdbname <source noCDB name>

gDBClone listhomes [-verbose]

gDBClone listdbs [-tree] | [-verbose]

gDBClone listsnaps -dataacfs <acfs_mount_point> [-tree]

gDBClone syspwf -syspwf <SYS encrypted password file path>

gDBClone OPTIONS
-sdbname       Source Database Name
-sdbscan       Source DB Host SCAN Name
-sconstring    Source DB connect string
-sdbport       Source SCAN Listener Port (default 1521)
-sbckloc       Source RMAN Full Backup Location
-tdbname       Target Database Name
-tdomain       Target Database Domain Name
-tdbhome       Target Database Home Name
-tdbport       Target SCAN Listener Port (default 1521)
-dataacfs      Database datafiles target ACFS storage
-redoacfs      Database redologs target ACFS storage (default dataacfs)
-recoacfs      Database recovery target ACFS storage (default dataacfs)
-datadg        Database datafiles target ASM diskgroup
-redodg        Database redologs target ASM diskgroup (default +DATA)
-recodg        Database recovery target ASM diskgroup (default +DATA)
-dataxdg       Database datafiles target Exascale vault
-redoxdg       Database redologs target Exascale vault (default @DATA)
-recoxdg       Database recovery target Exascale vault (default @DATA)
-standby       The clone/snap will be a physical standby database
-pmode         Standby option: maxperf/maxavail/maxprot (default maxperf)
-activedg      Enable Active Data Guard
-rtapply       Enable real time apply
-dgbroker      Enable DG Broker
-dgbpath1      DG Broker remote configuration file path
-dgbpath2      DG Broker remote configuration file path
-racmod        0/1/2 == SINGLE/RACONE/RAC (default 0)
-upgrade       Clone for upgrade
-dbua          NZDRU upgrade using DBUA (default: autoupgrade)
-parallel      Number of upgrade processes (default 4)
-sga_max_size  SGA Max Size (Mb)
-sga_target    SGA Target (Mb)
-pfile         Parameters file
-channels      RMAN allocated channels (default 3)
-ssize         RMAN section size
-cbset         RMAN compressed backupset
-opc           Required option on RACDBaaS environment
-noping        Avoid ping check
-resume        Clone resume
-opclib        Location of the SBT library
-opcpfile      Location of the SBT configuration file
-scn           SCN number or latest SCN from backup to clear datafile fuzziness
-time          Time in the format 'DD-MON-YYYY_HH24:MI:SS'
-catuser       RMAN catalog user
-catpwf        RMAN catalog user encrypted password file
-cstring       RMAN catalog connect string
-sbt1          RMAN SBT Tape Parameters
-sbt2          RMAN SBT Tape Parameters
-dbid          Database ID
-syspwf        SYS encrypted password file
-tsyspwf       SYS encrypted password file
-rmanpwf       RMAN encrypted password file
-walletpwf     Wallet encrypted password file
-check         Will perform a noCDB to PDB conversion pre-check
-copy          Will copy the source noCDB datafiles to the CDB location (default: nocopy)
-path          Path where to copy the dbfiles (default: CDB system dbf path)
-tree          With listdbs, shows the Parent/Snapshot tree
-verbose       Displays OH & version on listdbs
-force         With deldb, unregisters the db
-rmandbg       On cloning, RMAN debug log file location
-version       Show program's version number and exit

gDBClone Installation

gDBClone can be installed using the RPM (RedHat Package Manager) command as follows:

# rpm -i gDBClone-3.0.6-X.<arch>.rpm
(*) X = version number

To update an installed version, issue:

# rpm -Uvh gDBClone-3.0.6-X.<arch>.rpm
The following files are created under '/opt/gDBClone':

# tree /opt/gDBClone
/opt/gDBClone/
├── gDBClone.bin
└── utils
    └── autoupgrade.jar

1 directory, 2 files

gDBClone deinstallation

gDBClone can be removed using the RPM (RedHat Package Manager) command as follows:

# rpm -e gDBClone-3.0.6-X.<arch>
(*) X = version number

Managing gDBClone Privileges and Security with SUDO

The gDBClone command-line utility requires root system privileges for
most actions. You may want to use SUDO as part of your system auditing
and security policy.

Allowing Root User Access Using SUDO

In environments where system administration is handled by a different
group than database administration, or where security is a significant
concern, you may want to limit access to the root user account and
password. SUDO enables system administrators to grant certain users (or
groups of users) the ability to run commands as root, while logging all
commands and arguments as part of your security and compliance protocol.

Caution: Configuring SUDO to allow a user to
perform any operation is equivalent to giving that user root privileges.
Consider carefully if this is appropriate for your security needs.

SUDO Example 1: Allow a User to Perform Any gDBClone Operation

This example shows how to configure SUDO to enable a user to perform any gDBClone operation, by adding lines such as the following to the sudoers file:

Cmnd_Alias GDBCLONE_CMD=/opt/gDBClone/gDBClone *
rcitton ALL = GDBCLONE_CMD

In this example, the user name is rcitton. The setting ALL=GDBCLONE_CMD grants the user rcitton the ability to run all gDBClone commands defined by the command alias.

SUDO Example 2: Allow a User to Perform Only Selected gDBClone Operations

To configure SUDO to allow a user to perform only selected gDBClone operations, add lines such as the following to the sudoers file:

Cmnd_Alias GDBCLONE_CMD=/opt/gDBClone/gDBClone clone
rcitton ALL = GDBCLONE_CMD

SUDO Example 3: Allow a User to Perform Any gDBClone Operation Without a Password Prompt

To configure SUDO to allow a user to perform gDBClone operations without a password prompt, add lines such as the following to the sudoers file:

Cmnd_Alias GDBCLONE_CMD=/opt/gDBClone/gDBClone *
rcitton ALL=(root) NOPASSWD:GDBCLONE_CMD

Limitations & Considerations

gDBClone works on a Grid Infrastructure environment and on Oracle
Restart (SIHA). The source database must be in archivelog mode when
cloning/snapshotting, as a hot clone/snapshot is performed. Multitenant
database (CDB) snapshot is not currently supported if the CDB contains PDBs
created as "snapshot copy".

gDBClone to target ACFS Consideration
"gDBClone clone" to a target ACFS will duplicate the source database into an ACFS snapshot, not into the ACFS "root". A later gDBClone snap expects the source database to be running from an ACFS snapshot.

The idea is to have a single ACFS filesystem storing many different databases. Each database is stored in its own ACFS snapshot, and the "root" ACFS is kept empty to preserve space and avoid duplicates when a snapshot is taken.

Recovery Manager (RMAN) considerations

Starting with Oracle Database 12c Release 1 (12.1), you can use
multisection backup sets to transfer the source files required to
perform active database duplication. Use the ‘-ssize’ clause in the
“gDBClone clone” command to create multisection backup sets that can be
used for active database duplication.

Source database using Transparent Data Encryption (TDE)

Oracle Database supports Transparent Data Encryption (TDE), which functions at the column level, and tablespace encryption. If you are cloning a database with encrypted tablespaces, you must manually copy the keystore to the duplicate database. If the keystore is not an auto-login (SSO) keystore, then you must convert it to an auto-login keystore at the duplicate database. (See also the case study.)

Database clone/snap overwriting SGA parameters

gDBClone supports cloning/snapping a source database while "overwriting" some SGA parameters. You can leverage this feature using the "-pfile <parameters file>" command option. The supported parameters are the following:

aq_tm_processes
archive_lag_target
audit_file_dest
bitmap_merge_area_size
create_bitmap_area_size
db_block_checking
db_block_checksum
db_cache_size
db_file_multiblock_read_count
db_files
db_lost_write_protect
deferred_segment_creation
diagnostic_dest
fast_start_parallel_rollback
filesystemio_options
hash_area_size
java_pool_size
job_queue_processes
large_pool_size
log_archive_format
log_archive_max_processes
log_archive_trace
open_cursors
optimizer_adaptive_plans
optimizer_adaptive_statistics
optimizer_adaptive_features
parallel_execution_message_size
parallel_force_local
parallel_max_servers
parallel_threads_per_cpu
pga_aggregate_target
processes
recovery_parallelism
remote_login_passwordfile
sec_case_sensitive_logon
session_cached_cursors
sessions
sga_max_size
sga_target
shared_pool_reserved_size
shared_pool_size
shared_servers
sort_area_retained_size
sort_area_size
standby_file_management
streams_pool_size
undo_management
undo_retention
tde_configuration
wallet_root

It's also possible to overwrite only sga_max_size and sga_target, using the "-sga_max_size <size Mb>" and "-sga_target <size Mb>" gDBClone command options.

gDBClone usage examples

1. Clone a Remote/Local (physical/standby or backup) database to ACFS (Gold/Image)

gDBClone clone command options:

$ sudo /opt/gDBClone/gDBClone clone -sdbname ORCL \
    -sdbscan exadata316-scan \
    -tdbname GOLD \
    -tdbhome OraDb12201_home1 \
    -dataacfs /u02/app/oracle/oradata/datastore
    [-redoacfs <acfs mount point>] [-recoacfs <acfs mount point>]

You can use "-redoacfs" and/or "-recoacfs" to store redologs/archivelogs in different ACFS filesystems.

Note: If you need to decrease/increase the SGA
footprint, for example if your target local system cannot accommodate the
source SGA, you can leverage the "-sga_max_size" and "-sga_target"
gDBClone clone parameters (both expressed in Mb), or use the more
comprehensive "-pfile" option.
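As a sketch, a parameters file passed with "-pfile" might look like the following, assuming standard init.ora name=value syntax. The file name and all values below are placeholders for illustration, not recommendations; only parameters from the supported list above apply.

```
# hypothetical override.pfile for "-pfile" -- placeholder values
sga_max_size=4096M
sga_target=4096M
processes=300
open_cursors=500
diagnostic_dest=/u01/app/oracle
```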
2. Clone a Remote/Local database to ASM and make it a RAC database

gDBClone clone command options:

# gDBClone.bin clone -sdbname ORCL \
    -sdbscan exadata316-scan \
    -tdbname GOLD \
    -tdbhome OraDb12201_home1 \
    -datadg +MYDATA
    [-redodg <dgname>] [-recodg <dgname>]

Local default ASM diskgroup: +DATA

3. Clone a Multitenant database to ACFS

gDBClone clone command options:

# gDBClone.bin clone -sdbname ORCL \
    -sdbscan exadata316-scan \
    -tdbname GOLD \
    -tdbhome OraDb12201_home1 \
    -dataacfs /u02/app/oracle/oradata/datastore
    [-standby [-pmode maxperf|maxavail|maxprot] [-activedg] [-rtapply]]
    [-racmod <db type>]

Note: no extra options are needed; CDB recognition is automatic.

4. Clone a remote database to Exascale

gDBClone clone command options:

# gDBClone.bin clone \
    -sdbname ORCL.sub11110943180.exascale.oraclevcn.com \
    -sdbscan racpack-scan.racpack.sub11110943180.exascale.exascale \
    -tdbname CLONE \
    -tdomain sub11110943180.exascale.oraclevcn.com \
    -tdbhome DbHome_1 \
    -dataxdg @0nhGLu2f \
    -syspwf syspasswd \
    -racmod 2

5. Snapshot a gold/master database as RAC OneNode

# gDBClone.bin snap -sdbname GOLD \
    -tdbname SNAP \
    -racmod 1

In this case, as the "-tdbhome" option is not provided, the ORACLE_HOME will be the same as the source database's ORACLE_HOME.

Note: If you need to decrease/increase the SGA footprint, for example if your target local system cannot accommodate the source SGA, you can leverage the "-sga_max_size" and "-sga_target" gDBClone snap parameters (both expressed in Mb), or use the more comprehensive "-pfile" option.
6. Convert a database (SI or RAC OneNode) to RAC

# gDBClone.bin convert -sdbname SNAP \
    -racmod 2

Note: "-racmod" 0/1/2 = SINGLE/RACONE/RAC (default 0). Converting RAC OneNode or RAC to single instance is not supported.
7. Convert a non-CDB database to a PDB of a given CDB

# gDBClone.bin convert -sdbname <source noCDB name> \
    -tdbname <target CDB name> \
    [-check] {[-copy] [-path <path>]}

-check: performs a noCDB to PDB conversion pre-check

Note: before the conversion you may want to execute
"gDBClone convert -check" to verify the conversion result in "dry
run" mode. Using "-check", a report with warnings and potential conversion
errors is generated for your review.
8. Delete a database

# gDBClone.bin deldb -tdbname SNAP -force
9. List databases

Given the following scenario, "gDBClone listdbs" will list relations and database type:

Using the "-tree" option

Using the "-verbose" option

10. Create an encrypted SYS password file

# gDBClone syspwf -syspwf /opt/gDBClone/SYSpasswd_file
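The parent/child relation that "gDBClone listdbs -tree" reports for snap-of-snap databases (example 9 above) can be modeled as a simple chain from each snapshot back to its master. This is an illustrative model, not gDBClone internals, and the database names are hypothetical.

```python
# child -> parent mapping; a master copy (full clone) has no parent.
parents = {
    "GOLD": None,      # master copy, stored in its own ACFS snapshot
    "SNAP": "GOLD",    # snapshot of the master
    "SNAP2": "SNAP",   # snap-of-snap
}

def lineage(db):
    """Walk from a database up to its master copy."""
    chain = [db]
    while parents[chain[-1]] is not None:
        chain.append(parents[chain[-1]])
    return chain

print(lineage("SNAP2"))  # ['SNAP2', 'SNAP', 'GOLD']
```

Deleting a parent that still has children would break such a chain, which is why listdbs exposes the relation before a deldb cleanup.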
Case Studies

The gDBClone script provides flexible options and configurations to best fulfill the customer's requirements.

1. Managing a test & dev environment combined with Oracle Data Guard

Many customers deploy Oracle Data Guard as their disaster recovery solution. When you create the standby database on an ACFS file system, you have simple options to create and manage a test & dev environment on the standby cluster. This makes better utilization of the standby resources while enabling a test and dev environment. The following diagram illustrates a test and dev environment that can easily be managed using the gDBClone commands:

Using the "gDBClone snap" function, you can either create multiple snapshots and provision them for different purposes, or create a snapshot to preserve the point-in-time copy and create snaps of snaps to deploy identical copies of databases for test & dev, as you can see from the diagram above. The advantage of this approach is that the master copy of the database (standby) is continuously refreshed by Data Guard, allowing disaster recovery as well as refreshed data for testing. Such a scenario can be set up with a few gDBClone commands:

1) On the target, list the available Oracle Homes:

gDBClone.bin listhomes
Oracle Home Name    Home Location

2) Create an encrypted SYS (source database) password file (optional):

$ sudo /opt/gDBClone/gDBClone.bin syspwf -syspwf /opt/gDBClone/SYS.passwd_file
Please enter the SYS User password:   ## Enter the SYS source database password
SYS password file created as /opt/gDBClone/SYS.passwd_file

Note: if you skip the syspwf file creation, the SYS source database password will be requested at the command line.

3) Clone the source database (DB197H1) on the target as a standby database (CLONE):

$ sudo /opt/gDBClone/gDBClone.bin clone -sdbname DB197H1 \
    -sdbscan exadata316-scan \
    -tdbname CLONE \
    -tdbhome OraDb19700_home1 \
    -dataacfs /u02/app/oracle/oradata/datastore \
    -redoacfs /u03/app/oracle \
    -recoacfs /u03/app/oracle \
    -syspwf /opt/gDBClone/SYS.passwd_file \
    -standby

Output example:

INFO: 2020-07-17 17:56:06: Please check the logfile '/opt/gDBClone/out/log/gDBClone_21040.log' for more details
MacroStep1 - Getting information and validating setup...
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'DB197H1'...
MacroStep5 - Standby setup...

4) Check the target database (CLONE) creation:

gDBClone.bin listdbs
Database Name    Database Type    Database Role    Master/Snapshot    Location/Parent

5) Create a snapshot database S1DB from the source standby CLONE database:

gDBClone.bin snap -sdbname CLONE -tdbname S1DB
Note: in order to get a snapshot database from a source standby database, Flashback is mandatory. If needed, on the source CLONE standby database execute:

RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE FLASHBACK ON;
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Note: without the "-tdbhome <Target Database Home Name>" parameter, the database home will be the same as the source database oracle home, in this example "OraDb19700_home1".

Output example:

INFO: 2020-07-17 18:07:59: Please check the logfile '/opt/gDBClone/out/log/gDBClone_23972.log' for more details
MacroStep1 - Getting information and validating setup...
Please enter the 'SYS' User password for the database CLONE:
MacroStep2 - Getting database snapshot...

6) Check the target database (S1DB) creation:

gDBClone.bin listdbs
Database Name    Database Type    Database Role    Master/Snapshot    Location/Parent

Note: you can also verify the parent relationship by running:

gDBClone.bin listdbs -tree
Parent    Child
----------------------------------------------------------------

7) Create a snapshot database S2DB as RAC (Real Application Clusters) from the source standby CLONE database:

gDBClone.bin snap -sdbname CLONE \
    -tdbname S2DB \
    -racmod 2

Output:

INFO: 2020-07-17 18:26:13: Please check the logfile '/opt/gDBClone/out/log/gDBClone_11429.log' for more details
MacroStep1 - Getting information and validating setup...
Please enter the 'SYS' User password for the database CLONE:
MacroStep2 - Getting database snapshot...
MacroStep3 - Converting clone database 'S2DB' to cluster mode...
INFO: 2020-07-17 18:40:08: Starting database 'S2DB'

8) Check the target database (S2DB) creation:

Database Name    Database Type    Database Role    Master/Snapshot    Location/Parent
-------------    -------------    ----------------    ---------------    ---------------
CLONE            SINGLE           PHYSICAL_STANDBY    Master             /u02/app/oracle/oradata/datastore/.ACFS/snaps/
S1DB             SINGLE           PRIMARY             Snapshot           CLONE
S2DB             RAC              PRIMARY             Snapshot           CLONE

2. Creating a database clone using RMAN backupsets

Databases may be cloned using RMAN backupsets from the production server to minimize the overhead. The backupsets may be exported using the NFS network protocol as the source for the gDBClone command.
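As a sketch, such an NFS export on the source (backup) server might look like the line below; the path and client name are placeholders, and the "insecure" option is the one this case study calls out as required for Oracle Database 12c tools.

```
# /etc/exports on the source server -- hypothetical path and client
/mnt/backup  testcluster-node1(ro,insecure)
```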
The backup sets may be mounted via NFS on the test cluster. In this case, the NFS mount must be exported using the "insecure" export option on the source server for Oracle Database 12c tools to access the NFS mount properly. The following is an example of an RMAN source database full backup command:

RMAN> RUN
{
  ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/mnt/backup/ORCL/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/mnt/backup/ORCL/control_%U';
  BACKUP SPFILE FORMAT '/mnt/backup/ORCL/spfile_%U';
}

The gDBClone command (example) is as follows:

gDBClone.bin clone -sdbname DB197H1 \
                   -sbckloc /mnt/backup/ORCL/ \
                   -tdbname GOLD \
                   -tdbhome OraDb19700_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore \
                   -redoacfs /u03/app/oracle \
                   -recoacfs /u03/app/oracle \
                   -syspwf SYS.passwd

output:

INFO: 2020-07-18 10:19:17: Please check the logfile '/opt/gDBClone/out/log/gDBClone_13551.log' for more details
MacroStep1 - Getting information and validating setup...
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'DB197H1'...
INFO: 2020-07-18 10:25:01: Starting database 'GOLD'
Note: if LOG_ARCHIVE_DEST_2 is set on the source database backup, the clone may fail with error
"ORA-16019: cannot use LOG_ARCHIVE_DEST_2 with LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST"
Unset LOG_ARCHIVE_DEST_2 on the source database (for example: ALTER SYSTEM SET log_archive_dest_2='' SCOPE=BOTH;), take a new full database backup and retry the clone with "-sbckloc".

Check for the new cloned database "GOLD":

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
GOLD          SINGLE        PRIMARY       Master          /u02/app/oracle/oradata/datastore/.ACFS/snaps/

3. Creating database clone from Oracle Cloud Database Backup Service (OCI)

Databases can be cloned from the Oracle Cloud Database Backup Service. The Oracle Database Cloud Backup Module is required. The gDBClone command (example) is as follows:

gDBClone.bin clone -sdbname DB197H1 \
                   -sopcbck \
                   -opclib /opt/bck2cloud/oci/libopc \
                   -opcpfile /opt/bck2cloud/oci/bck2cloud.conf \
                   -dbid 3206634329 \
                   -tdbname DB197H1 \
                   -tdbhome OraDb19700_home1 \
                   -datadg +DATA \
                   -redodg +RECO \
                   -recodg +RECO \
                   -channels 6 \
                   -rmanpwf RMAN.password \
                   -syspwf SYS.passwd_file

"-opclib" specifies where the "Oracle Database Cloud Backup Module" library is located. The content of the "Oracle Database Cloud Backup Module" configuration file is something like:

OPC_HOST=https://objectstorage.<region>.oraclecloud.com/n/<tenant name>
OPC_WALLET='LOCATION=file:<wallet location> CREDENTIAL_ALIAS=alias_oci'
OPC_CONTAINER=<bucket name>
OPC_COMPARTMENT_ID=<compartment id>
OPC_AUTH_SCHEME=BMC

gDBClone command output:

INFO: 2020-07-18 12:46:16: Please check the logfile '/opt/gDBClone/out/log/gDBClone_32629.log' for more details
MacroStep1 - Getting information and validating setup...
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'DB197H1' from Cloud Storage Backup...
INFO: 2020-07-18 12:55:11: Starting database 'DB197H1'

4. Creating database clone from RMAN full backup and upgrade to 19c

In the following scenario we clone an 11g database from an RMAN full backup, upgrade it to 19c and make it a RAC database. Steps to clone an 11g database to 19c:
gDBClone.bin clone -sdbname PROD \
                   -sbckloc /u01/BACKUP/PROD/ \
                   -tdbname PRODUPG \
                   -tdbhome OraDb19700_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore \
                   -redoacfs /u03/app/oracle \
                   -recoacfs /u03/app/oracle \
                   -upgrade -parallel 8 \
                   -racmod 2 \
                   -syspwf /opt/gDBClone/SYS.passwd

output:

INFO: 2020-07-18 14:24:58: Please check the logfile '/opt/gDBClone/out/log/gDBClone_30868.log' for more details
MacroStep1 - Getting information and validating setup...
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'PROD'...
MacroStep4 - Converting clone database 'PRODUPG' to cluster mode...
INFO: 2020-07-18 15:10:01: Starting database 'PRODUPG'

5. Clone 11g Database from ASM to ACFS keeping the source running
Prior to Oracle 12c, moving datafiles was always an offline task. Using gDBClone you can "move" (clone) a database from ASM to ACFS while keeping it running. If you need to preserve transactions during the cloning operation, you may consider using Oracle GoldenGate.

gDBClone clone -sdbname ORCL \
               -sdbscan exadata316-scan \
               -tdbname CLONE \
               -tdbhome OraDb11204_home1 \
               -dataacfs /u02/app/oracle/oradata/datastore \
             [ -redoacfs <acfs mount point> ] \
             [ -recoacfs <acfs mount point> ] \
             [ -standby [-pmode maxperf|maxavail|maxprot] [-activedg] [-rtapply] ] \
             [ -racmod <db type> ]

6. Create a snapshot database as RAC from a running database (GOLD clone)

Once you have a clone gold image (from a backupset or from a running primary/physical standby database) of your production database, you can use the 'gDBClone snap' function to create multiple snapshots for different purposes. The source clone can be SI, RAC OneNode or RAC, and the target snapshot database can be SI, RAC OneNode or RAC ("-racmod"). You can also convert the snapshot database later, using the "convert" option.

1) Create a snapshot database SDB from the source GOLD clone database:

gDBClone.bin snap -sdbname GOLD \
                  -tdbname SDB \
                  -racmod 1

output:

MacroStep1 - Getting information and validating setup...
INFO: 2017-09-25 01:02:55: Validating environment...
INFO: 2017-09-25 01:02:55: Superuser usage check
INFO: 2017-09-25 01:02:55: Clusterware running check
INFO: 2017-09-25 01:02:55: Minimun crs activeversion check
INFO: 2017-09-25 01:02:55: Database 'GOLD' existence check
INFO: 2017-09-25 01:02:55: Database 'GOLD' running check
INFO: 2017-09-25 01:02:57: Getting ORACLE_BASE path from orabase
INFO: 2017-09-25 01:02:57: Checking if target database name SDB is a valid name
INFO: 2017-09-25 01:02:57: Checking database 'GOLD' connectivity
INFO: 2017-09-25 01:03:07: Checking whether the database 'GOLD' is in ACFS snapshot
INFO: 2017-09-25 01:03:07: Checking source database 'GOLD' and target dbhome version
INFO: 2017-09-25 01:03:11: Checking if target database 'SDB' exists
INFO: 2017-09-25 01:03:11: Checking registered instance 'SDB'
INFO: 2017-09-25 01:03:22: Checking if SDB exists as snapshot in '/u02/app/oracle/oradata/datastore'
INFO: 2017-09-25 01:03:22: Checking if source database GOLD is snapable
INFO: 2017-09-25 01:03:27: ...Checking whether the database 'GOLD' is entirely on ACFS
INFO: 2017-09-25 01:03:32: ...Checking whether the database 'GOLD' is a primary/physical standby database.
INFO: 2017-09-25 01:03:34: ...Checking whether the database 'GOLD' is in READ WRITE mode
INFO: 2017-09-25 01:03:38: ...Checking whether the database 'GOLD' is a CDB
INFO: 2017-09-25 01:03:43: ...Checking whether the database 'GOLD' is running as backup mode
INFO: 2017-09-25 01:03:48: ...Checking whether the database 'GOLD' is running in archivelog mode
INFO: 2017-09-25 01:03:52: ...Checking if all datafiles are available
INFO: 2017-09-25 01:03:57: ...Checking if there are OFFLINE datafiles
SUCCESS: 2017-09-25 01:04:02: Environment validation complete
MacroStep2 - Getting database snapshot...

2) Check the new database created:

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
GOLD          SINGLE        PRIMARY       Master          /u02/app/oracle/oradata/datastore/.ACFS/snaps/
SDB           RACOneNode    PRIMARY       Snapshot        GOLD

7. Create a clone RAC database from a "remote" standby database

Given a running "remote" standby database, it is possible to clone it with gDBClone. The source "remote" standby can be SI, RAC OneNode or RAC, and the clone database can be SI, RAC OneNode or RAC ("-racmod") running from ASM or ACFS. You can also convert the snapshot database later using the "convert" option.

1) With gDBClone on the remote host, check the source DB role:

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ---------------- --------------- ---------------
STDBY         RAC           PHYSICAL_STANDBY n/a             ASM

2) Run gDBClone from the target host:

gDBClone.bin clone -sdbname STDBY \
                   -sdbscan remotehost-scan \
                   -tdbname CLONE \
                   -tdbhome OraDb19700_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore \
                 [ -redoacfs <acfs mount point> ] \
                 [ -recoacfs <acfs mount point> ] \
                 [ -standby [-pmode maxperf|maxavail|maxprot] [-activedg] [-rtapply] ] \
                 [ -racmod <db type> ]

8. Create a snapshot RAC database from a "local" standby database

Given a running standby database, it is possible to take a snapshot of it with gDBClone. The source standby can be SI, RAC OneNode or RAC, and the snapshot database can be SI, RAC OneNode or RAC ("-racmod"). You can also convert the snapshot database later using the "convert" option.

1) Check the source standby database:

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ---------------- --------------- ---------------
STDBY         SINGLE        PHYSICAL_STANDBY Master          /u02/app/oracle/oradata/datastore/.ACFS/snaps/

2) Create a snapshot database SNAP from the source standby STDBY database:

MacroStep1 - Getting information and validating setup...
INFO: 2017-09-25 01:53:12: Validating environment...
INFO: 2017-09-25 01:53:12: Superuser usage check
INFO: 2017-09-25 01:53:12: Clusterware running check
INFO: 2017-09-25 01:53:12: Minimun crs activeversion check
INFO: 2017-09-25 01:53:12: Database 'STDBY' existence check
INFO: 2017-09-25 01:53:12: Database 'STDBY' running check
INFO: 2017-09-25 01:53:14: Getting ORACLE_BASE path from orabase
INFO: 2017-09-25 01:53:14: Checking if target database name SNAP is a valid name
INFO: 2017-09-25 01:53:14: Checking database 'STDBY' connectivity
INFO: 2017-09-25 01:53:23: Checking whether the database 'STDBY' is in ACFS snapshot
INFO: 2017-09-25 01:53:23: Checking source database 'STDBY' and target dbhome version
INFO: 2017-09-25 01:53:28: Checking if target database 'SNAP' exists
INFO: 2017-09-25 01:53:28: Checking registered instance 'SNAP'
INFO: 2017-09-25 01:53:43: Checking if SNAP exists as snapshot in '/u02/app/oracle/oradata/datastore'
INFO: 2017-09-25 01:53:43: Checking if source database STDBY is snapable
INFO: 2017-09-25 01:53:47: ...Checking whether the database 'STDBY' is entirely on ACFS
INFO: 2017-09-25 01:53:52: ...Checking whether the database 'STDBY' is a primary/physical standby database.
INFO: 2017-09-25 01:53:54: ...Checking whether the Physical Standby database 'STDBY' is in MOUNT mode
INFO: 2017-09-25 01:53:59: ...Checking flashback on database 'STDBY'
INFO: 2017-09-25 01:54:03: ...Checking whether the database 'STDBY' is a CDB
INFO: 2017-09-25 01:54:08: ...Checking whether the database 'STDBY' is running in archivelog mode
INFO: 2017-09-25 01:54:13: ...Checking if all datafiles are available
SUCCESS: 2017-09-25 01:54:18: Environment validation complete
MacroStep2 - Getting database snapshot...
MacroStep3 - Converting clone database 'SNAP' to cluster mode...
SUCCESS: 2017-09-25 02:03:58: Successfully created clone database 'SNAP'

3) Check the new database created:

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ---------------- --------------- ---------------
STDBY         SINGLE        PHYSICAL_STANDBY Master          /u02/app/oracle/oradata/datastore/.ACFS/snaps/
SNAP          RAC           PRIMARY          Snapshot        STDBY

gDBClone.bin listdbs -tree
Parent Child
----------------------------------------------------------------

9. Clone a database from RMAN full backup to ACFS as standby Database
Create a clone database NOWK from an RMAN backupset and make it a standby of the source database ORCL:

gDBClone.bin clone -sdbname ORCL \
                   -sbckloc /mnt/ORCL/bck \
                   -sdbscan exa316c1n1-scan \
                   -tdbname NOWK \
                   -tdbhome OraDb12102_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore \
                   -standby

Output:

MacroStep1 - Getting information and validating setup...
INFO: 2017-10-02 08:22:42: Validating environment
INFO: 2017-10-02 08:22:42: Checking superuser usage
INFO: 2017-10-02 08:22:42: Checking ping to host 'lac4-scan'
INFO: 2017-10-02 08:22:42: Checking if target database name 'NOWK' is a valid name
INFO: 2017-10-02 08:22:42: Checking if target database home 'OraDb11204_home1' exists
INFO: 2017-10-02 08:22:42: Getting ORACLE_BASE path from orabase
INFO: 2017-10-02 08:22:42: Checking if target database 'NOWK' exists
INFO: 2017-10-02 08:22:42: Checking 'NOWK' snapshot existence on '/u02/app/oracle/oradata/datastore'
INFO: 2017-10-02 08:22:42: Checking registered instance 'NOWK'
INFO: 2017-10-02 08:22:47: Checking listener on 'lac4-scan:1521'
INFO: 2017-10-02 08:22:47: Checking ACFS command options
INFO: 2017-10-02 08:22:47: Checking if '/u02/app/oracle/oradata/datastore' is an ACFS file system
INFO: 2017-10-02 08:22:47: Checking FLASHBACK mode
WARNING: 2017-10-02 08:22:47: Source database 'ORCL' is not in FLASHBACK mode
INFO: 2017-10-02 08:22:47: Checking LOG_ARCHIVE_DEST settings
SUCCESS: 2017-10-02 08:22:47: Environment validation complete
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'ORCL'...
MacroStep5 - Standby setup...
SUCCESS: 2017-10-02 08:27:11: Successfully created clone "NOWK" database as standby

Check the standby creation:

Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ---------------- --------------- ---------------
NOWK          SINGLE        PHYSICAL_STANDBY Master          /u02/app/oracle/oradata/datastore/.ACFS/snaps/
Note1: for a clone as standby from backup-based duplication, you must have the "controlfile for standby" included in the backup.

Note2: with backup-based duplication you must copy the password file used on the primary to the standby, for Oracle Data Guard to ship logs.
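As a sketch of Note1, the full-backup RUN block shown in scenario 2 could be extended with a standby controlfile piece (the channel name, paths and format strings are examples, not from this document):

```
RMAN> RUN
{
  ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/mnt/backup/ORCL/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/mnt/backup/ORCL/stbyctl_%U';
  BACKUP SPFILE FORMAT '/mnt/backup/ORCL/spfile_%U';
}
```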
10. Clone a database from OCDBS (Oracle Cloud Database Backup Service/Storage) as RAC standby Database
Create a clone database from OCDBS (Oracle Cloud Database Backup Service/Storage) and make it a standby of the source database:

gDBClone.bin clone -sdbname ORCL \
                   -sdbscan oda404-scan \
                   -sopcbck \
                   -opclib '/opt/bck2cloud/OPC/libopc' \
                   -opcpfile '/opt/bck2cloud/OPC/bck2cloud.conf' \
                   -tdbname ORCL \
                   -tdbhome OraDb11204_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore \
                   -channels 6 \
                   -racmod 2 \
                   -standby \
                   -rmanpwf RMAN.password \
                   -syspwf SYS.passwd_file
With backup-based duplication you must copy the password file used on the primary to the standby, for Oracle Data Guard to ship logs.
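As a sketch of the note above, the primary's password file conventionally lives at $ORACLE_HOME/dbs/orapw&lt;SID&gt; and can be copied to the same location on the standby host. The SID, Oracle Home path and hostname below are examples only, not taken from this document:

```shell
# Example values only; substitute your real SID, Oracle Home and hosts.
SID=ORCL
OH=/u01/app/oracle/product/11.2.0.4/dbhome_1
PWFILE="$OH/dbs/orapw$SID"
echo "$PWFILE"
# From the primary host, copy the file to the standby host (hostname is hypothetical):
# scp "$PWFILE" oracle@standby-host:"$OH/dbs/"
```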
11. Database upgrade using Transient Logical Standby (TLS) (manual mode)

Thanks to the "Hot Database Snapshot as Standby" capability, leveraging the ACFS snapshot "redirect-on-write" feature, gDBClone can take a snapshot of a running database as a standby without impact on the source production database and without storage duplication. Having a physical standby from a running database, the steps could be as follows:

1. gDBClone snap as standby --> no downtime, minimal storage duplication (depending on source database production activity)

Note: physru is a script to minimize downtime and simplify a database rolling upgrade using a physical standby database. A 'transient logical' herein refers to the physical standby database that has been temporarily converted to a transient logical standby database for the purpose of executing the upgrade. Also see the MAA Best Practice "Oracle Database Rolling Upgrades" for additional information describing this process, and Oracle11g Data Guard: Database Rolling Upgrade Shell Script (Doc ID NOTE:949322.1)

12. Database upgrade with limited downtime using the nzdru feature (TLS automatic) (beta)

This approach minimizes the downtime required for an upgrade by leveraging the transient logical standby rolling upgrade process. "gDBClone nzdru" greatly reduces the complexity of executing a rolling database upgrade by automating the upgrade steps. Although the upgrade begins with a physical standby database, the transient logical standby process uses SQL Apply to take redo generated by a database running a lower Oracle release, and apply the redo to a standby database running a higher Oracle release. When the upgrade process is complete, both the primary database and its physical standby database are operating at the new Oracle Database release.

The steps:

1. Using gDBClone (clone/snap), build a physical standby (as snap or as clone; in case of clone the standby can be on ACFS or ASM), example:

gDBClone.bin snap -sdbname PROD -tdbname TEMP -standby
2. Create the database service if not created yet (here an example of an AC, Application Continuity, service):

/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl add service -d PROD -s NZDRU_AC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -e SESSION -P BASIC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -w 5
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -z 60
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl modify service -d PROD -s NZDRU_AC -j SHORT
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl start service -d PROD -s NZDRU_AC
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl add service -d TEMP -s NZDRU_AC

3. Apply the nzdru feature:

gDBClone.bin nzdru -sdbname PROD -tdbname TEMP \
-udbhome OraDb19000_home1 -service NZDRU_AC
By default "gDBClone nzdru" upgrades the database using "autoupgrade". If you need to perform the database upgrade using "dbua", use the "-dbua" option.
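For instance, the same nzdru run forced through DBUA would simply add that flag (a sketch reusing the database and service names from the steps above):

```
gDBClone.bin nzdru -sdbname PROD -tdbname TEMP \
                   -udbhome OraDb19000_home1 -service NZDRU_AC -dbua
```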
Complete execution example output:

gDBClone.bin listdbs -verbose
Database Name Database Type Database Role    Oracle Home Name Oracle Home Location                      Home Version
------------- ------------- ---------------- ---------------- ----------------------------------------- ---------------
TEMP          SINGLE        PHYSICAL_STANDBY OraDb11204_home1 /u01/app/oracle/product/11.2.0.4/dbhome_1 11.2.0.4.161018
PROD          SINGLE        PRIMARY          OraDb11204_home1 /u01/app/oracle/product/11.2.0.4/dbhome_1 11.2.0.4.161018

gDBClone.bin nzdru -sdbname PROD -tdbname TEMP -udbhome OraDb19700_home1 -service NZDRU_AC
INFO: 2017-06-01 06:17:08: Please check the logfile '/opt/gDBClone/out/log/gDBClone_93408.log' for more details
MacroStep3 - Stage 2 Create transient logical standby from existing physical standby...
MacroStep4 - Logical standby upgrade...
MacroStep5 - Stage 3 Validate upgraded transient logical standby...
MacroStep6 - Stage 4 Transient logical standby switch role...
MacroStep7 - Stage 5 Flashback former primary to pre-upgrade restore point and convert to physical...
INFO: 2017-06-01 06:49:37: [5-4] Creating checkpoint NZDRU_0504 on database 'TEMP'
MacroStep8 - Stage 6 Former primary recovers...
MacroStep9 - Stage 7 Switch back...
INFO: 2017-06-01 06:53:13: [7-3] Creating checkpoint NZDRU_0703 on database 'TEMP'
MacroStep10 - Stage 8 Statistics...

gDBClone.bin listdbs -verbose
Database Name Database Type Database Role    Oracle Home Name Oracle Home Location                      Home Version
------------- ------------- ---------------- ---------------- ----------------------------------------- ------------
TEMP          SINGLE        PHYSICAL_STANDBY OraDb19700_home1 /u01/app/oracle/product/19.7.0.0/dbhome_1 19.7.0.0.0
PROD          SINGLE        PRIMARY          OraDb19700_home1 /u01/app/oracle/product/19.7.0.0/dbhome_1 19.7.0.0.0

13. Clone a database encrypted with Transparent Data Encryption (TDE)
In this scenario, we want to use gDBClone to clone/snap a database encrypted with Transparent Data Encryption (TDE). Before using gDBClone, you must manually copy the keystore to the duplicate database. If the keystore is not an auto-login (SSO) keystore, you must convert it to an auto-login keystore at the duplicate database. In this example ORCL is the source encrypted database, ENCORCL the cloned database and SNAPENC the snapshot encrypted database.

1) Copy the wallet file (ewallet.p12) from the source database server to the target clone database server. You can check the wallet file location on the source database in the sqlnet.ora file of the source database ORACLE_HOME:

$ mkdir -p /u01/app/oracle/admin/ENCORCL/tde_wallet
$ scp oracle@prod-serv:/u01/app/oracle/admin/ORCL/tde_wallet/ewallet.p12 /u01/app/oracle/admin/ENCORCL/tde_wallet/

2) Modify the sqlnet.ora file in the target clone database ORACLE_HOME to reflect the location of the wallet file:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/ENCORCL/tde_wallet)
    )
  )

In an Oracle Grid Infrastructure environment, add the TNS_ADMIN and ORACLE_UNQNAME initialization parameters to both the listener.ora file and the static listener for Data Guard Broker. The listener must be stopped and restarted after these changes are made. The following is an example of setting the TNS_ADMIN and ORACLE_UNQNAME parameters:

(SID_DESC=(GLOBAL_DBNAME=ENCORCL.example.com)
  (ORACLE_HOME=/u01/app/oracle/product/19.7.0.0/dbhome_1)
  (SID_NAME=ENCORCL)
  (ENVS="TNS_ADMIN=/u01/app/oracle/admin/ENCORCL/tde_wallet")
  (ENVS="ORACLE_UNQNAME=ENCORCL"))

3) Invoke the orapki utility on the target clone database server to make the wallet auto-login:

$ orapki wallet create -wallet /u01/app/oracle/admin/ENCORCL/tde_wallet \
-pwd "Welcome_1" \ -auto_login 4) You can now use gDBClone to clone the source encrypted database gDBClone.bin clone -sdbname ORCL \
                   -sdbscan exadata316-scan \
                   -tdbname ENCORCL \
                   -tdbhome OraDb19700_home1 \
                   -dataacfs /u02/app/oracle/oradata/datastore

If you also need a database snapshot:

gDBClone.bin snap -sdbname ENCORCL -tdbname SNAPENC
The required wallet will be created under the expected location "/u01/app/oracle/admin/SNAPENC/tde_wallet".
You can avoid the "WALLET" password request if "/opt/gDBClone/WALLETpasswd_file" is present. You can create such a file with the wallet password by issuing:

gDBClone.bin syspwf -syspwf WALLET.passwd_file
Note: if you are going to use $ORACLE_UNQNAME in sqlnet.ora, for example:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/ENCORCL/$ORACLE_UNQNAME)
    )
  )

you should also set that environment variable for the database using the "srvctl" command:

$ srvctl setenv database -db db_unique_name -env "ORACLE_UNQNAME=val"
$ srvctl stop database -db db_unique_name
$ srvctl start database -db db_unique_name

14. Using gDBClone on ODA HA,S,M,L (Enterprise Edition)

Note: on ODA, before taking database snapshots with gDBClone you must first get a source database "clone" on ACFS because, by default, ODA databases are stored on the ACFS "root" filesystem and not on an ACFS filesystem snapshot.
1) Check the source database on the source host (or on the local ODA):

gDBClone.bin listdbs -verbose
Database Name Database Type Database Role Oracle Home Name Oracle Home Location                      Home Version Master/Snapshot Location/Parent
------------- ------------- ------------- ---------------- ----------------------------------------- ------------ --------------- ---------------
DB122H1       RAC           PRIMARY       OraDb12201_home1 /u01/app/oracle/product/12.2.0.1/dbhome_1 12.2.0.1.0   n/a             ASM

2) Make a new ODA "ACFS DB storage" for the clone database issuing:

# odacli create-dbstorage --dbname CLONE --databaseUniqueName CLONE --dbstorage ACFS
{ "jobId" : "d317a3de-99a1-4ba4-842f-bbd4ec4763f6", "status" : "Created", "message" : null, "reports" : [ ], "createTimestamp" : "July 20, 2020 10:03:34 AM CEST", "resourceList" : [ ], "description" : "Database storage service creation with db name: CLONE", "updatedTime" : "July 20, 2020 10:03:34 AM CEST" } 3) Get a clone database from the source DB running gDBClone from target ODA: ./gDBClone.bin clone \
    -sdbname DB122H1 \
    -sdbscan rcvm-scan \
    -tdbname CLONE \
    -tdbhome OraDB12201_home1 \
    -dataacfs /u02/app/oracle/oradata/CLONE \
    -recoacfs /u03/app/oracle/fast_recovery_area \
    -redoacfs /u04/app/oracle/redo/CLONE

output:

MacroStep1 - Getting information and validating setup...
Please enter the 'SYS' User password for the database DB122H1:
MacroStep2 - Setting up clone environment...
MacroStep3 - Cloning database 'DB122H1'...
INFO: 2020-07-20 11:20:37: Starting database 'CLONE'

4) Check the new 'clone' database:

gDBClone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
CLONE         SINGLE        PRIMARY       Master          /u02/app/oracle/oradata/CLONE/.ACFS/snaps/

Note how the database is created under the ACFS snapshot and not on the dbstorage root filesystem:

# tree /u02/app/oracle/oradata/CLONE
/u02/app/oracle/oradata/CLONE
└── lost+found

1 directory, 0 files
You could also register the new clone database with the dcs-agent so it can be managed by the dcs-agent stack. To do so you must first de-register the clone DB from the cluster (where gDBClone registered it), as "odacli register-database" will fail otherwise. The steps are as follows:

1) Stop the database (as oracle user):

$ srvctl stop database -d CLONE
2) De-register the database (as oracle user) from the cluster:

$ srvctl remove database -d CLONE

3) Start up the database using SQL*Plus (as oracle user):

$ export ORACLE_SID=CLONE
$ sqlplus / as sysdba
SQL> startup

4) Register the database on DCS (as root user):

odacli register-database --dbclass OLTP --dbshape odb1 --dbtype SI --servicename CLONE
{
  "jobId" : "a8298f61-2821-4aff-8da1-b5614e34d786",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "July 20, 2020 11:31:44 AM CEST",
  "resourceList" : [ ],
  "description" : "Database service registration with db service name: CLONE",
  "updatedTime" : "July 20, 2020 11:31:44 AM CEST"
}

Check the new registered database (from the DCS perspective):

# odacli list-databases
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID

Check the new registered database (from the gDBClone perspective):

gDBClone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent
Note: on registering the database on ODA DCS, db_create_file_dest will be updated to the target ACFS root filesystem (in this example '/u02/app/oracle/oradata/CLONE'). You should instead set it to the proper ACFS snapshot path, issuing (example):

alter system set db_create_file_dest='/u02/app/oracle/oradata/CLONE/.ACFS/snaps/CLONE' scope=both;
Note: on registering the database on ODA DCS, db_recovery_file_dest will be updated to '/u03/app/oracle/fast_recovery_area', while gDBClone expects db_recovery_file_dest set to <path>/<DB name>, for example '/u03/app/oracle/fast_recovery_area/CLONE'. You should set db_recovery_file_dest to the expected value, issuing (example):

ALTER SYSTEM SET db_recovery_file_dest='/u03/app/oracle/fast_recovery_area/CLONE' SCOPE=SPFILE;
You can now leverage the gDBClone snapshot feature:

gDBClone.bin snap -sdbname CLONE -tdbname SNAP
Note: DCS-Agent does not support databases created using the “gDBClone snap” feature.
15. Migrate a database from OCIC to OCI using gDBClone

In this scenario, we want to use gDBClone to migrate a database running on Oracle Cloud Infrastructure Classic (OCIC) to Oracle Cloud Infrastructure (OCI).
Requirements
col PROPERTY_NAME format a30
col VALUE format a5
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  FROM DATABASE_PROPERTIES
 WHERE PROPERTY_NAME LIKE 'DST_%'
 ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE

In this example, time zone version 28 must be present in the OCI $ORACLE_HOME/oracore/zoneinfo:

[root@OCI ~]# ls -l /u01/app/oracle/product/12.1.0.2/dbhome_1/oracore/zoneinfo/*28*
-rw-r--r-- 1 oracle oinstall  53922 May 17 03:55 /u01/app/oracle/product/12.1.0.2/dbhome_1/oracore/zoneinfo/readme_28.txt
-rw-r--r-- 1 oracle oinstall 782585 May 17 03:55 /u01/app/oracle/product/12.1.0.2/dbhome_1/oracore/zoneinfo/timezlrg_28.dat
-rw-r--r-- 1 oracle oinstall 341401 May 17 03:55 /u01/app/oracle/product/12.1.0.2/dbhome_1/oracore/zoneinfo/timezone_28.dat

Steps

1. Make a new "DB storage" for the target database on the OCI target BM, issuing:

[root@OCI ~]# dbcli create-dbstorage --dbname RC12COCI --dbstorage ACFS
{ "jobId" : "2275567f-d8ae-46a0-bd76-0745d3c6a5f4", "status" : "Created", "message" : null, "reports" : [ ], "createTimestamp" : "October 04, 2017 11:17:03 AM UTC", "resourceList" : [ ], "description" : "Database storage service creation with db name: RC12COCI", "updatedTime" : "October 04, 2017 11:17:03 AM UTC" } 2. Setup the wallet file on OCI targert BM - Copy the wallet file (ewallet.p12) from the OCIC database server to
target OCI database server. You can check the wallet file location on
source database from sqlnet.ora file of the source database $ mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/RC12COCI
$ scp oracle@ocic:/u01/app/oracle/admin/RC12COCI/tde_wallet/ewallet.p12 oracle@oci:/opt/oracle/dcs/commonstore/wallets/tde/RC12COCI/ $ chmod 600 /opt/oracle/dcs/commonstore/wallets/tde/RC12COCI/ewallet.p12 - Modify sqlnet.ora file on target BMC database ORACLE_HOME to reflect the location of the wallet file: ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /opt/oracle/dcs/commonstore/wallets/tde/RC12COCI)
    )
  )

- Invoke the orapki utility on the target clone database server to make the wallet auto-login (the password is an example):

$ orapki wallet create -wallet /opt/oracle/dcs/commonstore/wallets/tde/RC12COCI \
-pwd "Welcome_1" \ -auto_login 3. Execugte gDBClone - Create the password file for gDBClone: [root@OCI ~]# gDBClone.bin syspwf -syspwf /opt/gDBClone/SYS.passwd
Please enter the SYS User password :
Please re-enter the SYS user password :
SYS password file created as /opt/gDBClone/SYS.passwd

- Run gDBClone as follows (you can check the sdbname with the "lsnrctl service" command):

[root@OCI ~]# nohup /opt/gDBClone/gDBClone.bin clone \
    -sdbname RC12C.gboracle88888.oraclecloud.internal \
    -sdbscan 140.85.10.81 \
    -tdbname RC12COCI \
    -tdbhome OraDB12102_home1 \
    -dataacfs /u02/app/oracle/oradata/RC12COCI \
    -redoacfs /u03/app/oracle/redo \
    -recoacfs /u03/app/oracle/fast_recovery_area \
    -oci \
    -syspwf /opt/gDBClone/SYS.passwd &

MacroStep1 - Getting information and validating setup...
INFO: 2017-05-23 08:59:42: Validating environment
INFO: 2017-05-23 08:59:42: Checking superuser usage
INFO: 2017-05-23 08:59:42: Checking if target database name 'RC12CBMC' is a valid name
INFO: 2017-05-23 08:59:42: Checking if target database home 'OraDB12102_home1' exists
WARNING: 2017-05-23 08:59:42: ORACLE_BASE is not set
INFO: 2017-05-23 08:59:42: Got Oracle Base from orabase
INFO: 2017-05-23 08:59:42: Checking if target database 'RC12CBMC' exists
INFO: 2017-05-23 08:59:43: Checking 'RC12CBMC' snapshot existence on '/u02/app/oracle/oradata/RC12CBMC'
INFO: 2017-05-23 08:59:43: Checking registered instance 'RC12CBMC'
INFO: 2017-05-23 08:59:43: Checking listener on 'rc12c:1521'
INFO: 2017-05-23 08:59:43: Checking source and target database version
INFO: 2017-05-23 08:59:49: Checking source log mode
INFO: 2017-05-23 08:59:52: Checking Flash Cache setting
INFO: 2017-05-23 08:59:55: Checking ACFS command options
INFO: 2017-05-23 08:59:55: Checking if '/u02/app/oracle/oradata/RC12CBMC' is an ACFS file system
INFO: 2017-05-23 08:59:55: Checking if '/u03/app/oracle/redo' is an ACFS file system
INFO: 2017-05-23 08:59:55: Checking if '/u03/app/oracle/fast_recovery_area' is an ACFS file system
SUCCESS: 2017-05-23 08:59:55: Environment validation complete
MacroStep2 - Setting up clone environment...
INFO: 2017-05-23 08:59:55: Creating local pfile
INFO: 2017-05-23 08:59:58: Creating local password file
INFO: 2017-05-23 08:59:58: Creating local Audit folder
INFO: 2017-05-23 08:59:58: Creating local auxiliary listener
INFO: 2017-05-23 08:59:58: Starting auxiliary listener
INFO: 2017-05-23 08:59:58: Sleeping 60 secs, please wait
INFO: 2017-05-23 09:00:58: Setting up ACFS storage
INFO: 2017-05-23 09:00:58: Creating dynamic scripts
INFO: 2017-05-23 09:00:59: Cloning to target ACFS from host '140.85.10.81'
INFO: 2017-05-23 09:00:59: Creating RMAN script for spfile target to ACFS
INFO: 2017-05-23 09:00:59: Instantiating clone database
SUCCESS: 2017-05-23 09:00:59: Environment setup complete
MacroStep3 - Cloning database 'RC12COPC.gboracle88888.oraclecloud.internal'...
INFO: 2017-05-23 09:00:59: please wait (this can take a while depending on database size and/or network speed)
INFO: 2017-05-23 09:12:18: Moving spfile
INFO: 2017-05-23 09:12:51: Updating local dbs pfile/spfile
INFO: 2017-05-23 09:12:51: Register 'RC12CBMC' database as cluster resource
INFO: 2017-05-23 09:12:55: Checking database name
INFO: 2017-05-23 09:12:55: Modifying DB instance
INFO: 2017-05-23 09:12:56: Setup ACFS dependency
INFO: 2017-05-23 09:12:58: Database 'RC12CBMC' dependency to '/u02/app/oracle/oradata/RC12CBMC,/u03/app/oracle/redo,/u03/app/oracle/fast_recovery_area' done successfully
INFO: 2017-05-23 09:12:58: Starting database 'RC12CBMC'
SUCCESS: 2017-05-23 09:13:10: Successfully created clone database 'RC12CBMC'
INFO: 2017-05-23 09:13:10: Cleaning up the setup

Check the database creation:

gDBClone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent

You could also register the new clone database to the dcs-agent so that it can be managed by the dcs-agent stack. To do so, the "COMPATIBLE" parameter must be in the form of 4 numbers (x.y.z.w), for example "12.1.0.2" (12.1.0.2.0 is not valid), and the database password must have at least two

1. Set the COMPATIBLE parameter:

   SQL> alter system set compatible='12.1.0.2' scope=spfile;

2. Set the SYS password to support "dbcli registration":

   SQL> alter user sys identified by "WElcome__12";

3. Stop the database:

   $ srvctl stop database -d RC12CBMC

4. De-register the database:

   $ srvctl remove database -d RC12CBMC

5. Start the database using SQL*Plus:

   $ export ORACLE_SID=RC12CBMC
   $ sqlplus / as sysdba
   SQL> startup

6. Run the 'dbcli register-database' command:

   [root@BMC ~]# dbcli register-database \
   --dbclass OLTP \
   --dbshape odb2 \
   --servicename RC12CBMC.gboracle60892.oraclecloud.internal \
   -p
   Password for SYS:

Check the database creation:

# dbcli list-databases
ID  DB Name  DB Type  DB Version  CDB  Class  Shape  Storage  Status  DbHomeID

16. Test & Dev Management environment example

Test System Configuration
It is assumed that the user is running a 12.1 production database 'SALES' on the ProdRAC1/2 cluster and wants to create a clone of that database on a test & dev cluster for testing and certification purposes. The user therefore needs two snapshots of the database. The following are the steps necessary to provision two snapshot databases for test and dev purposes:

1) List the current cloned databases on the test cluster:

gdbclone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
RACM          RAC           PRIMARY       Master          /cloudfs

2) Create a clone of the RAC database 'SALES' on the test cluster and name it 'SALESM'. Create the clone on the /acfs file system:

[root@TestCluster1 ~]# gDBClone.bin clone -sdbname SALES -sdbhost ProdRAC1 -tdbname SALESM -dataacfs /acfs -racmod 2
3) List the current cloned databases on the test cluster:

gdbclone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
RACM          RAC           PRIMARY       Master          /cloudfs
SALESM        RAC           PRIMARY       Master          /acfs

4) Create a read-write snapshot of the SALESM clone database called SALEST1 and configure it as a single instance database. Also, create a read-write snapshot of the SALESM clone database called SALEST2 and configure it as a RAC database:
[root@TestCluster1 ~]# gDBClone.bin snap -sdbname SALESM -tdbname SALEST1
[root@TestCluster1 ~]# gDBClone.bin snap -sdbname SALESM -tdbname SALEST2 -racmod 2

5) List the current cloned databases on the test cluster:

gdbclone.bin listdbs
Database Name Database Type Database Role Master/Snapshot Location/Parent
------------- ------------- ------------- --------------- ---------------
RACM          RAC           PRIMARY       Master          /cloudfs
SALESM        RAC           PRIMARY       Master          /acfs
SALEST1       SINGLE        PRIMARY       Snapshot        SALESM
SALEST2       RAC           PRIMARY       Snapshot        SALESM

Conclusion

Managing test and dev environments does not have to be complex. The Oracle Cloud File System, coupled with the gDBClone script, provides powerful, flexible and simple tools that ease management of test and dev servers and reduce management complexity. You can finally contain the sprawling cost of storage by using the ACFS point-in-time snapshot technology, and therefore realize significant storage savings. Many sparse snapshot clones can be created for parallel test and development purposes while requiring only a fraction of the storage. The Oracle Cloud File System is bundled with Oracle Grid Infrastructure and installs automatically on every cluster. Refreshing and recycling test databases has never been easier. The gDBClone script allows you to create clones and snapshots, and to list and delete them, in one simple command. This is the type of agility businesses need to adapt to changing requirements in their IT organization.

Appendix A - Clone Location

Using the following options:
gDBClone clone -sdbname O12C -sdbscan slcac458-scan -tdbname CO12C -tdbhome OraDb12102_home1
  -dataacfs /u02/app/oracle/oradata/datastore -redoacfs /u01/app/oracle/oradata/datastore -recoacfs /u01/app/oracle/fast_recovery_area/datastore

gDBClone will automatically create:

# tree /u02/app/oracle/oradata/datastore/.ACFS/snaps/CO12C
/u02/app/oracle/oradata/datastore/.ACFS/snaps/CO12C
└── CO12C
    ├── datafile
    │   ├── o1_mf_sysaux_d0wd0jj3_.dbf
    │   ├── o1_mf_system_d0wd0j6g_.dbf
    │   ├── o1_mf_temp_d0wd2qt1_.tmp
    │   ├── o1_mf_undotbs1_d0wd0jmb_.dbf
    │   ├── o1_mf_undotbs2_d0wd0qt2_.dbf
    │   └── o1_mf_users_d0wd0r1r_.dbf
    └── spfileCO12C.ora

# tree /u01/app/oracle/oradata/datastore/CO12C/
/u01/app/oracle/oradata/datastore/CO12C/
├── CO12C
│   └── onlinelog
│       ├── o1_mf_1_d0wd2h1n_.log
│       ├── o1_mf_2_d0wd2m7t_.log
│       ├── o1_mf_3_d0wd26qo_.log
│       └── o1_mf_4_d0wd2boz_.log
└── control01.ctl

# tree /u01/app/oracle/fast_recovery_area/datastore/CO12C/
/u01/app/oracle/fast_recovery_area/datastore/CO12C/
└── CO12C
    ├── archivelog
    │   └── 2016_10_24
    │       ├── o1_mf_1_4_d0wd13wv_.arc
    │       ├── o1_mf_1_5_d0wd148z_.arc
    │       ├── o1_mf_1_6_d0wd14jr_.arc
    │       ├── o1_mf_2_6_d0wd14sl_.arc
    │       └── o1_mf_2_7_d0wd153n_.arc
    └── backupset
        └── 2016_10_24
            └── o1_mf_nnsnf_TAG20161024T090133_d0wd2xow_.bkp

Snap Location

When gDBClone snap is in use, the database location is determined based on the following assumptions:

Database datafiles will be stored following the "db_create_file_dest" source database location. For example, having:

db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/<sourceDBname>'
the snapshot database will store its datafiles under

/u02/app/oracle/oradata/datastore/.ACFS/snaps/<snapDBname>/<uppercasesnapDBname>/datafile

Database redo logs will follow "db_create_online_log_dest_1". Having:

db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/<sourceDBname>'

the snapshot database will store its redo logs under

/u01/app/oracle/oradata/datastore/<snapDBname>

The database recovery area will follow "db_recovery_file_dest". Having:

db_recovery_file_dest='/u01/app/oracle/fast_recovery_area/datastore/<sourceDBname>'

the snapshot database will store its recovery files under

/u01/app/oracle/fast_recovery_area/datastore/<snapDBname>
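The three path rules above amount to a simple substitution of the source database name with the snapshot database name (with an uppercase directory inserted for datafiles). As a minimal sketch, assuming the base ACFS paths are passed in explicitly (these helper names are hypothetical, not gDBClone internals):

```shell
#!/bin/sh
# Hypothetical helpers illustrating the snap location rules above.

snap_datafile_dest() {   # args: <dataacfs base> <snapDBname>
  # Datafiles land under .ACFS/snaps/<snapDBname>/<UPPERCASE snapDBname>/datafile
  db_upper=$(printf '%s' "$2" | tr '[:lower:]' '[:upper:]')
  printf '%s/.ACFS/snaps/%s/%s/datafile\n' "$1" "$2" "$db_upper"
}

snap_redo_dest() {       # args: <redoacfs base> <snapDBname>
  # Redo logs follow db_create_online_log_dest_1 with the snap DB name
  printf '%s/%s\n' "$1" "$2"
}

snap_reco_dest() {       # args: <recoacfs base> <snapDBname>
  # Recovery files follow db_recovery_file_dest with the snap DB name
  printf '%s/%s\n' "$1" "$2"
}

snap_datafile_dest /u02/app/oracle/oradata/datastore snap1
# -> /u02/app/oracle/oradata/datastore/.ACFS/snaps/snap1/SNAP1/datafile
```

This is only a readability aid for the mapping; gDBClone derives the targets itself from the source database's init parameters.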
Clone resume

In case of an RMAN error/failure during the clone process, gDBClone will not clean up the environment, leaving in place everything done up to the failure. Restarting the same gDBClone clone command will print errors, as the previous database structure is (or could be) present. In such a case, use the "-resume" command option (it will ask for confirmation): gDBClone will skip the errors and restart the "rman duplicate".

Standby option

The standby option (usable when doing clone/snap) is as follows:

-standby [-pmode maxperf|maxavail|maxprot] [-activedg] [-rtapply] [-dgbroker [-dgbpath1 <dgb config path>] [-dgbpath2 <dgb config path>]]
Where:

-pmode    If "-pmode" is used and is not maxperf, LOG_ARCHIVE_DEST_2 --> AFFIRM/ASYNC
-activedg Using "-activedg", the clone/snap database will be registered as "-r physical_standby", "-s READ ONLY"
-rtapply  If "-rtapply" is in use: "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT USING CURRENT LOGFILE"
          If "-rtapply" is not in use: "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION"
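As a quick reference, the option-to-behavior mapping described above can be sketched as two small shell functions. The function names are made up for illustration; the statement strings are taken verbatim from the description above, and the maxperf attribute string is an assumption (the note only states what happens when -pmode is not maxperf):

```shell
#!/bin/sh
# Hypothetical sketch of the "-pmode"/"-rtapply" behavior described above
# (not gDBClone code).

dest2_attrs() {          # arg: maxperf|maxavail|maxprot
  case "$1" in
    maxperf)          echo "NOAFFIRM/ASYNC" ;;  # assumption: Data Guard default
    maxavail|maxprot) echo "AFFIRM/ASYNC" ;;    # per the note's description
    *) echo "unknown pmode: $1" >&2; return 1 ;;
  esac
}

recover_stmt() {         # arg: "rtapply" or empty
  if [ "$1" = "rtapply" ]; then
    echo "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT USING CURRENT LOGFILE"
  else
    echo "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION"
  fi
}
```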
DG Broker Setup

During the "clone/snap" as standby, it is optionally possible to configure Oracle Data Guard Broker. In the case of a database "clone", if the DG Broker is not configured yet, you must provide the DG Broker configuration file path (-dgbpath1/-dgbpath2). If you don't provide '-dgbpath2', the same '-dgbpath1' will be used. In the case of a database "snap":

dgbpath1 will be set to "<dataacfs>/.ACFS/snaps/<db_name>/dgbc1<db_name>.dat"
dgbpath2 will be set to "<recoacfs>/<db_name>/dgbc2<db_name>.dat"

gDBClone "-pfile" option
As discussed earlier in this technical paper, if you need to decrease/increase the SGA footprint, you can leverage the "-pfile" option. Here is an example of its usage.

Example: gDBClone_pfile12c.conf

aq_tm_processes = 0
archive_lag_target = 900
bitmap_merge_area_size = 0
compatible = 12.1.0.2
create_bitmap_area_size = 0
db_block_checking = 'FULL'
db_block_checksum = 'FULL'
db_file_multiblock_read_count = 16
db_files = 500
db_lost_write_protect = 'FULL'
fast_start_parallel_rollback = 'HIGH'
hash_area_size = 0
job_queue_processes = 10
log_archive_format = '%t_%s_%r.arc'
log_archive_max_processes = 10
log_archive_trace = 0
open_cursors = 1000
parallel_execution_message_size = 8192
parallel_max_servers = 80
pga_aggregate_target = 1G
processes = 150
recovery_parallelism = 8
remote_login_passwordfile = 'EXCLUSIVE'
sec_case_sensitive_logon = 'FALSE'
session_cached_cursors = 100
sessions = 250
sga_max_size = 1G
sga_target = 1G
shared_pool_reserved_size = 22649241
shared_pool_size = 704643072
shared_servers = 0
sort_area_retained_size = 0
sort_area_size = 65536
undo_management = 'AUTO'
undo_retention = 43200

Getting a database clone:

gDBClone.bin clone -sdbname ORCL \
              -sdbscan exadata316-scan \
              -tdbname CLONE \
              -tdbhome OraDb12102_home1 \
              -dataacfs /u02/app/oracle/oradata/datastore \
              -pfile /opt/gDBClone/gDBClone_pfile12c.conf

Getting a database snapshot:

gDBClone.bin snap -sdbname STDBY -tdbname SNAP \
             -pfile /opt/gDBClone/gDBClone_pfile12c.conf

Debug Information

If for any reason gDBClone fails, you can find more detailed information in the log created under "/opt/gDBClone/out/log/gDBClone_<pid>.log".
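Since each run writes a log named after its process id, a small helper can pick out the most recent one. This is a minimal sketch: the directory path comes from the note, while the function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: print the newest gDBClone log file in a directory.
latest_gdbclone_log() {    # arg: log directory
  ls -1t "$1"/gDBClone_*.log 2>/dev/null | head -n 1
}

# Typical usage against the default location from the note:
# tail -n 50 "$(latest_gdbclone_log /opt/gDBClone/out/log)"
```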
If, when cloning a source database, you need an RMAN debug log, you can use the "-rmandbg" command option.

References

NOTE:2363679.1 - Bck2Cloud - "1-Button" Cloud Backup/Restore Automation Utility
NOTE:949322.1 - Oracle11g Data Guard: Database Rolling Upgrade Shell Script
NOTE:884522.1 - How to Download and Run Oracle's Database Pre-Upgrade Utility
NOTE:2485457.1 - AutoUpgrade Tool
https://www.oracle.com/database/technologies/oracle-cloud-backup-downloads.html
http://www.oracle.com/technetwork/indexes/samplecode/gdbclone-download-2295388.html