INDEX
1. ABOUT GOLDENGATE
2. ARCHITECTURE
2.1. BI-DIRECTIONAL SETUP
2.2. UNI-DIRECTIONAL SETUP
2.3. SOURCE SIDE
2.3.1. SOURCE DATABASE
2.3.2. GOLDENGATE CAPTURE
2.3.3. GOLDENGATE TRAIL FILES
2.3.4. GOLDENGATE PUMP: “CAPTURE ONCE, APPLY MANY”
2.3.5. ROUTING
2.3.6. GOLDENGATE CAN MOVE DATA BI-DIRECTIONALLY
2.4. TARGET SIDE
2.4.1. GOLDENGATE PUMP: “CAPTURE ONCE, APPLY MANY”
2.4.2. GOLDENGATE TRAIL FILES
2.4.3. GOLDENGATE DELIVERY OR REPLICATE
3. GOLDENGATE INSTALLATION
3.1. SOURCE ORACLE DATABASE PRE-REQUISITE
3.2. SOURCE GOLDENGATE PRE-REQUISITE
3.3. TARGET ORACLE DATABASE PRE-REQUISITE
3.4. TARGET GOLDENGATE PRE-REQUISITE
3.5. INSTALLATION
3.6. SOURCE AND TARGET DATABASE COMMUNICATION
4. GOLDENGATE SOURCE AND TARGET IMPLEMENTATION
4.1. CONFIGURATION
4.2. SOURCE GOLDENGATE IMPLEMENTATION
4.2.1. MANAGER PROCESS
4.2.2. TRANSACTION LOGGING
4.2.3. EXTRACT PROCESS
4.2.4. PUMP PROCESS
4.2.5. INITIAL COMPLETE LOAD PROCESS 1ST TIME ONLY
4.3. TARGET GOLDENGATE IMPLEMENTATION
4.3.1. MANAGER PROCESS
4.3.2. REGULAR REPLICATE PROCESS
4.3.3. INITIAL REPLICATE LOAD PROCESS 1ST TIME ONLY
5. TROUBLESHOOTING
1. About GOLDENGATE
GOLDENGATE transaction data management (TDM) consists of decoupled components that can be configured to enable a variety of solutions for high availability/disaster tolerance and real-time data integration. TDM's architecture allows each component to perform its tasks independently while enabling real-time data replication between heterogeneous source and target systems. Its core components are: GOLDENGATE Capture, GOLDENGATE Trail Files, and GOLDENGATE Delivery.
2. ARCHITECTURE
2.1 Bi-Directional Setup
2.2 Uni-Directional Setup
2.3 Source Side
2.3.1 SOURCE DATABASE – GOLDENGATE reads the source database log to capture real-time transaction data.
2.3.2 GOLDENGATE CAPTURE:
The Capture module reads the transaction logs (redo or archive logs) of the source database. It captures committed transactions – insert, update, and delete operations – and maintains transaction integrity.
It immediately writes the transactions to an external Trail file for distribution to targets.
It captures efficiently and with low impact – it does not create additional work in the database or require triggers.
It offers selective capture and delivery: advanced filtering at the database/schema, table, row, or column level, based on operation type, matched patterns, and value thresholds.
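For example, row-level filtering can be configured with a WHERE clause on the TABLE parameter in the Extract parameter file. The sketch below is illustrative only: the STATUS column is a hypothetical column, not one from the tables used later in this document.
========================================
-- illustrative only: capture rows of arbor.PARAM_DEF whose (hypothetical)
-- STATUS column equals 'ACTIVE', and skip all other rows
table arbor.PARAM_DEF, WHERE (STATUS = 'ACTIVE');
========================================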
2.3.3 GOLDENGATE TRAIL FILES:
Trail files contain the changed data and operations in a transportable, platform-independent, universal data format.
Exist outside of the databases to minimize overhead and enable improved reliability and heterogeneity.
Help to checkpoint data, to ensure data accuracy and completeness between source and target systems.
Improve architectural durability and recovery in the event of an outage.
2.3.4 GOLDENGATE PUMP: “CAPTURE ONCE, APPLY MANY”
The GOLDENGATE Pump process reads the trail file and distributes the data to the designated target database.
Multiple pumps may be configured to send the data to multiple targets, even different database types (e.g. capture from Oracle, apply to Teradata and SQL Server).
2.3.5 ROUTING:
GOLDENGATE sends data over TCP/IP – this means no distance constraints between source and target systems!
Encryption may be applied.
Additional compression may be applied as desired.
Can send thousands of transaction operations per second (TPS) – an extremely efficient, low-bandwidth implementation.
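As a sketch, compression is enabled with the COMPRESS option of the RMTHOST parameter in the Pump parameter file. The host, port, and trail path below are taken from the example in section 4; treat the exact option placement as an assumption and check it against your GOLDENGATE version.
========================================
-- send trail data compressed over TCP/IP to the target manager
rmthost 10.105.98.200, mgrport 7809, compress
rmttrail /orau01_ccms/gg_trail/ct
========================================
Trail encryption over the wire is configured separately (ENCRYPT options with a key file), which this sketch omits.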
2.3.6 GOLDENGATE CAN MOVE DATA BI-DIRECTIONALLY:
Synchronizes data between source and target systems in real time. Enables “Active-Active” and “Active-Passive” configurations. Enables zero-downtime upgrade and migration solutions.
Provides features for data conflict detection and resolution.
2.4 Target Side
2.4.1 GOLDENGATE PUMP: “CAPTURE ONCE, APPLY MANY”
The GOLDENGATE Pump reads the trail file and distributes the data to the designated target database. Multiple pumps may be configured to send the data to multiple targets, even different database types (e.g. capture from Oracle, apply to Teradata and SQL Server).
2.4.2 GOLDENGATE TRAIL FILES:
Trail files contain the changed data and operations in a transportable, platform-independent, universal data format.
Exist outside of the databases to minimize overhead and enable improved reliability and heterogeneity.
Help to checkpoint data, to ensure data accuracy and completeness between source and target systems.
Improve architectural durability and recovery in the event of an outage.
2.4.3 GOLDENGATE DELIVERY or REPLICATE:
GOLDENGATE reads changed data transactions from the Trail File.
Continuously applies changed data to the target database using the native SQL for that RDBMS. Ensures transaction integrity by applying transactions in the same order in which they were committed on the source system.
Transaction commit boundaries are preserved throughout each step of the data movement. Delivery is also supported to non-RDBMS targets via flat-file integration.
Supports high-volume environments while keeping source and target “in sync.”
All of this happens with only sub-second latency!
3. GOLDENGATE INSTALLATION
3.1 SOURCE ORACLE DATABASE PRE-REQUISITE:
SOURCE DATABASE owned by the oracle OS user (groups oinstall, dba)
GOLDENGATE database schema
GRANT CONNECT, RESOURCE, SELECT ANY DICTIONARY, SELECT ANY TABLE, FLASHBACK ANY TABLE, ALTER ANY TABLE TO GOLDENGATE;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;
SYS password of the ASM instance, if ASM is configured.
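The schema and logging prerequisites above can be prepared roughly as follows from SQL*Plus as SYSDBA. The tablespace name is an assumption; adjust it to your environment.
========================================
-- create the GOLDENGATE schema (tablespace USERS is an assumption)
CREATE USER goldengate IDENTIFIED BY goldengate DEFAULT TABLESPACE users;
GRANT CONNECT, RESOURCE, SELECT ANY DICTIONARY, SELECT ANY TABLE,
      FLASHBACK ANY TABLE, ALTER ANY TABLE TO goldengate;
-- enable minimal supplemental logging and switch the logfile
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER SYSTEM SWITCH LOGFILE;
-- verify supplemental logging is on
SELECT supplemental_log_data_min FROM v$database;
========================================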
3.2 SOURCE GOLDENGATE PRE-REQUISITE:
GOLDENGATE S/W
GOLDENGATE OS user in the oinstall group (to share the ORACLE binaries and redo logs)
Read permission on the Oracle redo logs and archived logs
.profile should be configured (ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH)
200 MB of space for the GOLDENGATE S/W (the home directory is enough)
Space to store Trail files on source and target (approx. 35% of daily archived log generation)
Read and write permission on the trail files
3.3 TARGET ORACLE DATABASE PRE-REQUISITE:
TARGET DATABASE owned by the oracle OS user (groups oinstall, dba)
GOLDENGATE database schema
GRANT CONNECT, RESOURCE, SELECT ANY DICTIONARY, INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, SELECT ANY TABLE, FLASHBACK ANY TABLE, ALTER ANY TABLE TO GOLDENGATE;
TNS entry to access the source database
3.4 TARGET GOLDENGATE PRE-REQUISITE:
GOLDENGATE S/W
GOLDENGATE OS user in the oinstall group
Read permission on the Oracle redo logs
.profile should be configured (ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH)
200 MB of space for the GOLDENGATE S/W
Space to store Trail files on the target (approx. 35% of daily archived log generation)
Read and write permission on the trail files
TNS entry to access the source database.
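The TNS entry on the target can look like the following tnsnames.ora sketch. The host and SID come from the example in section 4; the listener port 1521 is an assumption.
========================================
ATCAT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.98.54)(PORT = 1521))
    (CONNECT_DATA = (SID = ATCAT))
  )
========================================
Verify connectivity with: tnsping ATCAT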
3.5 INSTALLATION:
1- Put the GOLDENGATE compressed binaries in the directory where you want to install, then extract them.
2- Ensure the LD_LIBRARY_PATH, ORACLE_HOME, and ORACLE_SID environment variables are set so the GOLDENGATE software can run.
3- Ensure the GOLDENGATE OS user is in the oinstall group (sharing the Oracle binaries); use tnsping to check database connectivity.
4- Use the ./ggsci command (GOLDENGATE Software Command Interface) to log in to GOLDENGATE, then run the CREATE SUBDIRS command to create the GOLDENGATE subdirectories (dirprm, dirrpt, etc.) for GOLDENGATE parameters and reports.
5- Perform this activity on both the source and target servers.
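A minimal session for steps 1 and 4 might look like this; the install directory and archive file name are illustrative.
========================================
$ cd /u01/goldengate           # chosen install directory (illustrative)
$ tar -xf ggs_Linux_x64.tar    # extract the GOLDENGATE binaries
$ ./ggsci                      # start the command-line interface
ggsci> CREATE SUBDIRS          # creates dirprm, dirrpt, dirdat, etc.
========================================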
3.6 SOURCE AND TARGET DATABASE COMMUNICATION:
The source and target GOLDENGATE managers communicate over a TCP/IP port (e.g. 7809 or 8809), which must be opened in the firewall.
4. GOLDENGATE SOURCE AND TARGET IMPLEMENTATION
This example replicates 3 tables from the source ATCAT database server (10.105.98.54) to the target CCMS database server (10.105.98.200).
4.1 CONFIGURATION:
                 Source GOLDENGATE          Target GOLDENGATE
Mgr Port         7809                       7809
Host IP          10.105.98.54               10.105.98.200
Trail Location   /oradatacat/gg_trail/ct    /orau01_ccms/gg_trail/ct
Temp Trail       -                          /db_bkp/gg_trail/CT
4.2 SOURCE GOLDENGATE IMPLEMENTATION:
4.2.1 MANAGER PROCESS:
1. Log in to the GOLDENGATE S/W at the ggsci> prompt.
2. EDIT PARAMS MGR (opens a vi editor).
3. Write “port 7809”. The Manager is the parent process and uses this port for communication between source and target.
4. Save and exit the editor.
5. START MGR to start the Manager process.
6. INFO ALL to check the status of all processes.
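A minimal Manager parameter file (dirprm/mgr.prm) consistent with the steps above is shown below. PURGEOLDEXTRACTS is an optional housekeeping parameter added here as an assumption, not part of the original setup.
========================================
PORT 7809
-- optional (assumption): purge trail files already processed,
-- keeping at least 3 days of data
PURGEOLDEXTRACTS /oradatacat/gg_trail/ct*, USECHECKPOINTS, MINKEEPDAYS 3
========================================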
4.2.2 TRANSACTION LOGGING:
1. Transaction logging is required so that GOLDENGATE can read the changes from the redo logs and write them to trail files.
2. ADD TRANDATA enables transaction logging for a table.
3. To add transaction logging:
4. ggsci>ADD TRANDATA arbor.ENUMERATION_DEF
5. ggsci>ADD TRANDATA arbor.GENERIC_ENUMERATION_VALUES
6. ggsci>ADD TRANDATA arbor.PARAM_DEF
7. To Check
8. ggsci>INFO TRANDATA arbor.PARAM_DEF
4.2.3 EXTRACT PROCESS:
1. Create the Extract process; whatever changes occur on the logged tables are transferred to external trail files. The trail file is then transferred to the target database server by the Pump and applied by the Replicat to keep the target database in sync. A trail file name starts with two letters, and GOLDENGATE automatically suffixes a unique 6-digit number to keep trail file names unique.
2. ggsci>ADD EXTRACT ECAT, TRANLOG, BEGIN NOW
3. ggsci>ADD EXTTRAIL /oradatacat/gg_trail/ct, EXTRACT ECAT
4. ggsci>edit params ECAT
Opens vi editor, Save the below parameters in file.
=======================================
EXTRACT ECAT
userid goldengate@ATCAT, password goldengate
exttrail /oradatacat/gg_trail/ct
reportcount every 30 minutes, rate report at 23:59
THREADOPTIONS MAXCOMMITPROPAGATIONDELAY 20000 IOLATENCY 15000
gettruncates
table arbor.ENUMERATION_DEF;
table arbor.GENERIC_ENUMERATION_VALUES;
table arbor.PARAM_DEF;
========================================
5. Check using INFO ALL, then run the START ECAT command.
4.2.4 PUMP PROCESS:
1. Pump process is used to transfer the trail generated by Extract process to target database server.
2. ggsci>ADD EXTRACT PCAT, EXTtrailsource /oradatacat/gg_trail/ct
3. ggsci>ADD RMTTRAIL /orau01_ccms/gg_trail/ct, EXTRACT PCAT
4. ggsci>EDIT PARAMS PCAT
Opens in vi editor, Save the below parameters.
========================================
extract PCAT
passthru
rmthost 10.105.98.200, mgrport 7809
rmttrail /orau01_ccms/gg_trail/ct
table *;
========================================
5. Check using INFO ALL, then run the START PCAT command.
4.2.5 INITIAL COMPLETE LOAD PROCESS 1st TIME ONLY:
1. EXPDP (a Data Pump export can be used for the initial load),
2. or
3. use a GOLDENGATE initial-load Extract: this process extracts the whole data of the tables and transfers it directly to the target database server. It is required the first time only; it runs as a task and finishes after the complete tables have been transferred to the remote target.
4. ggsci>ADD EXTRACT EVIN, SOURCEISTABLE
5. ggsci>EDIT PARAMS EVIN
Opens in vi editor, Save the below parameters.
========================================
extract evin
userid goldengate@ATCAT, password goldengate
rmthost 10.105.98.200, mgrport 7809
rmtfile /db_bkp/gg_trail/CT, maxfiles 1000, megabytes 1000
table arbor.ENUMERATION_DEF;
table arbor.GENERIC_ENUMERATION_VALUES;
table arbor.PARAM_DEF;
========================================
6. Check using INFO ALL, then run the START EVIN command.
4.3 TARGET GoldenGate IMPLEMENTATION:
4.3.1 MANAGER PROCESS:
1. Log in to the GOLDENGATE S/W at the ggsci> prompt.
2. EDIT PARAMS MGR (opens a vi editor).
3. Write “port 7809”. The Manager is the parent process and uses this port for communication between source and target.
4. Save and exit the editor.
5. START MGR to start the Manager process.
6. INFO ALL to check the status of all processes.
4.3.2 REGULAR REPLICATE PROCESS:
1. This process replicates the Trail files received from the Pump process on the source side; it applies the change captures generated at the source.
2. Initially it must be STOPPED; start it after the Initial Replicat finishes its replication.
3. ggsci>ADD REPLICAT RCAT, exttrail /orau01_ccms/gg_trail/ct, nodbcheckpoint
4. ggsci>EDIT PARAMS RCAT
Opens in vi editor, Save the below parameters.
========================================
REPLICAT RCAT
USERID goldengate@ccmsdb, PASSWORD goldengate
ASSUMETARGETDEFS
REPORTCOUNT EVERY 30 MINUTES, RATE REPORT AT 23:59
APPLYNOOPUPDATES
HANDLECOLLISIONS
DISCARDFILE ./dirrpt/pcat.dsc, append, megabytes 50
map arbor.ENUMERATION_DEF, target ccms.ENUMERATION_DEF;
map arbor.GENERIC_ENUMERATION_VALUES, target ccms.GENERIC_ENUMERATION_VALUES;
map arbor.PARAM_DEF, target ccms.PARAM_DEF;
========================================
5. DO NOT START this process; start it after the initial replication.
6. Check using INFO ALL command.
4.3.3 INITIAL REPLICATE LOAD PROCESS 1st TIME ONLY:
1. IMPDP (a Data Pump import can be used for the initial load),
2. or
3. use a GOLDENGATE initial-load Replicat: this process replicates the Trail files received from the initial Extract process on the source side. It applies the complete data of the tables generated at the source.
4. The regular Replicat (RCAT) must remain STOPPED; start it after this initial Replicat finishes its replication.
5. ggsci>ADD REPLICAT RVIN, exttrail /db_bkp/gg_trail/CT, nodbcheckpoint
6. ggsci>EDIT PARAMS RVIN
Opens in vi editor, Save the below parameters.
========================================
REPLICAT RVIN
USERID goldengate@ccmsdb, PASSWORD goldengate
ASSUMETARGETDEFS
REPORTCOUNT EVERY 30 MINUTES, RATE REPORT AT 23:59
APPLYNOOPUPDATES
HANDLECOLLISIONS
DISCARDFILE ./dirrpt/pvin.dsc, append, megabytes 50
map arbor.ENUMERATION_DEF, target ccms.ENUMERATION_DEF;
map arbor.GENERIC_ENUMERATION_VALUES, target ccms.GENERIC_ENUMERATION_VALUES;
map arbor.PARAM_DEF, target ccms.PARAM_DEF;
========================================
7. Check using INFO ALL, then START RVIN to start this process.
8. WAIT until the lag becomes zero, then STOP RVIN and start the regular Replicat with START RCAT, keeping the HANDLECOLLISIONS parameter in place so the complete load and the change capture are merged. WAIT until the lag becomes zero again, then stop RCAT, comment out the HANDLECOLLISIONS parameter, and start it again. This step is VERY IMPORTANT.
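The cutover described in step 8 can be sketched as the following GGSCI sequence; the lag-checking output varies by version and is omitted here.
========================================
ggsci> LAG REPLICAT RVIN     -- wait until lag reaches zero
ggsci> STOP RVIN
ggsci> START RCAT            -- RCAT params still contain HANDLECOLLISIONS
ggsci> LAG REPLICAT RCAT     -- wait until lag reaches zero again
ggsci> STOP RCAT
ggsci> EDIT PARAMS RCAT      -- comment out HANDLECOLLISIONS
ggsci> START RCAT
========================================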
5. GOLDENGATE TROUBLESHOOTING
1. See process parameters -> VIEW PARAMS ECAT (ECAT is the extract process at source)
2. Edit process parameters-> EDIT PARAMS ECAT
3. See report of the process-> VIEW REPORT ECAT
4. See all process status-> INFO ALL
5. See all task processes-> INFO ALL, TASKS
6. See current process status-> INFO ECAT
7. See current process stats-> STATS ECAT
8. Start manager -> START MGR
9. Stop manager-> STOP MGR
10. Start process-> START ECAT
11. Stop process-> STOP ECAT
12. When any Replicat or Extract process gets ABENDED, see its report.
13. When an ABENDED process has discarded records, check the discard file at the location specified in the parameter file by DISCARDFILE. Open the discard file to find the discarded record and the reason it was discarded; errors and warnings are written to the discard and report files.
14. New troubleshooting steps will be published and updated in this document.
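In addition to the commands above, GOLDENGATE writes an event log, ggserr.log, in the GOLDENGATE home directory; tailing it is often the quickest way to see why a process ABENDED.
========================================
$ tail -100 ggserr.log    # recent GOLDENGATE events and errors
$ ls dirrpt/              # per-process report (.rpt) and discard (.dsc) files
========================================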