
Introducing FETCHER in a running replication process


This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
That means the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this was that we needed to start converting redo-log information to PLOG information while we were still setting up the target environment. The setup (exporting the source, copying the dump to the target and importing it) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU cycles on the production server to mine more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds of wall-clock time passed.

This meant that the replication was lagging behind and would never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I mention we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, all in all, we had 57:05 hrs. in which to try to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as specified in the setup wizard when setting up Replicate.

TIP: When setting up on-line replication, it is worth the effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> on both the source and the target server!
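As an illustration, such entries could look like this (the hostnames, port and service name below are placeholders for your own values):

repl-source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )

repl-target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )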

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

In this setup, MINE was actually running, and it was from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep and go through the setup wizard again, this time creating a Replicate setup directory with FETCHER, and we isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc file.

By comparing the two files, I was able to note the differences and thus determine the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure that all the paths on all three servers are correct, that the port numbers are correct and that all the individual steps are taken in the right order. This is the overview.
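For example, a plain diff of the two generated files is enough to isolate those differences (paths as used above, relative to the directory both setups were created in):

diff ./cst-migration/config/cst-migration-onetime.ddc \
     ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc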

With these changes in hand, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC information stored in the Replicate repository. You can enter these manually or execute your change-file by running @<change-file-name> inside the console.
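For instance, if you saved the changes from step two in a file called fetcher-changes.ddc (a name chosen here purely for illustration), you would run the following inside the console:

dbvrep> @fetcher-changes.ddc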

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh files into this directory.
Rename cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.
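A minimal sketch of this step from the new MINE node's command line (source-node is a placeholder, and the remote path depends on where your setup directory lives):

mkdir -p ./cst-migration
scp source-node:~/cst-migration/cst-migration-MINE.ddc ./cst-migration/
# copying under the new name takes care of the rename in one go
scp source-node:~/cst-migration/cst-migration-run-source-node.sh ./cst-migration/cst-migration-run-mine-node.sh
# verify that both ends of the replication can be reached from this node
tnsping repl-source
tnsping repl-target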

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.
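On the source node, that boils down to something like this (the exact line to change in the run script differs per setup, so the blanket substitution below is only an illustration of the idea):

cd ./cst-migration
mv cst-migration-MINE.ddc cst-migration-FETCHER.ddc
# make the run script start FETCHER (and reference the renamed DDC file)
sed -i 's/MINE/FETCHER/g' cst-migration-run-source-node.sh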

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have since verified and accepted this solution as a supported action.

Hope this helps.


My picks, no, Agenda… for UKOUG_Tech15

I went over the agenda for UKOUG_Tech15 and took my picks & suggestions.
Then I thought, why not share these…

MONDAY

The Oracle Database In-Memory Option: Challenges & Possibilities
Christian Antognini – Trivadis AG

Standard Edition Something for the Enterprise or the Cloud?
Ann Sjökvist – SE – JUST LOVE IT

All about Table Locks: DML, DDL, Foreign Key, Online Operations,…
Franck Pachot – DBi Services

Silent but Deadly : SE Deserves Your Attention
Philippe Fierens – FCP
Co-presenter(s): Jan Karremans – JK-Consult (Having a link here would be silly, right)

Oracle SE – RAC, HA and Standby are Still Available. Even Cloud!
Chris Lawless – Dbvisit

SE DBA’s Life a Bed of Roses?
Ann Sjökvist – SE – JUST LOVE IT

Oracle Standard Edition Round Table
Joel Goodman – Oracle
Co-presenter(s): Ann Sjokvist, Philippe Fierens, Jan Karremans

TUESDAY

Watch out for #RepAttack… all day long!!
And earn your RepAttack badge-ribbon…

Advanced ASH Analytics: ASHmasters
Kyle Hailey – Delphix

Community Keynote – Dominic Giles

Oracle BI Cloud Service – Moving Your Complete BI Platform to the Cloud
Mark Rittman – Rittman Mead

Infiniband for Engineered Systems
Klaas-Jan Jongsma – VX Company

Oracle Database In-Memory Option – Under the Hood
Maria Colgan – Oracle

Do an Oracle Data Guard Switchover without Your Applications Even Knowing
Marc Fielding – Pythian

Using Oracle NoSQL to Prioritise High Value Customers
James Anthony – RedStack tech

WEDNESDAY

HA for Single Instance Databases without Breaking the Bank
Niall Litchfield – Markit

Database Password Security
Pete Finnigan – PeteFinnigan.com

Connecting Oracle & Hadoop
Tanel Poder – PoderC LLC

Enterprise Use Cases for Internet of Things
Lonneke Dikmans – eProseed
Co-presenter(s): Luc Bors – eProseed

Bad Boys of On-line Replication – Changing Everything
Bjoern Rost – portrix Systems GmbH
Co-presenter(s): Jan Karremans – JK-Consult

RMAN 12c Live: It’s All About Recovery, Recovery, Recovery
René Antúnez – Pythian

Hopefully this will point you to some interesting sessions!

Synology backup with CrashPlan 4.3.0

I recently upgraded to CrashPlan 4.3.0 which I use to backup my Synology to a remote location.

On Synology, you can only use CrashPlan in a headless manner, so I am running “the head”, the client, from my MacBook.
After the update to CrashPlan 4.3.0, I was unable to connect to the engine running on my Synology. That is a pain, as it means I can no longer control the CrashPlan setup, which I needed to do in order to make some setup changes.
I thought I would write it down, as the fix is a combination of two pieces of forum information with a small alteration.

Here’s how I fixed it (I took the rigorous way, as I feel a clean start is the best start, and CrashPlan keeps all your settings with your account anyway):
1) remove CrashPlan from Synology (using the package manager)
2) remove CrashPlan from my MacBook
3) install CrashPlan on Synology (using the package manager)
4) install CrashPlan on my MacBook from the CrashPlan website
5) change the client ui.properties to include serviceHost=<your NAS name / IP>
6) change .ui_info on the Synology NAS (and this was the missing bit):

Synology (server) side of things:
– Edit my.service.xml; mine was located in /volume1/@appstore/CrashPlan/conf/my.service.xml. Change <serviceHost>localhost</serviceHost> to <serviceHost>0.0.0.0</serviceHost>. Please keep the default port <servicePort>4243</servicePort>.
– Get the server user id information (check your path…). You could use the command cat /Library/Application\ Support/CrashPlan/.ui_info ; echo
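Put together, the server-side part looks roughly like this (the location of the server-side .ui_info on a Synology is an assumption here, hence the remark above about checking your path):

# open the service to remote clients, keeping the default port 4243
sed -i 's#<serviceHost>localhost</serviceHost>#<serviceHost>0.0.0.0</serviceHost>#' /volume1/@appstore/CrashPlan/conf/my.service.xml
# print the token the client will have to present (path is an assumption)
cat /volume1/@appstore/CrashPlan/conf/.ui_info ; echo
# restart the CrashPlan package afterwards so the change takes effect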

MacBook (client) side of things:
– Make a backup of the client .ui_info file, just in case: sudo cp /Library/Application\ Support/CrashPlan/.ui_info /Library/Application\ Support/CrashPlan/.ui_info.backup
– Substitute the original client .ui_info content with the .ui_info content coming from the server: sudo vi /Library/Application\ Support/CrashPlan/.ui_info
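One more client-side bit: step 5 from the list above, the ui.properties change, is not spelled out here. On my MacBook that file lives inside the CrashPlan install (the path below may differ per version) and needs a single line uncommented and set:

# file: /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties
# your NAS name or IP address (192.168.1.10 is just an example):
serviceHost=192.168.1.10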

And, presto, this is what did it for me and my Synology!

Oracle Standard Edition 2, a bright new future

Okay, it is not much more than smoke yet: Ludovico Caldara found MOS note 2027072.1 about the support of Standard Edition 12.1.0.2.0 and blogged and tweeted about it.

Despite Ludovico’s disclaimer, there is nevertheless some smoke… And Twitter quite quickly filled up (at least the corner of it where I take an interest). Dominic Giles stated: “More to come soon!” And Ann Sjökvist urged calmness by saying: “let’s wait for facts!” And of course she is right.

Why then this blogpost?

As one of the founders of the Oracle Standard Edition Round Table (#orclSERT), this interests me. Standard Edition One comprises the most cost-effective software stack around. For a deeper dive into that statement, please visit an orclSERT session at an Oracle user group event near you, or drop me a line.
Obviously, news of this kind has been expected for a long time, as the number of cores per socket kept increasing. We have actually been eagerly awaiting it for a few years, hesitant to speak about it… for obvious reasons 😉

Where there is smoke, there is fire, especially when PMs speak. And as the note states that the release of SE2 is foreseen for Q3 of 2015 (which, coincidentally, is THIS quarter), I would like to prepare myself…

This is what we have to go by for now:

Beginning with the release of Oracle Database 12.1.0.2 Standard Edition 2 (SE2), Standard Edition (SE) and Standard Edition One (SE1) are replaced by Standard Edition 2. SE2 will run on systems with up to 2 sockets and will have the ability to support a two node RAC cluster. 12.1.0.1 was the final edition that we will produce for SE and SE1. Customers running SE or SE1 will need to migrate their licenses to SE2 to be able to upgrade to 12.1.0.2.

1. SE2 will run on systems with up to 2 sockets.
– This is not different from the current SE1 rule;
– This means that a 4 socket SE installation will have to be revised, either migrate up to EE or revamp to 2 sockets;

2. SE2 will have the ability to run a 2 node cluster.
– RAC will become available across the board in the Standard Edition realm.
– How many sockets will a full SE2 cluster be able to support? Two, if you follow the current rules; four, if the licensing turns out to be optimal!

And it is always good to speculate about price… And mind you, this is smoke! The best educated guess so far is three quarters of the way between SE1 and SE, which could (hopefully) bring the price to roughly 10k euro per socket, but… who knows? Perhaps, as Ludovico also stated, socket licensing could become history altogether?

The great news is that Oracle Standard Edition will remain available as the alternative to Enterprise Edition installations. For more information we will just have to hold our breath a little longer.
But, be assured, during the next session of #orclSERT we will be able to tell you (much) more!

Meanwhile I will keep preparing (and talking about it), for this change will have some impact yet…