Category Archives: Database Replication

Adding flexibility to your PostgreSQL clusters – Using EDB Failover Manager


Using PostgreSQL in enterprise environments is becoming more and more popular. And why not? This extremely stable and performant database can compete with ease with almost any enterprise database installation out there today.

Competing technically? Sure!
Competing from a business perspective? Absolutely!!

Making sure your database systems stay up during planned maintenance? Absolutely yes, no discussion about that!
Ensuring your systems stay up during a catastrophic failure of your master? Yes! We need to ensure 99.99999% availability.

Introducing EDB Failover Manager (EFM for short).

A tool that will do precisely this.

  • A graceful switch-over from a master database to a slave database (and back) with just a single command. This gives you the chance to do maintenance on the (previously master) node.
  • Failover from a master node to a slave node (which will be promoted to the new master).
    It is based on PostgreSQL streaming replication, which allows you to attach multiple slave clusters to your master cluster.

The tool ensures access to the cluster of database clusters using a Virtual IP address. It gives you a wealth of ‘hooks’, where you can call scripts that help you reconfigure your surrounding landscape after a switch of masters. Think of re-configuring your load-balancing tools, like Pgpool-II, to make sure read and write queries get routed to the correct cluster nodes.

Well, that sounds good, right!

So, what do you need to do?

  1. Make sure your PostgreSQL streaming replication is running (a quick check is sketched right after this list).
  2. Allocate at least 3 nodes (master/slave/slave or master/slave/witness). You will need three nodes to have a quorum and prevent a split-brain scenario.
  3. Install EFM on those 3 nodes and configure it.
  4. Start, run and play!
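As a quick sanity check for step 1, a query like this on the master should return one row per connected standby (a minimal sketch; the pg_stat_replication view exists in all recent PostgreSQL versions, though column names vary slightly between them):

psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"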

Configuration of EFM is done through efm.properties in the /etc/efm-2.1 directory.
A good tip is to create one copy of this file and distribute it over your EFM cluster nodes; only one (master/slave/slave configuration) or two (master/slave/witness configuration) parameters are node-specific.

  • bind.address: specific to each node, <node IP-address>:9001 (9001 is the cluster communication port, the same for all cluster members)
  • is.witness: set this parameter to true if the node holds no database.

All other parameters are well documented in the efm.properties file.

Enter the <IP-address>:9001 of the membership coordinator (basically the first node of the EFM cluster you start) in the efm.nodes file of all the cluster members.
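Purely as an illustration, the node-specific pieces could look something like this (the addresses are made up; everything else in efm.properties stays identical across the nodes):

# /etc/efm-2.1/efm.properties -- node-specific lines on, say, the second node
# this node's own IP address plus the shared cluster communication port
bind.address=192.168.56.102:9001
# set to true only on a witness node that holds no database
is.witness=false

# /etc/efm-2.1/efm.nodes -- the same on every member:
# the <IP-address>:9001 of the membership coordinator
192.168.56.101:9001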

With this, we are basically good to go!!

systemctl start efm-2.1 and your cluster is running!

The efm-command allows you to manage your cluster. Syntax for the command is: efm <command> <cluster-name> <option>.

  • efm cluster-status efm gives you a nice overview of what is happening. Precede this with the Linux watch command and you can monitor it nicely.
  • efm allow-node efm pg-11 allows node pg-11 to join the EFM cluster
  • efm promote efm -switchover makes the first slave in the standby priority list the new master and converts the previous master to a slave
  • efm set-priority efm pg-10 1 makes node pg-10 the first node in the standby priority list
-bash-4.2$ watch efm cluster-status efm

Every 2.0s: efm cluster-status efm Sun Aug 27 10:02:49 2017

Cluster Status: efm
VIP: 192.168.56.10

Agent Type Address Agent DB Info
 --------------------------------------------------------------
 Master pg-10 UP UP
 Standby pg-11 UP UP
 Standby pg-12 UP UP

Allowed node host list:
 pg-10 pg-11 pg-12

Membership coordinator: pg-10

Standby priority host list:
 pg-11 pg-12

Promote Status:

DB Type Address XLog Loc Info
 --------------------------------------------------------------
 Master pg-10 0/AB0000D0
 Standby pg-11 0/AB0000D0
 Standby pg-12 0/AB0000D0

Standby database(s) in sync with master. It is safe to promote.

For troubleshooting and checking purposes, there are very informative logs in /var/log/efm-2.1.

EFM truly is a very nice tool to add resilience and flexibility to your PostgreSQL database cluster configuration.


EnterpriseDB Summerschool 2017

I have been meaning to write a lot of posts, meanwhile. With the new challenges, and all, it just hasn’t happened.

But!!

Although I don’t tend to do much advertising here, I really do need to share this (unique) opportunity.

I (and my other colleagues across EMEA) really want to meet you and share some of the knowledge on EDB Postgres with you. It is especially targeted at Oracle DBAs!
It will cost you one day and there is even a certificate (which you need to earn during the day) to show “I have walked the walk”.

It starts real soon, there are very few places available, it’s free (!!) and it is – just downright plain cool – technology without hassle…
Bring your laptop, we provide a VM with a lot of tech pre-installed, a little bit like RAC-Attack or #RepAttack!

Visit this link: http://info.enterprisedb.com/EDB-Postgres-Summer-School and sign up!

Looking forward to seeing you personally in either Frankfurt, Munich or Hamburg.

Riga Dev Days 2017, new experiences in many ways.

Riga Dev Days 2017

General

It has been a while since my last blog-post.
One of the reasons is my shift from closed to open source software, databases more specifically. More on that in a later blog-post.

The reason for already mentioning this is this strange hybrid (what a popular word, these days) situation that I am in at the moment.
Thanks to the super enthusiastic, flexible and tenacious organization-team of the Riga Dev Days, I was able to participate.
Having happily boarded the Air Baltic flight, I was on my way to Riga!!

Being new at the broader conference scene, I enjoyed being at a mixed source developer conference. Besides the usual suspects – some of which are my best friends – I got to meet many interesting new people.
One of the key phrases of the day is: “the more you learn, the more you realize you know nothing – Jon Snow…” and it’s true! You never stop to think about it, but the wealth of subjects is just tremendous and the combined knowledge at events like these is downright “Yuge, it’s awesome, tremendous!”

Day one

With a day like this, time flies. Between sessions (and during sessions) there are discussions, a bit of work and catching up to do.
Still I managed to catch a few sessions, like the one from Michael Hüttermann, who made a clear and well-rounded case regarding CI/CD in a DevOps world. A nice insight into the effort that goes into what’s behind the proverbial “push of a button”.
Another example was the one by Marcos Placona about the many (and very basic) things that you have to keep in mind when building apps. There is no silver bullet and the best you can achieve is to discourage the hacker so much that they move on. Much like securing your house, so to speak.

The day ended in the medieval basements of Riga, where we had some really good medieval food. Life is good…, well…, it has its moments!

Day two

The keynote address by Edson Yanaga, which kicked off day two of the Riga Dev Days, was quite interesting.
Shortening development and deployment cycles and shrinking feature release sets actually helps improve software and deployment quality by creating faster and more accurate feedback loops. By looking at these concepts in this way, buzzwords like DevOps and Agile actually get some hands and feet. One of the lessons, though, is that doing things this way does not eliminate work or automagically solve various issues for you! It will help in getting predictability and continuity into your software development processes.
A nice eye-opening remark, finally, was: “no, I don’t pay you to make something work on your computer, I pay you to make something work on my computer(s)!!”

Another talk I was able to attend was around Blockchains. Something I knew nothing of and was actually quite interested in. Nick Zeeb took us through a very lively and very animated tour of what a Blockchain actually is and what the awesome potential of this technology can be. I was impressed.

With this, the second day drew to an end and therewith also my turn “in the pit”. As this event is held in a movie theater, every room had a sloped tribune, which was often packed with enthusiastic participants. I had the opportunity to share my thoughts on the comparison between PostgreSQL and Oracle.
The session was very well attended, with a lot of questions regarding the possibilities of using these other technologies at scales that were not really considered before. You can find a recording of the actual presentation here as soon as it becomes available.

Riga Dev Days was a good conference. I would recommend everyone to either attend or submit an abstract for their event in 2018!!

#OOW16, San Francisco

This year, 2016, is turning out to be an amazing year again, with #OOW16 being once again one of the apices!

Looking back

After the discovery of the Oracle community in 2012, as a result of a very first trip to downtown San Francisco in 2010 for #OOW10, an amazing chain of events was set in motion. This very first introduction to the Oracle world was as ‘a mere participant’ in this awe-inspiring, larger-than-life event.

Over these past few years I have met so many people, made so many new friends around the globe… This all literally changed my work, my life; basically everything changed.

After visiting Oracle Open World for the first time, I had the opportunity to work with Arjen Visser and the team of Dbvisit on building a strong brand for this amazing company in Europe. This also brought me back to San Francisco in 2014.
And boy, things have changed!
Not only was it a coming back, it was a fest of friendship, with so many people to meet, either brand new or with a chance to catch up once again. It was also the first time I had the opportunity to participate & share. With #RepAttack I had the opportunity to share knowledge about logical replication and the many benefits it holds for making the most out of your data.
Did I mention the utterly amazing fact of getting not only accepted by the Oracle Community, but also recognized, together with my dear friend from Belgium, Mr. Philippe Fierens, as a genuine Oracle ACE?

A new step

This edition of Oracle Open World, OOW16, again adds a brand new dimension to the visit to San Francisco!
Not only will it be as the Director of Operations of Portrix Systems, supporting the Annual Swim in the Bay event in cooperation with Oraclenerd Chet Justice, it will be as a selected speaker too. An opportunity I would never have anticipated to be possible.

When Your Database Server Crashes

I will be discussing the various aspects around the protection of data and how you can justify various investments to accomplish this.

Sunday, Sep 18, 10:30 a.m. – 11:15 a.m. | Moscone South—306

I cannot begin to imagine what the impact of this year’s trip will be, but I do know that I am looking forward to meeting many of you again. This year too, the OTN Lounge will be the base camp for the travels through the Open World landscape. Don’t hesitate to stop by and say hi!!

See you in San Francisco for #OOW16

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
It means that the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were setting up the target environment. The reason for that was that the setup (exporting source, copying the dump to target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I say we were under time-pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total, we could spend 57:05 hrs. trying to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in setup wizard when setting up Replication.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on the source and target server!

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE is actually running; it is from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again and create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc.

By comparing the two files, I was able to note the differences and therewith conclude the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that port numbers are correct and that all the individual steps are taken in the right order. This is the overview.
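A quick way to spot those differences is a plain diff of the two DDC files (a sketch, using the directory names mentioned above):

diff ./cst-migration/config/cst-migration-onetime.ddc \
     ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc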

Having these changes, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.
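For example, with all required changes collected in a small change-file (the file name here is made up), a single command inside the Replicate console executes them in one go:

@fetcher-changes.txt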

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh into this directory.
Rename cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.
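A rough sketch of this step, assuming hypothetical hostnames and /home/oracle as the base directory:

# on the server that will run the relocated MINE process
mkdir -p /home/oracle/cst-migration
scp source-host:/home/oracle/cst-migration/cst-migration-MINE.ddc \
    source-host:/home/oracle/cst-migration/cst-migration-run-source-node.sh \
    /home/oracle/cst-migration/
cd /home/oracle/cst-migration
mv cst-migration-run-source-node.sh cst-migration-run-mine-node.sh
# verify the paths inside cst-migration-MINE.ddc, then check connectivity
tnsping repl-source
tnsping repl-target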

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.
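Again as a sketch, on the source node itself:

# on the source node (!)
cd /home/oracle/cst-migration        # hypothetical base directory
mv cst-migration-MINE.ddc cst-migration-FETCHER.ddc
# edit cst-migration-run-source-node.sh so it starts the FETCHER process
# instead of the MINE process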

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have verified and accepted this solution as a supported action.

Hope this helps.

My picks, no, Agenda… for UKOUG_Tech15

I went over the agenda for UKOUG_Tech15 and took my picks & suggestions.
Then I thought, why not share these…

MONDAY

The Oracle Database In-Memory Option: Challenges & Possibilities
Christian Antognini – Trivadis AG

Standard Edition Something for the Enterprise or the Cloud?
Ann Sjökvist – SE – JUST LOVE IT

All about Table Locks: DML, DDL, Foreign Key, Online Operations,…
Franck Pachot – DBi Services

Silent but Deadly : SE Deserves Your Attention
Philippe Fierens – FCP
Co-presenter(s): Jan Karremans – JK-Consult (Having a link here would be silly, right)

Oracle SE – RAC, HA and Standby are Still Available. Even Cloud!
Chris Lawless – Dbvisit

SE DBA’s Life a Bed of Roses?
Ann Sjökvist – SE – JUST LOVE IT

Oracle Standard Edition Round Table
Joel Goodman – Oracle
Co-presenter(s): Ann Sjokvist, Philippe Fierens, Jan Karremans

TUESDAY

Watch out for #RepAttack… all day long!!
And earn your RepAttack badge-ribbon…

Advanced ASH Analytics: ASHmasters
Kyle Hailey – Delphix

Community Keynote – Dominic Giles

Oracle BI Cloud Service – Moving Your Complete BI Platform to the Cloud
Mark Rittman – Rittman Mead

Infiniband for Engineered Systems
Klaas-Jan Jongsma – VX Company

Oracle Database In-Memory Option – Under the Hood
Maria Colgan – Oracle

Do an Oracle Data Guard Switchover without Your Applications Even Knowing
Marc Fielding – Pythian

Using Oracle NoSQL to Prioritise High Value Customers
James Anthony – RedStack tech

WEDNESDAY

HA for Single Instance Databases without Breaking the Bank
Niall Litchfield – Markit

Database Password Security
Pete Finnigan – PeteFinnigan.com

Connecting Oracle & Hadoop
Tanel Poder – PoderC LLC

Enterprise Use Cases for Internet of Things
Lonneke Dikmans – eProseed
Co-presenter(s): Luc Bors – eProseed

Bad Boys of On-line Replication – Changing Everything
Bjoern Rost – portrix Systems GmbH
Co-presenter(s): Jan Karremans – JK-Consult

RMAN 12c Live : It’s All About Recovery,Recovery,Recovery
René Antúnez – Pythian

Hopefully this will point you to some interesting sessions!

Register redo-log manually with Dbvisit Replicate

For those of you who haven’t been working with on-line data replication: in short, it is a way to copy data from a source database to a target database and do this on-line (both databases are active) and near-real-time.
This means that when you enter data in your source database, you can immediately query it from your target database. This makes on-line data replication ideal for numerous tasks, like moving and / or upgrading your database while it is being used, with almost no downtime at all.

This tale is of an actual project that I conducted. I used Dbvisit Replicate as my tool of choice.

Dbvisit Replicate can use a so-called FETCHER process to act as the “long-arm” for the MINE process. Mining extracts the information from the redo-log files, but, in specific situations, this can be too much of an overhead for the source database server. By moving the MINE process to a proxy server, this overhead can be significantly reduced.

In some cases it can be useful to manually transfer redo-log files to the mining stage directory of Dbvisit.
I came across this requirement when catching up a lot of redo from a RAC database. In this case, the RAC cluster creates two streams of redo. When starting the replication processes, the first thread is transferred by FETCHER from the source server to the proxy before the second thread is transferred. This means mining will pause until the second thread successfully delivers its first redo-log file. The redo-log information from the second stream is necessary to create consistent and chronologically ordered SQL statements for the target database. In effect, the SCNs from the redo-log information of the first stream need to line up with the SCNs of the redo-log information of the second stream.

In this case, this meant having to wait a day or more before mining could start. This is why I decided to manually copy a number of redo-log files from the source server to the proxy server, where the MINE process is running.
After the copy, the files need to be registered in the dbvrep repository. Without this information, the MINE process has no knowledge of the files that are present and of what their contents are.
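The copy itself is just a file transfer; a sketch of what that could look like (the source path and hostname are made up, the target directory is the mine-stage directory that is registered as arch_name below):

scp oracle@source-db:/path/to/archived/redo/thread_2_seq_128779.arc \
    /u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/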

The update is an easy insert statement, but it should be handled with care, as this needs to be quite precise and it needs a bit of specific information about the redo-log files being added.
You can use the following insert statement to register the files:

insert into dbvrp.dbrsmine_redo_log_history
       (
       ddc_id
     , mine_process_name
     , sequence
     , thread
     , resetlogs_id
     , first_scn
     , next_scn
     , online_name
     , arch_name
     , read_count
     , from_fetcher
     , last_mine_start
     , last_mine_end
     , create_date
     , last_change_date
       )
values
       (
       1
     , 'MINE'
     , 128779 -- sequence number of the copied file;
     , 2 -- assuming you are updating this thread.
     , 804864915 -- the reset-logs id from the redo-log file
     , 199910296688 -- the first scn from the redo-log file
     , 199911476897 -- the next scn from the redo-log file
     , null
     , '/u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/thread_2_seq_128719.1485.804864915'
       -- full path and name of the file
     , 0
     , 'Y'
     , null
     , null
     , sysdate
     , sysdate
       )
;

And you can get the information you need about the files here:

select lh.sequence#
     , di.resetlogs_id
     , lh.first_change#
     , lh.next_change#
  from v$log_history lh
 inner join v$database_incarnation di
 using (resetlogs_change#)
 where sequence# = 128779
;

After registering the first file for the second thread, in the Replicate-console, you can watch the MINE process kick off. This process will then again halt after the first file of the second stream is processed in parallel with the first file of the first stream.


I kept adding files until the FETCHER process was able to take over; alternatively, you could keep doing this until your test-case or PoC is over.

OUGN15, The “boat conference” revisited

Jan at shipsport
Reflections on OUGN

Sometimes things in life can change quickly! It is only two years ago that I came to Oslo for the first time to join the Scandinavian Oracle crew on a boat trip to Kiel.
At that time I had never actually participated in this kind of experience and I wasn’t into presenting either. Together with my good friend Philippe Fierens I discovered a whole new world back then. You could have read about these experiences in some blogpost, but this was lost in the move to my own site, sorry!

This trip couldn’t have been more different, though! With three presentations accepted, the two days at sea will be a reunion with the friends I have made over the last years, as well as a way to contribute to one of the most tight-knit tech communities I know. And this will be in a scene that I remember vividly from being a newbie… which is somewhat strange, believe me.

After a quick and pleasant flight I touched down in Oslo, flying from Amsterdam with a decent-sized crew of Dutch Oracle enthusiasts, including my good friends Patrick Barel and Alex Nuijten. Waiting in the Oslo airport for Luís Marques, I caught up with Gurcan Orhan, which was a great surprise.
Later that day we found ourselves in the Oslo harbor for the speakers dinner. You can imagine the collective amount of Oracle knowledge packed into that one restaurant!

Enkitec’s Frits Hoogland on Ansible

After a somewhat restless night we arrived, on Thursday morning, at the ship Color Fantasy in the company of Heli Helskyaho, just in time for the keynotes. It was good to see Mark Rittman and James Morle made it on board too, especially as James was up for the delivery of version 2.0 of his vibrant keynote! Next we proceeded to bring our luggage to our cabins and grab a spot of lunch on the exhibition floor down in the belly of the ship. The setup of the exhibition was quite nice and gave a good opportunity to mix and mingle.
The afternoon was spent on sessions, where I visited Frits Hoogland’s Ansible talk, and on preparation for my own session at 18:00. This was the last run of this APEX presentation, as I have retired it after OUGN15. The slides will be archived here.
After finalizing the preparation for the third edition of the Standard Edition Round Table (aka “slide polishing”) with the #orclSERT team, comprising Ann Sjökvist, Philippe and myself, it was time for the soiree and for dinner in the grand restaurant on board. It had been a good first day!

Dinner with the international crew on board the Color Fantasy.
Warm reception at Kiel port.

The second day of OUGN15 started with a multitude of sessions including the third edition of the Oracle Standard Edition Round Table, which was actually quite busy and interactive. We had some good discussions, and that at 09:00, so thank you, everybody.
Of course, as was declared a tradition, Björn Rost was present in the Kiel harbor. With the famous “Basil smash Gin & tonic” and sandwiches we were welcomed on German soil.
My afternoon comprised three sessions, starting with my own, called “Okay, and now my database server crashed…”, which was quite nicely received. Next was Alex Nuijten on 12c new features for developers, topped off by Tim Gorman, who taught us to be CSI people in finding issues in the database.
After an enjoyable evening in the various bars and discotheques of the ship, we wrapped up the official part of the Oracle User Group Norway Vårseminar 2015, thanking the board, and of course especially Øyvind Isene, for their hard work.

If you want to catch up further on the unconference communications surrounding this event, please do check out the Twitter hashtag #OUGN15. This will also include a great set of snapshots and pictures taken along the way…

Oslo, until the next time!

A new form of on-line data protection

In the last few years I have been active with data replication solutions in the Oracle realm, as you may know. This data replication field is one that has many angles, so there is something new to learn every day and sometimes there even are really new possibilities!

Take heed…

The first and most familiar form of data replication is ‘physical data replication’, also known as ‘Standby Database‘.
In this form of replication, both source and target database are binary identical. Changes are propagated by copying the archived redo logfile from the source database to the environment where the standby database lives. Most often this is another server, preferably in another building in another town, far enough away not to be struck by the same havoc.

There are basically 3 ways to accomplish this:

  1. Use Oracle Data Guard (in Enterprise Edition Oracle database)
  2. Use Dbvisit Standby (in all Oracle database Editions)
  3. Write your own scripting (not recommended in any case)

The second and more emerging form of data replication is ‘Logical Data Replication’.
In this form of replication, there is no real relationship between the source and the target database, other than that the target database houses data coming from the source database. They can live on different systems, be of different database versions, run on a different operating system or even be from a different vendor.
Data is harvested from the source database, converted and copied over to the target database / system. On the target system this data is applied, in the native dialect of the target database.

There are a few ways to accomplish this, but basically every vendor uses the same technique; it is more a matter of pricing.

  1. Oracle Golden Gate (expensive, complex)
  2. Dell Shareplex (somewhat expensive)
  3. IBM InfoSphere (complex, expensive)
  4. Dbvisit Replicate (easy, affordable)

So, having discussed this, as this is not new, why this blogpost?

Well…

A Standby database is more or less closed. You can open it occasionally to query some data, but that interrupts the apply-process.
On-line data replication does what it says: you have an active database, where data is continuously added. This way you can, for example, query the same data on two sources to spread load.

The case I mean to discuss is the following:

“I have 10 source databases and I want one target database (ah, presto, on-line data replication) and I want to backup 5 tables from each source to the target database (again, on-line data replication, but wait, backup?) so I can easily copy back specific data to the source (eeeuhm, yes…) whenever a user messes up the source tables (aï…) and I want the target to be updated each day at 23:00 (so… okay!)”

This smells like somewhat of a hybrid approach!

We cannot do regular on-line data replication, for this is aimed at being real-time.
And we cannot leverage a Standby database, since the data needs to be centralized in one database and not 10. Next to that, it would take some administration to open up the standby database in read-only mode, take the copy, and close the database again.

Working with Dbvisit, we came up with “Pause Apply” and “Resume Apply”, which we combine to form “Delayed Apply”.
This delayed apply would neatly answer the question posed.

  • By “delaying” the application of changes to the data, we could make sure the requested tables are only updated from 23:00 on;
  • We can combine the 50 tables (10 databases x 5 tables) in one single target database, since it is a logical approach to the matter;
  • We can easily restore or copy back corrupted data, since both the source and the target database remain continuously open.
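As a rough scheduling sketch (the wrapper scripts are hypothetical and would simply feed the Pause Apply and Resume Apply commands to the Replicate console), the daily 23:00 window could be driven by cron on the machine running the APPLY process:

# apply the day's changes from 23:00 on, then pause again once caught up
# (the 01:00 cut-off is just an example)
0 23 * * * /home/oracle/replicate/resume_apply.sh
0 1 * * * /home/oracle/replicate/pause_apply.sh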

Using Dbvisit Replicate, having this kind of protection for your “logical test-cases” (which is what this company needed the solution for) is really affordable.
It can help in dynamically and quickly resetting specific data-sets or test-cases while remaining much more flexible than creating scripts to reset a specific data-set or test-case! And, of course, there are many more ways to use this neat feature…

DOAG 2014, Nuremberg visited

Traveling to Nuremberg, anticipating three days of Oracle submersion. There are so many speakers heading over there it cannot be anything but successful.
This will be the first conference I will attend after being accredited as Oracle ACE Associate, which, for me, makes it again a little more special.
The first surprise, though, was just that: arriving, by bus, at the boarding location, there was a Bombardier DASH8-Q400 waiting, which turned out to be a turbo-prop aircraft. Okay, I have jumped from a Cessna Caravan twin-engine turbo-prop before, but this was still a first. As I am writing these lines, we’re descending upon Nuremberg.

On the first day of the conference, which started with a beautiful but rainy morning stroll to the conference center, the action started to really kick in from about 12:00 with the first session of my good friend Peter Raganitsch, talking about the 10 worst practices in APEX. A refreshing way of looking at software development by focusing on how to do it wrong!
The day ended with one of the “most pleasantly unorganized sessions” of the conference, where Johannes Ahrends and Philippe Fierens joined me on stage for the Standard Edition Round Table, #DOAG14 edition.

The second day was full of sessions, and I visited Joel Kallman’s “APEX fast=true”, discussing the knowledge needed to do serious application development on APEX, creating #DBADev. And, of course, the sharp presentation of my friend Franck Pachot about interpreting AWR reports!
At 17:00 it was time for my third event, the “Electronic Patients Records system based on Oracle APEX” talk, which had quite a good turnout.
The day ended with a super-cool meet-up with Mia Urman, Lonneke Dikmans AND Bryn Llewellyn… And later on we had a really nice depiction of #DBADev 2.0, involving Joel Kallman, Philippe Fierens, Illoon Ellen and myself.


The third and last day of the conference was spent executing #RepAttack. This session concluded 3 full days of hands-on hacking with cool software and getting a feel of some of the new stuff.


A few of the cool new meetings (which we’ve dubbed the e-people to real-people conversion, IRL) rounded off these days as well.

Thank you, DOAG, for a superb conference. I thoroughly enjoyed it. To all the Oracle aficionados, until next time!!