Tag Archives: Dbvisit Replicate

Birth of a user group conference…


Or, how post-conference blues hit.

Seeding

You don’t actually get to witness these kinds of things too often. Yes, there was the birth of the POUG conference, the passionate work of Kamil Stawiarski and the people he gathered around him. He did an awesome job and pulled it off.
Why then is this so special? Well, first of all, because it is my native conference. People who have contributed for many years worked closely together with new members to create something new. I think it is kinda special.
Richard Olrichs, Ise Douwes, Luc Bors and Bart van de Laar formed the team that pulled it off! Kudos to them.

Work already started at the end of 2016… As soon as word got out that this conference was being organized, there was a small Twitter bombardment to recruit as many of the interesting international speakers as possible to come and join us. This helped create a fabulous agenda, covering 81 (!!) sessions and 3 keynotes over 2 days in the spectacular setting of “De Rijtuigenloods” in Amersfoort.

Importance for NL

We have (finally) done it! The Netherlands has experienced its very first Full Stack Oracle Conference ever! I have said this many times before, and I will probably say it many times again: this is so very important for the spread of knowledge, the exchange of experiences and the cross-pollination between countries!
We have never done this before… We have APEX World, which is, of course, super important! And we have SIGs, which are very important for people working within a specialization. All good, all very important! But our business is way too specialized already. If you never take the time to look across your boundaries at what your colleague is doing (for which you don’t have time on a day-to-day basis), you’ll get isolated and miss out on possibly great ideas, changes and inspiration. For this alone, events like these are so important. For a country / region (as we span the Benelux) that is so active in IT, it is a nuisance to have to travel to either the UK or Germany to experience a conference like this.

It is _that_ important…

One personal thing… nl.OUG (the brand-new name for OGh, which to me also symbolizes a new start) focuses on talks about (end-)user experience, in effect partnering with end-users who come to share project briefs… Good in itself, but not what I would applaud as the main focus. These conferences – to me – are about learning, and the best learning comes from professionals sharing best practices or talking about the technical implementation of techniques. The end-user stories are obviously always very welcome, but they should not be the main focus…

My personal experiences

— International scene

As an active member of the Oracle community, I tend to know a number of people, spread out across the globe. One of the joys of a general conference is that many of these people also participate, which leads to many happy encounters. From Oren Nakdimon from Israel to Sandra Flores from Mexico, Tim Hall, a.k.a. Oracle-Base, Maria Colgan & Brendan Tierney and, therefore, with Chris Saxon, two-thirds of the Ask TOM team!! And many more famous speakers from home and abroad. It was a very special feeling to meet all these beautiful people on my home turf!

— Followed sessions

I didn’t get to follow many sessions, partly because of the many people I met and wanted to catch up with and partly because of, well, other responsibilities…
And perhaps a bit because of the fact that the ratio between serial sessions and parallel sessions was a bit off.
I did get to see:
The keynote by Maria Colgan highlighting the many things you can do with an Oracle database
Investigating the performance of a statement via the SQL Monitor report by Toon Koppelaars, which is always insightful!
Moving Oracle data in real-time – The 3 fundamental principles of Oracle replication by Jakub Sjeba from Dbvisit, which proved to be an excellent basis for my own session, later that day
Blockchain on the Oracle Cloud, the next big thing by Robert van Mölken, who helped me understand the technical side of the Blockchain technology.
It’s a wrap by Lucas Jellema, who did a great job at really zooming out and looking at the bigger picture.

— My sessions

I had the lucky opportunity to present no fewer than two sessions in Amersfoort.
Migrating your Oracle database with almost zero downtime
and
Comparing PostgreSQL to Oracle

Both sessions were well attended and interactive. I enjoyed them very much and, judging from the reactions and interactions, I guess the attendees did too. Thank you for attending!!
Obviously I am happy to see the uptake of PostgreSQL and EDB Postgres in the Benelux. As the saying goes, “horses for courses”: Oracle has its playing field, but so does PostgreSQL, and probably a bigger one than it has today 🙂

And now, the future…

This was 2017, this was the kick-off, the very first one.
With the buzz and with the post-conference blues…

It is now time to look to 2018 and start preparing: gathering lessons learned, taking stock of feedback and making plans.
Whatever the outcome, there can only be one plan! Whether it is “nl.OUGTech18” or “Tech Experience 2018”, we need to make sure the message reaches further and wider.
With over 250 attendees for a first event, we aim for over 500 for the next one. There are more than enough potential participants in our region to pull this off.
The basic structure is there, the first success is there, let’s do this!!

See you all next year!!
(or hopefully sooner)


Riga Dev Days 2017, new experiences in many ways.

Riga Dev Days 2017

General

It has been a while since my last blog-post.
One of the reasons is my shift from closed to open source software, databases more specifically. More on that in a later blog-post.

The reason for mentioning this already is the strange hybrid (what a popular word these days) situation that I am in at the moment.
Thanks to the super enthusiastic, flexible and tenacious organization team of the Riga Dev Days, I was able to participate.
Having happily boarded the Air Baltic flight, I was on my way to Riga!!

Being new at the broader conference scene, I enjoyed being at a mixed source developer conference. Besides the usual suspects – some of which are my best friends – I got to meet many interesting new people.
One of the key phrases of the day was: “the more you learn, the more you realize you know nothing – Jon Snow…” and it’s true! You never stop to think about it, but the wealth of subjects is just tremendous, and the combined knowledge at events like these is downright “Yuge, it’s awesome, tremendous!”

Day one

With a day like this, time flies. Between sessions (and during sessions) there are discussions, a bit of work and catching up to do.
Still I managed to catch a few sessions, like the one from Michael Hüttermann, who made a clear and well-rounded case regarding CI/CD in a DevOps world. A nice insight into the effort that goes into what’s behind the proverbial “push of a button”.
Another example was the one by Marcos Placona about the many (and very basic) things that you have to keep in mind when building apps. There is no silver bullet, and the best you can achieve is to discourage the hacker so much that they move on. Much like securing your house, so to speak.

The day ended in the medieval basements of Riga, where we had some really good medieval food. Life is good…, well…, it has its moments!

Day two

The keynote address by Edson Yanaga, which kicked off day two of the Riga Dev Days, was quite interesting.
Shortening development and deployment cycles and shrinking feature release sets actually helps improve software and deployment quality by creating faster and more accurate feedback loops. Looking at these concepts this way, buzzwords like DevOps and Agile actually take on concrete shape. One of the lessons, though, is that doing things this way does not eliminate work or automagically solve various issues for you! It will help in getting predictability and continuity into your software development processes.
A nice, eye-opening closing remark was… “no, I don’t pay you to make something work on your computer, I pay you to make something work on my computer(s)!!”

Another talk I was able to attend was about Blockchains. Something I knew nothing about and was actually quite interested in. Nick Zeeb took us through a very lively and very animated tour of what a Blockchain actually is and what the awesome potential of this technology can be. I was impressed.

With this, the second day drew to an end and, with that, my turn “in the pit” arrived. As this event is held in a movie theater, every room had a sloped, theater-style seating area, which was often packed with enthusiastic participants. I had the opportunity to share my thoughts on the comparison between PostgreSQL and Oracle.
The session was very well attended, with a lot of questions regarding the possibilities of using these other technologies at scales that were not really considered before. You can find a recording of the actual presentation here as soon as it becomes available.

Riga Dev Days was a good conference. I would recommend everyone to either attend or submit an abstract for their event in 2018!!

#DOAG2016, definitely a crazy week.

#DOAG2016, the largest Oracle Community gathering in Europe. Taking place in Nuremberg, at the Nuremberg Convention Center NCC, one of the more impressive places to hold such a conference, towering 4 stories high, with a big central atrium!!
It is a huge effort to get all of this together!

In this blog-post I want to highlight some of the crazy things I experienced this week… And… I did try to follow my own schedule, but I wasn’t overly successful.

Young talent

One of the things that became quite clear this week is that we have a lot of young talent out there, eager to learn and share experiences. It is not just the #NextGen “movement” of DOAG, of which Carolin Hagemann made me aware, but simply the young people at the conference itself.

Discussing “Young PL/SQL” at the unconference session made us all aware that our part of the IT trade is not very sexy or popular with the youngsters, all despite what was mentioned above. In universities we teach SQL, but we don’t teach how to create real-life business applications leveraging the power of the one language that keeps SQL close to the data it feasts on: PL/SQL. But, more on that below (Thick Database Paradigm).
To promote PL/SQL, basically two ground requirements were defined:

  1. Create a free ‘PDB as a Service’ for schools;
  2. Inspire teachers to talk about data centric computing

By finding somebody to own this quest regionally or globally, it should be possible to make young professionals as familiar with using PL/SQL for creating performant and business-ready applications as they were with using Microsoft Excel to do their accounting “back in the day”.

ACE program

“There is a disturbance in the force!”

For everybody not acquainted with the Oracle ACE Program by the Oracle Technology Network… You should be!! Please read up on it, as it is an incredibly cool initiative.

The disturbance, you ask?
Well, to retain your “status”, Oracle expects you to do “stuff”, and this “stuff” is then evaluated on a yearly basis. Basically the initiative, the disturbance, is to get some transparency into “the stuff”. And, as always, everybody wants change, but few are actually good at “change”. There are ripples and things that change, but in the end everything will be fine, unless, obviously, it will not be fine.

Talks

I was honored to (co-)host two talks at #DOAG2016:

Bad Boys of Replication – Changing everything…
With Oracle ACED and good friend Björn Rost, about an intense migration project we did some time ago. We were even offered the chance to host our talk in Tokio, the biggest hall at DOAG!

Saving lives at sea at an industrial scale using Oracle Cloud Technology
An insightful (at least I like to think so) talk with my colleague Oliver Limberg about the rapid development of a global portal for the maritime logistics sector.

I had a blast, and I hope you did too!

Community spirit

Oracle User Group conferences are about sharing and are about fun. Mr. Martin Widlake wrote a good post about that.

Apart from all the “more formal” things that happened, there were quite a few extracurricular activities, mostly involving an Irish Pub or a restaurant.

This all may sound quite fun and exciting, and, yes, it is; it is also the talk you get from your co-workers: “Oh, hey, you are going to have fun and party all week!” Of course it is not a drag or a bore, but it has a very profound function!
Whenever you run into trouble, these are the exact same people who are not only able, but probably also inclined, to help you out, as you would help them out, as friends do. In the end they, you, your boss and your clients benefit. This is not to be underestimated.

The extra, special bit that DOAG offers is the so-called “unconference session”.
Not scheduled, no slides, nothing official, just getting together and discussing subjects of interest. Our “Young PL/SQL” was one of these “unconference sessions”, and it turned out to be a great (and valuable) success!

Meeting people

Just to name a few heroes, of long standing and of times yet to come, at #DOAG2016:

Dietmar Neugebauer
Frank Dernoncourt
Joel Kallman
Johannes Ahrends
Kamil Stawiarski
Laurent Leturgez
Maja Veselica
Marcel Hofstetter
Piet de Visser
Sabine Heimsath
Stefan Kinnen
Stew Ashton
Uwe Hesse
Zoran Pavlovic
And all the ones I forgot to mention here!!

Thick Database Paradigm

Nothing new in IT…

Well, no.

The Thick Database Paradigm (as opposed to the “No PL/SQL Paradigm”) is nothing new. We have actually all been doing this since the eighties. Program your business rules, your constraints, everything that makes sure that your data is all that you want it to be, close to that data.
There are so many reasons that speak in favor of this approach that it is nearly overwhelming and deserves at least a book in itself. But let me make a small attempt at highlighting a few here (a minimal sketch follows the list):

  • Spare yourself network bandwidth, by not sending data all over your network to be processed
  • Safeguard your data inside the (Oracle) database, so it can be protected by all that has been invented to do so
  • Transact data where it lives and combine and aggregate it there; you will be amazed by the efficiency
  • Remind yourself why you used to think “business logic in the middle tier” was a good idea
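To make this a little more tangible, here is a minimal sketch of what “close to the data” can look like. The customers table, the orders_seq sequence and the credit-limit rule are hypothetical, purely for illustration:

-- Hypothetical business rule: an order may never exceed the customer's
-- credit limit. The rule lives inside the database, right next to the data.
create table orders
       (
       order_id     number        primary key
     , customer_id  number        not null
     , order_total  number(10,2)  not null check (order_total >= 0)
       );

create or replace procedure place_order
       (
       p_customer_id in orders.customer_id%type
     , p_order_total in orders.order_total%type
       )
as
  l_credit_limit number;
begin
  -- Lock the customer row, so concurrent orders for one customer are serialized.
  select credit_limit
    into l_credit_limit
    from customers
   where customer_id = p_customer_id
     for update;

  if p_order_total > l_credit_limit then
    raise_application_error(-20001, 'Order exceeds credit limit');
  end if;

  insert into orders (order_id, customer_id, order_total)
  values (orders_seq.nextval, p_customer_id, p_order_total);
end place_order;
/

Every client – an APEX page, a middle tier, a batch job – now calls place_order and simply cannot bypass the rule, and the data never has to leave the database to be validated.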

If you leave possible religious beliefs aside, there is no other conclusion possible than that the reinvention of “Thick Database” is the (re)discovery of 2016, right from the time when IT still made sense.

Yes, there are cases where an “Enterprise Service Bus” makes sense, but, as with every technology within IT, it has a very specific area where it actually adds value or even makes sense – at best a lot smaller than all the places where it is currently used!
So as not to get carried away in this joyful blog-post, I will leave this topic at that.

The end

I hope to see you at the next Oracle User Group conference, somewhere… Please watch for the asterisks on this page for the conferences that I will attend.

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
It means that the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were still setting up the target environment. The reason for that was that the setup (exporting the source, copying the dump to the target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.
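To get a feel for whether this suspicion holds, you can look at how much redo the source database actually generates per hour. A minimal sketch (plain V$ARCHIVED_LOG, nothing Dbvisit-specific):

-- Rough estimate of redo generated per hour, in MB.
select trunc(completion_time, 'HH24')                as log_hour
     , round(sum(blocks * block_size) / 1024 / 1024) as redo_mb
  from v$archived_log
 where dest_id = 1
 group by trunc(completion_time, 'HH24')
 order by log_hour
;

If MINE gets through less than this per wall-clock hour, the lag only grows, which is exactly what we were seeing.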

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I say we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total, we had 57:05 hrs. to try to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in the setup wizard when setting up the replication.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on both the source and the target server!

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE was actually running, and it was from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again and create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc.

By comparing the two files, I was able to note the differences and thereby determine the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that the port numbers are correct and that all the individual steps are taken in the right order. This is the overview.

With these changes in hand, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh into this directory.
Rename cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have since verified and accepted this solution as a supported action.

Hope this helps.

My picks, no, Agenda… for UKOUG_Tech15

I went over the agenda for UKOUG_Tech15 and took my picks & suggestions.
Then I thought, why not share these…

MONDAY

The Oracle Database In-Memory Option: Challenges & Possibilities
Christian Antognini – Trivadis AG

Standard Edition Something for the Enterprise or the Cloud?
Ann Sjökvist – SE – JUST LOVE IT

All about Table Locks: DML, DDL, Foreign Key, Online Operations,…
Franck Pachot – DBi Services

Silent but Deadly : SE Deserves Your Attention
Philippe Fierens – FCP
Co-presenter(s): Jan Karremans – JK-Consult (Having a link here would be silly, right)

Oracle SE – RAC, HA and Standby are Still Available. Even Cloud!
Chris Lawless – Dbvisit

SE DBA’s Life a Bed of Roses?
Ann Sjökvist – SE – JUST LOVE IT

Oracle Standard Edition Round Table
Joel Goodman – Oracle
Co-presenter(s): Ann Sjokvist, Philippe Fierens, Jan Karremans

TUESDAY

Watch out for #RepAttack… all day long!!
And earn your RepAttack badge-ribbon…

Advanced ASH Analytics: ASHmasters
Kyle Hailey – Delphix

Community Keynote – Dominic Giles

Oracle BI Cloud Service – Moving Your Complete BI Platform to the Cloud
Mark Rittman – Rittman Mead

Infiniband for Engineered Systems
Klaas-Jan Jongsma – VX Company

Oracle Database In-Memory Option – Under the Hood
Maria Colgan – Oracle

Do an Oracle Data Guard Switchover without Your Applications Even Knowing
Marc Fielding – Pythian

Using Oracle NoSQL to Prioritise High Value Customers
James Anthony – RedStack tech

WEDNESDAY

HA for Single Instance Databases without Breaking the Bank
Niall Litchfield – Markit

Database Password Security
Pete Finnigan – PeteFinnigan.com

Connecting Oracle & Hadoop
Tanel Poder – PoderC LLC

Enterprise Use Cases for Internet of Things
Lonneke Dikmans – eProseed
Co-presenter(s): Luc Bors – eProseed

Bad Boys of On-line Replication – Changing Everything
Bjoern Rost – portrix Systems GmbH
Co-presenter(s): Jan Karremans – JK-Consult

RMAN 12c Live : It’s All About Recovery,Recovery,Recovery
René Antúnez – Pythian

Hopefully this will point you to some interesting sessions!

Register redo-log manually with Dbvisit Replicate

For those of you who haven’t been working with on-line data replication; in short, it is a way to copy data from a source database to a target database and do this on-line (both databases are active) and do it near-real-time.
This means that when you enter data in your source database, you can immediately query it from your target database. This makes on-line data replication ideal for numerous tasks, like moving and / or upgrading your database while it is being used, with almost no downtime at all.

This tale is of an actual project that I conducted. I used Dbvisit Replicate as my tool of choice.

Dbvisit Replicate can use a so-called FETCHER process to act as the “long arm” of the MINE process. Mining extracts the information from the redo-log files, but, in specific situations, this can be too much of an overhead for the source database server. By moving MINE to a proxy server, this overhead can be significantly reduced.

In some cases it can be useful to manually transfer redo-log files to the mining stage directory of Dbvisit.
I came across this requirement when catching up on a lot of redo from a RAC database. In this case, the RAC cluster creates two streams of redo. When starting the replication processes, the first thread is transferred by FETCHER from the source server to the proxy before the second thread is transferred. This means mining will pause until the second thread successfully delivers its first redo-log file. The redo-log information from the second stream is necessary to create consistent and chronologically ordered SQL statements for the target database. In effect, the SCNs from the redo-log information of the first stream need to line up with the SCNs of the redo-log information of the second stream.

In this case, this meant having to wait a day or more before mining could start. This is why I decided to manually copy a number of redo-log files from the source server to the proxy server, where the MINE process is running.
After the copy, the files need to be registered in the dbvrep repository. Without this information, the MINE process has no knowledge of the files that are present or of what their contents are.

The update is an easy insert statement, but it should be handled with care, as this needs to be quite precise and it needs a bit of specific information about the redo-log files being added.
You can use the following insert statement to register the files:

insert into dbvrp.dbrsmine_redo_log_history
       (
       ddc_id
     , mine_process_name
     , sequence
     , thread
     , resetlogs_id
     , first_scn
     , next_scn
     , online_name
     , arch_name
     , read_count
     , from_fetcher
     , last_mine_start
     , last_mine_end
     , create_date
     , last_change_date
       )
values
       (
       1
     , 'MINE'
     , 128779 -- sequence number of the copied file;
     , 2 -- assuming you are updating this thread.
     , 804864915 -- the reset-logs id from the redo-log file
     , 199910296688 -- the first scn from the redo-log file
     , 199911476897 -- the next scn from the redo-log file
     , null
     , '/u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/thread_2_seq_128719.1485.804864915'
       -- full path and name of the file
     , 0
     , 'Y'
     , null
     , null
     , sysdate
     , sysdate
       )
;

And you can get the information you need about the files here:

select lh.sequence#
     , di.resetlogs_id
     , lh.first_change#
     , lh.next_change#
  from v$log_history lh
 inner join v$database_incarnation di
 using (resetlogs_change#)
 where sequence# = 128779
;

After registering the first file for the second thread, in the Replicate-console, you can watch the MINE process kick off. This process will then again halt after the first file of the second stream is processed in parallel with the first file of the first stream.


I kept adding files until the FETCHER process was able to take over; alternatively, you could do this until your test case or PoC is over.

Okay, and now my database server crashed…

RTO/RPO, who has ever heard of that! That was Star Wars, right?
Storing data and never having to go without or losing any… Yes, that’s more like it.

Server Crashed

Okay, and these two have everything to do with each other!

Talking about these two fancy IT abbreviations, I have raised many eyebrows and helped secure businesses!

What is it:
RTO: Recovery Time Objective, or rather, how long should it take before your database is up-and-running again!
RPO: Recovery Point Objective. How much data can you stand to lose?

It is customary to put real amounts of time on both of these parameters. This is one of those true points where IT ‘meets’ business, one of those do-or-die SLA parameters.
How long before you can start working again after something has gone somewhat horribly wrong? Depending on the business (and for the sake of argument), you will get something like: “Oh well, if we are back in business in, say, an hour, I guess we’ll be fine.” Okay, so we have RTO = 1 hr.
And how much data can you afford to lose? “Losing data, what do you mean?” Well, let’s say you have been on the phone and in the field harvesting order data and putting this in the database… how much of this information can be reproduced when your environment fails? We’ll go with two scenarios. We will presume “Oh no, NOTHING!” and “Hmmm, well, 10 minutes, if needs be!”, making respectively RPO = 0 min. and RPO = 10 min.

  • RTO = 1 hour
  • RPO = 0 minutes or 10 minutes.

Let us investigate what this means, assuming we have a functional backup running every night and that our drama happens at 15:45 on a working day.

What do we have when we do nothing?
After establishing we have a system crash at hand, we need to start working immediately to rebuild something, but do we have something to build upon?
Do we have hardware? And does it somewhat meet specs? Can we run our OS (version) on it? Do we have OS media to install with? Do we have Oracle media to install with? Can we get network, and so on…
And if we have all this, do we have enough expertise to get it installed?
Well, I guess it’s clear… We need to invest big-time! A few hours getting all the facts straight and getting hardware, a few hours to install and configure the OS, a few more for Oracle, getting it to resemble the former production environment and then restoring the backup!
RTO = starting at 8 hours.
Looking at our RPO? Well, okay, that’s easy! We back up at midnight (0:00) and we crash at 15:45. So we will have lost 15 hours and three quarters.
RPO = 15:45 hours.
Acceptable? No, not really!

It’s clear we have to do something.
The first step is to reduce the RTO; we need to be able to continue work faster.
We can do this by making sure we have a second server standing by in a different location. Have it installed, have it configured and ready to jump into action. You could call this a Standby Server.
But even now there is no guarantee we make our target, since restoring a backup and getting the database up and running could still easily take over 1 hour when dealing with red tape and decision levels. To hit the home run we need to add one more feature: not only a Standby Server, but also a Standby Database. A database that can be “opened” or “activated” in mere minutes.

  • Are you running Enterprise Edition Database then you can use Oracle Data Guard, included in your database license.
  • Are you running Standard Edition Database then you can get the Smart Alternative from Dbvisit.

With Standby Database in place:
RTO = 5 minutes!!
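If you take the Enterprise Edition / Data Guard route from the first bullet above, a quick sanity check on the standby itself tells you whether it is actually keeping up (a minimal sketch):

-- On a Data Guard standby: how far behind are redo transport and apply?
select name
     , value
     , time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag')
;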

Now we need to tackle RPO!
Or… do we still?
RPO = 10 minutes is actually already tackled by the Standby Database implementation.
Because of the characteristics of Standby Database, we do not only have an RTO of mere minutes, we also have an RPO of a configurable duration.
Data is transferred to the Standby Database environment by means of archived redo-log files. This mechanism can be influenced by forcing log switches: if you do this at small enough intervals (less than our target of 10 minutes), you make sure that the age of the data in the Standby Database meets the target “Recovery Point Objective”!
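One way to bound that interval on the primary database, without scripting manual log switches, is the ARCHIVE_LAG_TARGET parameter, which forces a log switch after at most the given number of seconds. A sketch, assuming our 10-minute target:

-- Force a redo-log switch (and archive) at least every 600 seconds,
-- so the standby never lags more than the 10-minute RPO target.
alter system set archive_lag_target = 600 scope = both;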
RPO = 0 minutes
Well, okay, this is something else. And if we think about it a little, it’s something completely different!
A Recovery Point Objective, the amount of data we can stand to lose, of 0 (nothing!) actually means we have to create a Standby Database setup which is kept up to date with the primary environment. This kind of Standby Database environment allows you to switch to the second environment within seconds and continue your business operation without delay!

And, with your Active-Active Standby Database solutions in place:
RPO = 0 minutes!

So, now you know about RTO/RPO to secure your data, and you know this guy is something else entirely.

r2-d2

Increasing the reach of your SE database license

Imagine the following situation…

Since a few years your business has been investing in centralizing valuable business information. After some research in the market you have found the Oracle database to be the best fit for your requirements.
Using the free Oracle Application Express (APEX) framework, which helped you rapidly develop the web applications needed to support both internal and external users, was an added bonus. By making this installation available on the Oracle Standard Edition One database, you have created this solution at the lowest possible investment!

As many great projects go, the use and the number of APEX applications keep growing. With the addition of ready-to-use applications to inspire you and many cool plug-ins that keep increasing the usability and integration possibilities, you get caught up in the data-growth dogma!
With an ever-increasing user population and ever-expanding data reporting for ever-faster business reporting, your initial system is starting to fail, showing ever more frequent performance lags or system unavailability. These problems form a risk for your business, a risk you need to eliminate as soon as possible!
The standard advice here would be to upgrade your environment: a bigger machine and an Enterprise Edition database. This is what your investment would then look like…

  • Medium Oracle Sun Server X2-4 with 4 x 10 core CPU’s at € 42,500
  • (40 cores x 0.5 core-factor **) 20 Oracle Database Enterprise Edition licenses at € 914,800

To avoid rendering your application infrastructure worthless through the required investment, a more reasonable step would be to migrate to Oracle Database Standard Edition.

  • Medium Oracle Sun Server X2-4 with 4 x 10 core CPU’s at € 42,500
  • 4 Oracle Database Standard Edition licenses at € 67,400

This still requires a total investment of more than a hundred thousand euros and leaves you with the old server and licenses to be decommissioned.

In many implementations, not data entry but data mining or information aggregation are the costly processes, and this will probably be true in this situation too. With a little investigation it is possible to separate a number of functions that only query data and do not necessarily modify it. Especially in this situation, you can also increase your application performance by moving these specific processes to a new environment.

But… how…

The information in the new environment needs to be real-time consistent with the “production” or primary environment. Here we introduce a real-time data replication solution like Dbvisit Replicate which will create just this real-time consistent query environment for you! This makes for the following investment:

  • Medium Oracle Sun Server X4-2 with 2 x 8 core CPU’s at € 19,500
  • 2 Oracle Database Standard Edition One licenses at € 11,200
  • 4 Dbvisit Replicate XTD at € 16,180

With this installation you add another € 50 k. of licensing instead of € 100 k. with the Standard Edition migration. With this choice, you separate your time-critical data-entry process from the query environment, making sure a misfired query will not influence the availability of your data-entry environment, which is a cool extra advantage!

* All prices are based on list-prices, excluding VAT and including 1 year of support.
** Based on the Oracle Processor Core Factor Table.

Cloud Database Offers On-premise Advantages

These are times when there are technologies abundantly available to help you make the very best of the data you gather from your business processes.

Increasing numbers of businesses choose the option to host their production database environment in one of the many cloud forms that are available these days. This example of a smart alternative discusses an additional service you could implement or request when you are dealing with cloud based databases.

In many organizations there is a BI team responsible for developing company-specific KPIs or composing competitively strategic information based on the data that is gathered during day-to-day business. There often are key management positions that need ad hoc queries against live data. In recent years this intelligence has been recognized as crucial for decision support, giving your organization the biggest competitive advantage possible.

Developing or even running these activities on live data gives the sharpest edge. Doing this on a production environment, nevertheless, is out of the question. Uninterrupted availability and maximum responsiveness for regular activities of these databases are unquestionably important. How can you combine these factors with the proposition of running your database in the cloud while staying smart?

The smart alternatives of Dbvisit enable you to do just this! By leveraging Dbvisit Replicate in a hosted environment you can create one or many local copies of live production data, with specific local database settings, to do precisely what you need, be it running or developing heavy BI queries or having departmental management look at or analyze data as it is recorded. Having (a subset of) the live data uni-directionally delivered from the cloud to your local (desktop) database creates a safe environment to analyze, and enables knowledge workers to do their job with no holds barred!

Retail Innovation with Dbvisit Replicate

In these current times it’s a “dog-eat-dog” world like it has never been. We are fighting over every bit of margin and trying to create value without increasing cost. The following example from a retail background shows an innovative way you could accomplish this, leveraging the #1 database at the lowest thinkable investment combined with the smart alternative from the makers of Dbvisit Software.

In this example we are following a supermarket in their quest to “do business a little differently”; they are thinking of combining ‘shopping audience attitude’ in an interactive way with a time-specific advertisement technique.
The idea is to look at what people are checking out at the cash register and combining this, in real-time, with both amounts of items in stock and possible specific business rules applicable to any discounts to be given.

By gathering information at the counters, information about what kind of groceries people are actually buying at that specific moment, you get an insight into the natural fluctuation of buyer behavior during the day. With this information you can do stuff, like figure out how much of which articles you need in stock, or direct resupplies in the store during the day, getting the maximum revenue out of the employees responsible for making sure everything is plentifully stocked and ready for the taking.
But why not take this one step further, they thought. If we can combine this cash-register information with some kind of continuously changing system of discounting, we can create an element of interactivity. By looking at which articles are sold, combining this with the remaining stock and taking into account whether there already is some regular discount on specific articles, you can make a system where, for instance every 15 minutes, there is another specific item on ‘super special sale’. Delivering this “buy now” message to the actual customer can be done in several ways, either by loading this information into self-checkout bar-code scanners or, for instance, by label printers offering the specific discount label to be scanned at the checkout counter. After a ‘super special sale’ moment elapses, everything changes and a new article is the hot deal of the moment.

Where in a normal setting the POS entries are fed into the regular business database to be processed in a batch-like fashion, you would have no chance of getting this information recycled. This backbone infrastructure cannot be used for the very data-intensive activities we would need for our initiative to take shape. Having delays here would inevitably mean delays and errors at the checkout counters, with queues and unsatisfied customers as the least of your concerns.

Regular data replication solutions would render any business case useless before somebody even had the heart to dream up such an idea. Today, by leveraging Oracle Database Standard Edition or even Standard Edition One, you have an environment which is capable of handling such information loads. Combine this hyper-cost-effective installation with a Smart Alternative like Dbvisit Replicate, replicating data away from your core POS infrastructure and delivering it to a dedicated database for this initiative. There it is combined with stock information, also delivered by Dbvisit Replicate, to create a system that is real-time, robust and does not interfere with regular business. Moreover, it supports the business case by requiring up to 80% less investment.
This example shows that many of the smart ideas created by the business have stranded on an impossible business case. Today, the Smart Alternatives of Dbvisit create the opportunity for you to rethink these ideas and really start realizing them.

Just because it’s possible now!