Category Archives: Database Replication

Oracle Open World 2014


In flight to San Francisco on the 27th of September 2014. Heading out to Oracle Open World for the second time.
Much has changed since my previous visit.

The previous time I came to this biggest of IT events in the world, I came as a spectator, representing an IT company, where my mission was to soak up as much knowledge as I possibly could, submerging myself in the flow of the event.
This time ‘round, I come as a participant, representing another IT company that wants to add to the scene and deliver a smart alternative.
And also personally there is a huge difference! Previously I went alone and was thrilled to find Frits Hoogland at the gate, who was already a familiar face to me back then! Now I am travelling to meet up with many more friends… listening to Metallica on the flight already reminds me that I will meet Gurcan Orhan over there! And in the previous weeks many promises were made for quick meet-ups and catch-ups on the grounds of what we call “Oracle Open World”!

Clock set to Pacific Summertime, good morning world!!
Time has come a long way since my previous trip! Whereas I was bound to the onboard entertainment system a few years back, now I can work, prepare and write this text in flight. Hoping to meet all of you guys out here.
And today, Oracle Open World really kicked off, when we went to the Golden Gate Bridge Run, organized by @thatjeffsmith, where we ran or walked with a great number of Oracle celebrities, ranging from @oraclebase through @helifromfinland and Frits Hoogland to @dbvisit!
After this @ilmarkerm, myself and two lovely ladies from Finland shared a cab to Moscone, where we met up with the RACAttack Ninjas at the OTN Lounge…


It is turning out to be a good day, with the building of the Dbvisit stand, sneaking into the sessions of the Dbvisit speakers and meeting many, many friends!


#RepAttack, it’s all about learning

Everything we do in our daily life is about learning. Especially in IT we are used to continuous learning. Digging through documentation, figuring out how this or that piece of software should work. Downloading, installing, configuring, trying, tweaking, tuning…
Dbvisit Standby

For Dbvisit, it all started with Dbvisit Standby. Physical data replication; but physical data replication is not so hard in the end. Getting it running stably, and making it do exactly what you want it to do, is a manageable task. With its wizard-driven installer, the clear task of keeping two identical databases and a little bit of time, you’ll have this process of shipping archived log files nailed. Getting it stable and reliable is built in, so not much worry there.

Logical data replication, on the other hand, is a whole different ballgame!
For a long time logical data replication was just for bigger companies with intricate information needs. And it is a little more challenging than physical data replication. There are database, schema or table considerations, decisions on what and what not to replicate to where, and making sure you get it stable and reliable in your environment. Checking and following up on changes and doing all kinds of work to make sure you get the best out of your setup.

Nevertheless, logical data replication will help you with:

  • “Zero downtime database migrations”
  • “Report offloading”
  • “Schema consolidation”
  • “Real-time business intelligence” operations

And because these things are about you…

You deserve a “flying headstart”

with Dbvisit Replicate!

To be able to bring you this, we looked at the heroes from the Oracle Technology Network for inspiration. This special group of gurus called the RACAttack Ninjas has been involved in educating and supporting any and all with a setup of Oracle’s Real Application Cluster technology on your laptop.

Inspired by this example, Dbvisit created #RepAttack! A techno-opportunity that will be travelling the world, with its inaugural session at nothing less than Oracle Open World 2014.

#RepAttack is a great opportunity to network with your peers who are just as curious as you are, and to access a fantastic team of warriors who will work with you one-on-one to ensure you are up and running quickly and leaping over any hurdles effortlessly. The session will include a deep dive into core concepts to make sure you return to your organization with an in-depth understanding of how both replication and virtualization really work. Take the time to attend and be that “go-to” person when questions around these concepts come up at work.

Keep an eye out as new details will emerge over the coming days and weeks!
Make sure you check out the Twitter hashtag #RepAttack or just submit your e-mail address below!

#RepAttack sessions by its warriors have been confirmed for:
Oracle Open World 2014 in San Francisco, USA
Deutsche Oracle-Anwendergruppe (DOAG) Jahreskonferenz 2014 in Nürnberg, Germany

And remember!
#RepAttack is about YOU!

Watch the following video of one of my personal heroes, Ronald Rood, playing with logical data replication in Dbvisit Replicate:

TCL, Total Cost of Loss, a new business perspective

‘Total Cost of Loss’ (TCL) was launched at the World Premiere of the Standard Edition Round Table during the OUGF Harmony 2014 annual user conference.

Doing nothing does not mean it costs nothing

Joel J. Goodman, Finland 2014

“TCL.” Abbreviations.com. STANDS4 LLC, 2014. Web. 15 Jun 2014. <http://www.abbreviations.com/term/1519392>.

Total Cost of Loss is the representation of the cost to an organization when data is lost. Experience shows that this is the hardest exercise in business continuity to figure out and the most neglected threat to an organization.

Next to the two best-known terms, RTO and RPO, and the less well-known term RTDA (‘Recovery Time to Data Availability’), TCL is aimed at providing the business with an extra metric for conducting BCP.

To correctly evaluate the investments that have to be made to achieve a sufficient RTO time frame or RPO granularity, there has to be an understanding of the magnitude of the (financial) importance of the underlying (data) system. TCL is aimed at calculating this figure, where the figure is valid per specific data system.

The following components have currently been identified as being part of TCL:

  1. Collection price per granule of data*
  2. Present value per granule of data
  3. Business value per granule of data
  4. Added value in a dataset combination

* a granule of data is the smallest possible set of variables comprising a usable piece of information.

1. Collection price per granule of data:
The amount of effort (time, computing power, etc.) which is required to assemble and record the granule of data in the data-structure.

For example: 1) the time it takes to pick up an item, scan its bar-code with a bar-code scanner and put the item back, or 2) the time it takes to enter somebody’s name and address at admission, inclusive of possible preparation and filing.

2. Present value per granule of data:
The current amount of effort (if possible at all) which is required to reassemble and record the granule of data in the data structure. This takes into account that historical data could be easy to collect at the historic point in time (#1) but would take a disproportionate effort to collect at present.

For example: 1) establishing whether the item was in stock at the given moment, what its bar-code would have read at that time and possibly who scanned it at which location, or 2) finding out which person came to be admitted on that specific date and retracing what data would have been entered at that specific moment and possibly by whom.

3. Business value per granule of data:
The value of the single entity of data for the operational business after the moment of measurement. During the data’s lifetime, the value of a specific granule of data can change. Most often it will become less valuable, making it possible to archive or even cumulate** the data in multi-tier storage solutions, but, when called upon, this specific granule of data could be of vital importance!

For example: 1) knowing how many of a specific item are in stock, or 2) having identified a specific person within the client group.

4. Added value in a dataset combination:
It can very well be, and most probably is, that any granule of data is of key importance to a dataset combination, where several bits of data from different datasets or data systems combine to create information which is vital to a specific action within an organization.

For example: 1) knowing how many of a specific item are in stock to support a JIT-delivery system that keeps a production line running uninterrupted, or 2) delivering the right treatment to a specific person and being able to bill them accordingly.

** Cumulation of data can destroy a recovery path for retrieving any specific granule of data.

Creating a formula to calculate any TCL will be relatively easy.
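Purely as an illustrative sketch (the additive model and the per-granule values below are my own assumptions, not an established formula), it could look something like this:

    # Illustrative sketch only: TCL modelled as the sum of the four components
    # listed above, over every granule of data that is lost.
    def granule_loss(collection_price, present_value, business_value, combination_value):
        """Estimated loss for a single granule of data (all values assumed)."""
        return collection_price + present_value + business_value + combination_value

    def total_cost_of_loss(lost_granules):
        """Sum the estimated loss over all lost granules."""
        return sum(granule_loss(*g) for g in lost_granules)

    # Example: 10,000 lost stock-scan records at assumed per-granule values (EUR).
    print(total_cost_of_loss([(1, 5, 2, 3)] * 10_000))  # -> 110000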

Creating a model to extract, calculate or even guesstimate the values for the different variables of the formula will be the challenge.
A challenge that needs to be met because of the ever-increasing volume of data and the ever-increasing importance of certain realms, like healthcare, public services and transportation, within this data mass.

Please step on board and help define TCL as it could prove to be a critical factor when push comes to shove!

Okay, and now my database server crashed…

RTO/RPO, who has ever heard of that! That was Star Wars, right?
Storing data and never having to go without or losing any… Yes, that’s more like it.

Server Crashed

Okay, and these two have everything to do with each other!

Talking about these two fancy IT abbreviations, I have raised many eyebrows and helped secure businesses!

What is it:
RTO: Recovery Time Objective, or rather, how long should it take before your database is up-and-running again!
RPO: Recovery Point Objective. How much data can you stand to lose?

It is customary to put real amounts of time on both of these parameters. This is one of those true points where IT ‘meets’ business, one of those do-or-die SLA parameters.
How long before you can start working again after something has gone somewhat horribly wrong? Depending on the business (and for the sake of argument), you will get something like: “Oh well, if we are back in business in, say, an hour, I guess we’ll be fine.” Okay, so we have RTO = 1hr.
And, how much data can you afford to lose? “Losing data, what do you mean?” Well, let’s say you have been on the phone and in the field harvesting order data and putting this in the database… how much of this information can be reproduced when your environment fails? We’ll go with two scenarios. We will presume “Oh no, NOTHING!” and “Hmmm, well, 10 minutes, if need be!”, making respectively RPO = 0 min. and RPO = 10 min.

  • RTO = 1 hour
  • RPO = 0 minutes or 10 minutes.

Let us investigate what this means, assuming we have a functional backup running every night and that our drama happens at 15:45 on a working day.

What do we have when we do nothing?
After establishing that we have a system crash at hand, we need to start working immediately to rebuild something. But do we have something to build upon?
Do we have hardware? And does it somewhat meet specs? Can we run our OS (version) on it? Do we have OS media to install with? Do we have Oracle media to install with? Can we get network, and so on…
And if we have this do we have enough expertise to get it installed?
Well, I guess it’s clear… We need to invest big-time! A few hours getting all the facts straight and sourcing hardware, a few hours to install and configure the OS, a few more for Oracle, getting it to resemble the former production environment, and then restoring the backup!
RTO = starting at 8 hours.
Looking at our RPO? Well, okay, that’s easy! We back up at midnight (0:00) and we crash at 15:45. So we will have lost 15 hours and three quarters.
RPO = 15 hours and 45 minutes.
Acceptable? No, not really!
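A quick sanity check on that RPO figure, as a minimal sketch using the nightly backup at 0:00 and the crash at 15:45 assumed above:

    from datetime import datetime

    last_backup = datetime(2014, 9, 29, 0, 0)    # last good nightly backup
    crash_time  = datetime(2014, 9, 29, 15, 45)  # the moment our drama happens

    print(crash_time - last_backup)  # 15:45:00 -> almost 16 hours of lost data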

It’s clear we have to do something.
The first step is to reduce RTO, we need to be able to continue work faster.
We can do this by making sure we have a second server standing by in a different location. Have it installed, have it configured and ready to jump into action. You could call this a Standby Server.
But even now there is no guarantee we make our target, since restoring a backup and getting the database up and running could still easily take over 1 hour when dealing with red tape and decision levels. To hit the home run we need to add one more feature: not only a Standby Server, but also a Standby Database. A database that can be “opened” or “activated” in mere minutes.

  • Are you running Enterprise Edition Database? Then you can use Oracle Data Guard, included in your database license.
  • Are you running Standard Edition Database? Then you can get the Smart Alternative from Dbvisit.

With Standby Database in place:
RTO = 5 minutes!!

Now we need to tackle RPO!
Or… do we still?
RPO = 10 minutes is actually already tackled by the Standby Database implementation.
Because of the characteristics of a Standby Database, we not only have an RTO of mere minutes, we also have an RPO of a configurable duration.
Data is transferred to the Standby Database environment by means of archived redo log files. This mechanism is influenced by the switching of log files: if you switch with small enough intervals (less than our target of 10 minutes), you make sure the age of the data in the Standby Database meets the target “Recovery Point Objective”!
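As a rough sketch of that reasoning (the interval and transfer figures below are assumed examples; on Oracle you would typically bound the log switch interval with the ARCHIVE_LAG_TARGET initialization parameter, expressed in seconds):

    # Rough worst-case data loss for an archived-redo-log based standby (figures assumed).
    log_switch_interval_min = 8   # force a log switch at least every 8 minutes
    ship_and_apply_min      = 1   # assumed time to ship and apply the archived log

    worst_case_loss_min = log_switch_interval_min + ship_and_apply_min
    print(f"worst-case data loss: about {worst_case_loss_min} minutes")  # within the 10-minute RPO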
RPO = 0 minutes
Well, okay, this is something else. And if we think about this a little, it’s something completely different!
Recovery Point Objective, the amount of data we can stand to lose, is 0 (nothing!). This actually means we have to create a Standby Database setup which is kept continuously up to date with the primary environment. This kind of Standby Database environment allows you to switch to this second environment within seconds and continue your business operation without delay!

And, with your Active-Active Standby Database solutions in place:
RPO = 0 minutes!

So, now you know about RTO/RPO to secure your data and know this guy is something else.

r2-d2

Increasing the reach of your SE database license

Imagine the following situation…

For a few years now, your business has been investing in centralizing valuable business information. After some research in the market you have found the Oracle database to be the best fit for your requirements.
Using the free Oracle Application Express (APEX) framework, which helps you rapidly develop the web applications needed to support both internal and external users, was a bonus. By basing this installation on the Oracle Standard Edition One database, you have created this solution at the lowest possible investment!

As with many great projects, the use and the number of APEX applications keep growing. With the addition of ready-to-use applications to inspire you and many cool plug-ins that keep increasing the usability and integration possibilities, you get caught up in the data growth dogma!
With an ever-increasing user population and expanding data reporting for ever faster business insight, your initial system is starting to fail, showing ever more frequent performance lags or system unavailability. These problems form a risk for your business, a risk you need to eliminate as soon as possible!
The standard advice here would be to upgrade your environment: a bigger machine and an Enterprise Edition database. This is what your investment would then look like…

  • Medium Oracle Sun Server X2-4 with 4 x 10 core CPUs at € 42,500
  • 20 Oracle Database Enterprise Edition licenses (40 cores x 0.5 core factor **) at € 914,800

To avoid rendering your application infrastructure worthless through the required investment, a more reasonable step would be to migrate to Oracle Database Standard Edition.

  • Medium Oracle Sun Server X2-4 with 4 x 10 core CPUs at € 42,500
  • 4 Oracle Database Standard Edition licenses at € 67,400

This still requires a total investment of more than a hundred thousand euros and leaves you with the old server and licenses to be decommissioned.

In many implementations it is not data entry but data mining or information aggregation that are the costly processes, and this will probably be true in this situation too. With a little investigation it is possible to separate out a number of functions that only query data and do not necessarily modify it. Especially in this situation you can also increase your application performance by moving these specific processes to a new environment.

But… how…

The information in the new environment needs to be real-time consistent with the “production” or primary environment. Here we introduce a real-time data replication solution like Dbvisit Replicate which will create just this real-time consistent query environment for you! This makes for the following investment:

  • Medium Oracle Sun Server X4-2 with 2 x 8 core CPUs at € 19,500
  • 2 Oracle Database Standard Edition One licenses at € 11,200
  • 4 Dbvisit Replicate XTD at € 16,180

With this installation you add roughly € 50k of investment, instead of the € 100k for the Standard Edition migration. With this choice you separate your time-critical data-entry process from the query environment, making sure a misfired query will not influence the availability of your data-entry environment, which is a cool extra advantage!
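For reference, a minimal sketch of the arithmetic behind that comparison, using only the list prices quoted above:

    # Total investment per option, using the list prices quoted above (EUR, excl. VAT).
    ee_upgrade   = 42_500 + 914_800          # bigger server + 20 EE licenses (40 cores x 0.5 core factor)
    se_migration = 42_500 + 67_400           # bigger server + 4 SE licenses
    se1_offload  = 19_500 + 11_200 + 16_180  # X4-2 server + 2 SE One licenses + Dbvisit Replicate XTD

    for name, total in [("EE upgrade", ee_upgrade),
                        ("SE migration", se_migration),
                        ("SE One + Replicate offload", se1_offload)]:
        print(f"{name}: EUR {total:,}")
    # EE upgrade: EUR 957,300
    # SE migration: EUR 109,900
    # SE One + Replicate offload: EUR 46,880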

* All prices are based on list-prices, excluding VAT and including 1 year of support.
** Based on the Oracle Processor Core Factor Table.

Cloud Database Offers On-premise Advantages

These are times when there are technologies abundantly available to help you make the very best of the data you gather from your business processes.

Increasing numbers of businesses choose the option to host their production database environment in one of the many cloud forms that are available these days. This example of a smart alternative discusses an additional service you could implement or request when you are dealing with cloud based databases.

In many organizations there is a BI team responsible for developing company-specific KPIs or composing competitively strategic information based on the information that is gathered during day-to-day business. There are often key management positions that need ad hoc queries against live data. In recent years this intelligence has been recognized as being of the greatest importance for decision support, giving your organization the biggest competitive advantage possible.

Developing or even running these activities on live data gives the sharpest edge. Doing this on a production environment, nevertheless, is out of the question. Uninterrupted availability and maximum responsiveness for regular activities of these databases are unquestionably important. How can you combine these factors with the proposition of running your database in the cloud while staying smart?

The smart alternatives of Dbvisit enable you to do just this! By leveraging Dbvisit Replicate in a hosted environment you can create one or many local copies of live production data, with specific local database settings, to do precisely what you need, be it running or developing heavy BI queries or having departmental management look at or analyze data as it is recorded. Having (a subset of) the live data uni-directionally delivered from the cloud to your local (desktop) database creates a safe environment for analysis and enables knowledge workers to do their job with no holds barred!

Retail Innovation with Dbvisit Replicate

These days it is a “dog-eat-dog” world like it has never been before. We are fighting over every bit of margin and trying to create value without increasing cost. The following example from a retail background shows an innovative way you could accomplish this, leveraging the #1 database at the lowest thinkable investment, combined with the smart alternative from the makers of Dbvisit Software.

In this example we follow a supermarket in its quest to “do business a little differently”: it is thinking of combining ‘shopping audience attitude’ in an interactive way with a time-specific advertisement technique.
The idea is to look at what people are checking out at the cash register and combine this, in real time, with both the amount of items in stock and any specific business rules applicable to discounts to be given.

By gathering information at the counters about what kind of groceries people are actually buying at that specific moment, you get insight into the natural fluctuation of buyer behavior during the day. With this information you can do things like figure out how much of which articles you need in stock, or direct resupplies in the store during the day, getting the maximum revenue out of the employees responsible for making sure everything is plentifully ready for the taking.
But why not take this one step further, they thought. If we can combine this cash-register information with some kind of continuously changing system of discounting, we can create an element of interactivity. By looking at which articles are sold, combining this with remaining stock and taking into account whether there already is a regular discount on specific articles, you can make a system where, for instance every 15 minutes, another specific item is on ‘super special sale’. Delivering this “buy now” message to the actual customer can be done in several ways, either by loading this information into self-checkout bar-code scanners or, for instance, by label printers offering the specific discount label to be scanned at the checkout counter. After a ‘super special sale’ moment elapses, everything changes and a new article is the hot deal of the moment.

Where in a normal setting the POS entries are fed into the regular business database to be processed in a batch-like fashion, you would have no chance of getting this information recycled. This backbone infrastructure cannot be used for the very data-intensive activities we would need for our initiative to take shape. Having delays here would inevitably mean delays and errors at the checkout counters, with queues and unsatisfied customers as only the start of your concerns.

Regular data replication solutions would render any business case useless before somebody even had the heart to dream up such an idea. Today, by leveraging Oracle Database Standard Edition or even Standard Edition One, you have an environment which is capable of handling such information loads. Combine this hyper cost-effective installation with a Smart Alternative like Dbvisit Replicate, replicating data away from your core POS infrastructure and delivering it to a dedicated database for this initiative. Here it is combined with stock information, also delivered by Dbvisit Replicate, to create a system that is real-time, robust and does not interfere with regular business. Moreover, it creates a system which does support the business case, by requiring up to 80% less investment.
This example shows that many of the smart ideas created by the business have stranded on an impossible business case. Today, the Smart Alternatives of Dbvisit create the opportunity for you to rethink these ideas and really start realizing them.

Just because it’s possible now!