Category Archives: Standard Edition

Register redo-logs manually with Dbvisit Replicate


For those of you who haven’t been working with on-line data replication: in short, it is a way to copy data from a source database to a target database while both databases are active, and to do so in near real-time.
This means that when you enter data in your source database, you can query it from your target database almost immediately. This makes on-line data replication ideal for numerous tasks, like moving and / or upgrading your database while it is being used, with almost no downtime at all.

This tale is of an actual project that I conducted. I used Dbvisit Replicate as my tool of choice.

Dbvisit Replicate can use a so-called FETCHER process to act as the “long arm” of the MINE process. Mining extracts the information from the redo-log files but, in specific situations, this can be too much of an overhead for the source database server. By moving MINE to a proxy server, this overhead can be significantly reduced.

In some cases it can be useful to manually transfer redo-log files to the mining stage directory of Dbvisit.
I came across this requirement when catching up on a lot of redo from a RAC database. A RAC cluster creates one stream of redo per thread, two in this case. When the replication processes start, FETCHER transfers the first thread from the source server to the proxy before it starts on the second thread. This means mining pauses until the first redo-log file of the second thread has been delivered successfully, because the redo information from the second stream is needed to create consistent and chronologically ordered SQL statements for the target database. In effect, the SCNs of the redo information from the first stream have to line up with the SCNs of the redo information from the second stream.
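
To get a feel for how far apart the two redo threads are on the source, a quick look at v$log_history per thread can help. This is just a minimal sketch, run against the source database:

select thread#
     , min(sequence#)  as first_seq
     , max(sequence#)  as last_seq
     , max(first_time) as latest_redo
  from v$log_history
 group by thread#
 order by thread#
;

The difference between the two threads gives an indication of how much redo still has to be delivered before MINE can proceed.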

In this case, that meant having to wait a day or more before mining could start. This is why I decided to manually copy a number of redo-log files from the source server to the proxy server, where the MINE process runs.
After the copy, the files need to be registered in the dbvrep repository. Without this information, the MINE process has no knowledge of the files that are present or of what they contain.

The update is an easy insert statement, but it should be handled with care, as it needs to be quite precise and requires a bit of specific information about the redo-log files being added.
You can use the following insert statement to register the files:

insert into dbvrp.dbrsmine_redo_log_history
       (
       ddc_id
     , mine_process_name
     , sequence
     , thread
     , resetlogs_id
     , first_scn
     , next_scn
     , online_name
     , arch_name
     , read_count
     , from_fetcher
     , last_mine_start
     , last_mine_end
     , create_date
     , last_change_date
       )
values
       (
       1
     , 'MINE'
     , 128779        -- sequence number of the copied file
     , 2             -- assuming you are updating this thread
     , 804864915     -- the resetlogs id from the redo-log file
     , 199910296688  -- the first SCN from the redo-log file
     , 199911476897  -- the next SCN from the redo-log file
     , null
     , '/u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/thread_2_seq_128719.1485.804864915'
                     -- full path and name of the file
     , 0
     , 'Y'
     , null
     , null
     , sysdate
     , sysdate
       )
;

And you can get the information you need about the files here:

select lh.sequence#
     , di.resetlogs_id
     , lh.first_change#
     , lh.next_change#
  from v$log_history lh
 inner join v$database_incarnation di
 using (resetlogs_change#)
 where sequence# = 128779
;

After registering the first file for the second thread, you can watch the MINE process kick off in the Replicate console. It will halt again after this first file of the second stream has been processed in parallel with the first file of the first stream.


I kept adding files until the FETCHER process was able to take over; alternatively, you could keep doing this until your test case or PoC is over.
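
To keep track of which files the MINE process now knows about, you can query the same repository table that the insert statement above populates. A minimal sketch, using only the columns shown in that insert:

select thread
     , sequence
     , first_scn
     , next_scn
     , from_fetcher
     , arch_name
  from dbvrp.dbrsmine_redo_log_history
 where mine_process_name = 'MINE'
 order by thread, sequence
;

This makes it easy to spot gaps in the registered sequences per thread.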


OUGN15, The “boat conference” revisited

Jan at shipsport
Reflections on OUGN

Sometimes things in life can change quickly! It is only two years ago that I came to Oslo for the first time to join the Scandinavian Oracle crew on a boat trip to Kiel.
At that time I had never actually participated in this kind of experience and I wasn’t into presenting either. Together with my good friend Philippe Fierens I discovered a whole new world back then. You could have read about these experiences in a blog post, but that one was lost in the move to my own site, sorry!

This trip couldn’t have been more different! With three presentations accepted, the two days at sea will be a reunion with the friends I have made over the last years, as well as a way to contribute to one of the most tight-knit tech communities I know. And all of this in a scene that I remember vividly from being a newbie… which is a somewhat strange feeling, believe me.

After a quick and pleasant flight I touched down in Oslo, having flown from Amsterdam with a decent-sized crew of Dutch Oracle enthusiasts, including my good friends Patrick Barel and Alex Nuijten. While waiting at Oslo airport for Luís Marques, I caught up with Gurcan Orhan, which was a great surprise.
Later that day we found ourselves in the Oslo harbor for the speakers’ dinner. You can imagine the collective amount of Oracle knowledge packed into that one restaurant!

Enkitec’s Frits Hoogland on Ansible

After a somewhat restless night we arrived on Thursday morning at the ship, Color Fantasy, in the company of Heli Helskyaho, just in time for the keynotes. It was good to see that Mark Rittman and James Morle had made it on board too, especially as James was up to deliver version 2.0 of his vibrant keynote! Next we proceeded to bring our luggage to our cabins and grab a spot of lunch on the exhibition floor down in the belly of the ship. The setup of the exhibition was quite nice and gave a good opportunity to mix and mingle.
The afternoon was spent on sessions; I visited Frits Hoogland’s Ansible talk and prepared for my own session at 18:00. This was the last run of this APEX presentation, as I have retired it after OUGN15. The slides will be archived here.
After finalizing the preparation for the third edition of the Standard Edition Round Table (aka “slide polishing”) with the #orclSERT team, comprising Ann Sjökvist, Philippe and myself, it was time for the soirée and for dinner in the grand restaurant on board. It had been a good first day!

Dinner with the international crew on board the Color Fantasy.
Warm reception at Kiel port.

The second day of OUGN15 started with a multitude of sessions including the third edition of the Oracle Standard Edition Round Table, which was actually quite busy and interactive. We had some good discussions, and that at 09:00, so thank you, everybody.
Of course, as was declared a tradition, Björn Rost was present in the Kiel harbor. With the famous “Basil Smash Gin & Tonic” and sandwiches we were welcomed on German soil.
My afternoon comprised three sessions, starting with my own, called “Okay, and now my database server crashed…”, which was quite nicely received. Next up was Alex Nuijten on 12c new features for developers, topped off with Tim Gorman, who taught us to be CSI people when finding issues in the database.
After an enjoyable evening in the various bars and discotheques of the ship, we closed the official part of the Oracle User Group Norway Vårseminar 2015, thanking the board, and of course especially Øyvind Isene, for their hard work.

If you want to catch up further on the unconference communications surrounding this event, please do check out the Twitter hashtag #OUGN15. It also includes a great set of snapshots and pictures taken along the way…

Oslo, until the next time!

A new form of on-line data protection

In the last few years I have been active with data replication solutions in the Oracle realm, as you may know. This data replication field is one that has many angles, so there is something new to learn every day, and sometimes there even are really new possibilities!

Take heed…

The first and most familiar form of data replication is ‘physical data replication’, also known as a ‘standby database’.
In this form of replication, both source and target database are binary identical. Changes are propagated by copying the archived redo log files from the source database to the environment where the standby database lives. Most often this is another server, preferably in another building in another town, far enough away not to be struck by the same havoc.

There are basically three ways to accomplish this:

  1. Use Oracle Data Guard (in Enterprise Edition Oracle database)
  2. Use Dbvisit Standby (in all Oracle database Editions)
  3. Write your own scripting (not recommended in any case)

The second and more emerging form of data replication is ‘logical data replication’.
In this form of replication, there is no real relationship between the source and the target database, other than that the target database houses data coming from the source database. They can live on different systems, be of different database versions, run on different operating systems or even come from different vendors.
Data is harvested from the source database, converted, and copied over to the target database / system. On the target system this data is applied, in the native dialect of the target database.

There are a few ways to accomplish this, but basically every vendor uses the same technique. It is mostly a matter of pricing.

  1. Oracle Golden Gate (expensive, complex)
  2. Dell Shareplex (somewhat expensive)
  3. IBM InfoSphere (complex, expensive)
  4. Dbvisit Replicate (easy, affordable)

So, having discussed this, as this is not new, why this blogpost?

Well…

A standby database is more or less closed. You can open it occasionally to query some data, but that interrupts the apply process.
On-line data replication does what it says: you have an active target database to which data is continuously added. This way you can, for example, query the same data on two sources to spread load.

The case I mean to discuss is the following:

“I have 10 source databases and I want one target database (ah, presto, on-line data replication) and I want to back up 5 tables from each source to the target database (again, on-line data replication, but wait, backup?) so I can easily copy back specific data to the source (eeeuhm, yes…) whenever a user messes up the source tables (aï…) and I want the target to be updated each day at 23:00 (so… okay!)”

This calls for somewhat of a hybrid approach!

We cannot do regular on-line data replication, as that is aimed at being real-time.
And we cannot leverage a standby database, since everything needs to be centralized in one target database, not 10. On top of that, it would take some administration to open the standby database in read-only mode, take the copy, and close the database again.

Working with Dbvisit, we came up with “Pause Apply” and “Resume Apply”, which we combine to form “Delayed Apply”.
This delayed apply neatly answers the question posed.

  • By “delaying” the application of changes to the data, we can make sure the requested tables are only updated from 23:00 onwards;
  • We can combine the 50 tables (10 databases x 5 tables) in one single target database, since it is a logical approach to the matter;
  • We can easily restore or copy back corrupted data, since both the source and the target database remain continuously open (see the sketch below).
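
As a rough illustration of that last point: with both databases open, restoring a user error can be as simple as copying the affected rows back from the target over a database link. All names below (the orders table, the key value and the target_db link) are purely illustrative:

-- Hypothetical example: the rows for order 4711 were messed up on the source
-- after 23:00; copy the last replicated state back from the target database.
delete from orders
 where order_id = 4711;

insert into orders
select *
  from orders@target_db  -- database link pointing to the replication target
 where order_id = 4711;

commit;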

Using Dbvisit Replicate, this kind of protection for your “logical test cases”, which is what this company needed the solution for, is really affordable.
It can help in dynamically and quickly resetting specific data sets or test cases, while remaining much more flexible than creating scripts to reset a specific data set or test case! And, of course, there are many more ways to use this neat feature…

DOAG 2014, Nuremberg visited

Traveling to Nuremberg, anticipating three days of Oracle submersion. There are so many speakers heading over there that it cannot be anything but successful.
This will be the first conference I attend after being accredited as Oracle ACE Associate, which, for me, makes it again a little more special.
The first surprise, though, was just that. Arriving by bus at the boarding location, we found a Bombardier DASH8-Q400 waiting, which turned out to be a turbo-prop aircraft. Okay, I have jumped from a Cessna Caravan twin-engine turbo-prop before, but this was still a first. As I am writing these lines, we’re descending upon Nuremberg.

On the first day of the conference, which started with a beautiful but rainy morning stroll to the conference center, the action really started to kick in from about 12:00 with the first session of my good friend Peter Raganitsch, talking about the 10 worst practices in APEX. A refreshing way of looking at software development by focusing on how to do it wrong!
The day ended with one of the “most pleasantly unorganized sessions” of the conference, where Johannes Ahrends and Philippe Fierens joined me on stage for the Standard Edition Round Table, #DOAG14 edition.

The second day was full of sessions. I visited Joel Kallman’s “APEX fast=true”, discussing the knowledge needed to do serious application development in APEX, creating #DBADev. And, of course, the sharp presentation of my friend Franck Pachot about interpreting AWR reports!
At 17:00 it was time for my third event, the “Electronic Patient Records system based on Oracle APEX” talk, which had quite a good turnout.
The day ended with a super-cool meet-up with Mia Urman, Lonneke Dikmans AND Bryn Llewellyn… And later on we had a real nice depiction of #DBADev 2.0, involving Joel Kallman, Philippe Fierens, Illoon Ellen and myself.


The third and last day of the conference was spent executing #RepAttack. This session concluded three full days of hands-on hacking with cool software and getting a feel for some of the new stuff.


A few of the cool new meetings (which we’ve dubbed the e-people-to-real-people conversion, IRL) involved:

Thank you, DOAG, for a superb conference. I thoroughly enjoyed it. To all the Oracle aficionados, until next time!!

Post #OOW14

Back at San Francisco International, unfortunately with the knowledge that Alex Nuijten and I (probably amongst others) had turned out to be on standby for this flight. (During the writing of this post we found out we will be home in time, which is great news in any case.)

This time around, Oracle Open World has indeed been different. Usually I like to post a list of the people I had, until then, only met on-line; at each and every gathering I went to, there would be a bunch of new people.
I am going to skip this habit, because the post would just get too long 🙂 There is a big difference between attending a conference of the magnitude of Open World knowing people and being here as a solitary visitor, which I was in 2010.

As I wasn’t just a visitor at Open World, I actually had the chance to actively contribute to this technology fest: #RepAttack, an opportunity to share knowledge about logical data replication, together with a fine crew of Dbvisit replication specialists!

My colleague Vit Spinka spoke about the evolution of redo logs, which gives a great background to this technological solution. Other members of the Dbvisit crew spoke as well, like Arjen Visser and Mike Donovan.

The major highlight for me personally was being able to host #RepAttack at the Oak Table location. Together with the champions of Delphix and Confio, we occupied the Children’s Creativity Museum Community Lab, explaining and teaching about different technologies; our subject being logical data replication (of course).


Un-very-fortunately, I attended Oracle Open World on an exhibitor pass. This meant I got to see no sessions at all (okay, except the one or two I snuck into). All the more time to stroll around the exhibition floor, see the demo grounds and have lunch with Martin Nash in the sun (which was a very nice experience), especially since Martin persuaded me to go out and hand out the left-over lunch to some homeless people around the Moscone Center, thus heeding the call of Connor McDonald.

Okay, so before I forget… A big thank you to Björn Rost and Henning Voss of Portrix Systems for making the Appreciation Event a night worthy of remembrance! And to my friends at Dbvisit of course, for getting me to San Francisco in the first place!!

Time flies when you’re having none…

A saying all too true! Having spent too little time with too few of my Oracle friends in the great city of San Francisco, it was again time to fly home… as you saw at the beginning of this post.

Next stop… #DOAG14!!

Oracle Open World 2014

In flight to San Francisco on the 27th of September 2014. Heading out to Oracle Open World for the second time.
Much has changed since my previous visit.

The previous time I came to this biggest of IT events in the world, I came as a spectator, representing an IT company, where my mission was to soak up as much knowledge as I possibly could, submerging myself in the flow of the event.
This time ‘round, I come as a participant, representing another IT company that wants to add to the scene and deliver a smart alternative.
And personally, too, there is a huge difference! Previously I went alone and was thrilled to find Frits Hoogland at the gate, who was already a familiar face to me back then! Now I am travelling to meet up with many more friends… Listening to Metallica on the flight already reminds me that I will meet Gurcan Orhan over there! And in the previous weeks many promises were made for quick meet-ups and catch-ups on the grounds of what we call “Oracle Open World”!

Clock set to Pacific Summertime, good morning world!!
Things have come a long way since my previous trip! Where I was bound to the onboard entertainment system a few years back, now I can work, prepare and write this text in flight. Hoping to meet all of you guys out here.
And today, Oracle Open World came to a real kick-off when we went to the Golden Gate Bridge Run, organized by @thatleffsmith, where we ran or walked with a great number of Oracle celebrities, ranging from @oraclebase through @helifromfinland and Frits Hoogland to @dbvisit!
After this, @ilmarkerm, myself and two lovely ladies from Finland shared a cab to Moscone, where we met up with the RACAttack Ninjas at the OTN Lounge…


It is turning out to be a good day, with the building of the Dbvisit stand, sneaking into the sessions of the Dbvisit speakers and meeting many, many friends!

#RepAttack, it’s all about learning

Everything we do in our daily life is about learning. Especially in IT we are used to continuously learning. Digging through documentation, figuring out how this or that piece of software should work. Downloading, installing, configuring, trying, tweaking, tuning…
Dbvisit Standby

For Dbvisit, it all started with Dbvisit Standby: physical data replication. But physical data replication is not so hard in the end. Getting it to run stably, making it do exactly what you want it to do, is a manageable task. With its wizard-driven installer, the clear goal of having two exactly identical databases, and a little bit of time, you’ll have this process of shipping archived log files nailed. Stability and reliability are built in, so not much worry there.

Logical data replication, on the other hand, is a whole different ballgame!
For a long time logical data replication was just for bigger companies with intricate information needs. And it is a little more challenging than physical data replication. There are database, schema or table considerations, decisions about what to replicate where and what not, making sure you get it stable and reliable in your environment, checking and following up on changes, and doing all kinds of work to make sure you get the best out of your setup.

Nevertheless, logical data replication will help you in doing:

  • “Zero downtime database migrations”
  • “Report offloading”
  • “Schema consolidation”
  • “Real-time business intelligence” operations

And because these things are about you…

You deserve a “flying headstart”

with Dbvisit Replicate!

To be able to bring you this, we looked at the heroes from the Oracle Technology Network for inspiration. This special group of gurus called the RACAttack Ninjas has been involved in educating and supporting any and all with a setup of Oracle’s Real Application Clusters technology on their laptops.

Inspired by this example, Dbvisit created #RepAttack! A techno-opportunity that will be traveling the world, with its inaugural session nowhere less than at Oracle Open World 2014.

#RepAttack is a great opportunity to network with your peers who are just as curious as you are, and to access a fantastic team of warriors who will work with you one-on-one to ensure you are up and running quickly and leaping over any hurdles effortlessly. The session will include a deep dive into core concepts to make sure you return to your organization with an in-depth understanding of how both replication and virtualization really work. Take the time to attend and be that “go-to” person when questions around these concepts come up at work.

Keep an eye out as new details will emerge over the coming days and weeks!
Make sure you check out the Twitter hashtag #RepAttack or just submit your e-mail address below!

#RepAttack sessions by its warriors have been confirmed for:
Oracle Open World 2014 in San Francisco, USA
Deutsche Oracle-Anwendergruppe (DOAG) Jahreskonferenz 2014 in Nürnberg, Germany

And remember!
#RepAttack is about YOU!

Watch the following video of one of my personal heroes, Ronald Rood, playing with logical data replication in Dbvisit Replicate:

TCL, Total Cost of Loss, a new business perspective

‘Total Cost of Loss’ (TCL) was launched at the world premiere of the Standard Edition Round Table during the OUGF Harmony 2014 annual user conference.

Doing nothing does not mean it costs nothing

Joel J. Goodman, Finland 2014

“TCL.” Abbreviations.com. STANDS4 LLC, 2014. Web. 15 Jun 2014. <http://www.abbreviations.com/term/1519392>.

Total Cost of Loss represents the cost to an organization when data is lost. Experience shows that this is the hardest exercise in business continuity to figure out, and the most neglected threat to an organization.

Next to the two best-known terms, RTO and RPO, and the less well-known term RTDA (‘Recover Time to Data Availability’), TCL is aimed at providing the business with an extra ratio with which to conduct business continuity planning (BCP).

To correctly evaluate the investments needed to create a sufficient RTO time frame or RPO granularity, there has to be an understanding of the magnitude of the (financial) importance of the underlying (data) system. TCL is aimed at calculating this figure, which is valid per specific data system.

The following components have currently been identified as being part of TCL:

  1. Collection price per granule of data*
  2. Present value per granule of data
  3. Business value per granule of data
  4. Added value in a dataset combination

* a granule of data is the smallest possible set of variables comprising a usable piece of information.

1. Collection price per granule of data:
The amount of effort (time, computing power, etc.) which is required to assemble and record the granule of data in the data-structure.

For example: 1) the time it takes to pick up an item, scan its barcode with a barcode scanner and put the item back, or 2) the time it takes to enter somebody’s name and address at admittance, including possible preparation and filing.

2. Present value per granule of data:
The current amount of effort (if possible at all) required to reassemble and record the granule of data in the data structure. This component takes into account that historical data could have been easy to collect at the historic point in time (#1) but might take a disproportionate effort to collect at present.

For example: 1) establishing whether the item was in stock at the given moment, what its barcode would have read at that time and possibly who scanned it at what location, or 2) finding out which person came to be admitted on that specific date and retracing what data would have been entered at that specific moment, and possibly by whom.

3. Business value per granule of data:
The value of the single entity of data for the operational business after the moment of measurement. During its lifetime, the value of a specific granule of data can change. Most often it will become less valuable, making it possible to archive or even cumulate** the data in multi-tier storage solutions, but, when called upon, this specific granule of data could still be of vital importance!

For example: 1) knowing how many of a specific item are in stock, or 2) having identified a specific person within the client group.

4. Added value in a dataset combination:
It can very well be, and most probably is, the case that a granule of data is of key importance to a dataset combination, where several bits of data from different datasets or data systems combine to create information which is vital to a specific action within an organization.

For example: 1) knowing how many of a specific item are in stock to support a JIT-delivery system that keeps a production line running uninterruptedly, or 2) delivering the right treatment to a specific person and being able to bill them accordingly.

** Cumulation of data can destroy a recovery path for retrieving any specific granule of data.

Creating a formula to calculate any TCL will be relatively easy.
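
As a first, hedged sketch (my own notation, not a formal definition), the four components listed above could simply be summed over all granules g of a data system D:

TCL(D) = \sum_{g \in D} \big( C_{collect}(g) + C_{present}(g) + V_{business}(g) + V_{combination}(g) \big)

Here C_collect and C_present stand for components 1 and 2, and V_business and V_combination for components 3 and 4.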

Creating a model to extract, calculate or even guesstimate the values for the different variables of the formula will be the challenge.
A challenge that needs to be met, because of the ever-increasing volume of data and the ever-increasing importance of certain realms, like healthcare, public services and transportation, within this data mass.

Please step on board and help define TCL as it could prove to be a critical factor when push comes to shove!

DOAG 2014 Nuremberg Germany

I am very happy to say that I will be at DOAG 2014.

In the quest to emphasize Oracle technology as not just the #1 in technology but also the #1 in affordability and ease of use, there will be 3 events to which I shall contribute:

  • Electronic Patients Records system based on Oracle Application Express
  • Oracle Standard Edition Round Table*
  • Undisclosed surprise!!

*co-presented with my good friend Philippe Fierens ACEA

Please stay tuned to learn more as we progress towards November 2014.


Oracle in perspective

A brief overview of alternatives…

This document focuses on the perception of the Oracle database in relation to ‘small and medium businesses’, European style.
First we will take a quick look at Enterprise licensing and give a ballpark idea of prices and possibilities. Next I will put this in perspective with more detail and will highlight possibilities to get ‘high-end results’ with what is branded as ‘entry-level’ investments. Everywhere I say Oracle, I mean the Oracle database.

Oracle is investment intensive
Oracle Enterprise Edition licenses are price-listed at over € 35,000 per processor. These ‘processors’ are actually not real CPUs but units defined according to Oracle’s Core Factor Table.
An Oracle Enterprise Edition license allows you to a) install and use the Oracle Enterprise Edition software and b) buy additional tooling to complete the Enterprise software stack. In this setting there are Oracle Active Data Guard, Oracle Database Vault, Partitioning, etc. to consider.
With Oracle Enterprise Edition it is possible to create a high-performance, highly available and ‘disaster-resistant’ environment. It needs to be remarked, though, that this program set comes with an according price tag.

Oracle Standard Edition environment
A special exception in Oracle’s license policy is the Oracle Standard Edition database. This installation uses the exact same database software (binary compatible) as the Enterprise Edition, but comprises a significantly reduced set of the features and options that can be found in this global overview. The most important question is whether these features and options are really needed to realize a high-performance, highly available and ‘disaster-resistant’ environment.
Let’s first quickly zoom in on a practical example to indicate the investment perspective.
It is based on an HP ProLiant DL380 Gen8 E5-2690v2 server with 2 processors of 10 cores each.

— Oracle Enterprise Edition:
2 x 10 cores x 0.5 core factor = 10 licenses x € 37,492 = € 374,920 excluding maintenance.
— Oracle Standard Edition:
2 x 1 processor = 2 licenses x € 13,813 = € 27,626 excluding maintenance.
— Oracle Standard Edition One:
2 x 1 processor = 2 licenses x € 4,578 = € 9,156 excluding maintenance.

In this setting we can save up to € 365,764 by leveraging Standard Edition One. This is partly because the Standard Edition software is significantly cheaper, but mainly because the Standard Edition software is licensed per processor socket instead of per unit defined by the ‘Core Factor Table’!
The limitation is that Standard Edition has a limit of 4 sockets per server and Standard Edition One is limited to 2 sockets per server. This is an important fact!
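
Whether a given server fits within these socket limits can be checked from inside the database itself; v$osstat reports what the operating system sees. A quick sanity check, assuming you have access to the dynamic performance views (note that NUM_CPU_SOCKETS is not populated on every platform):

select stat_name
     , value
  from v$osstat
 where stat_name in ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS')
;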

Room for investment
In our example it is possible to decide in favor of Standard Edition One. What we can subsequently deduce is that we have a theoretical budget of about € 350,000 available to make sure we have a sufficiently high-performance, highly available and ‘disaster-resistant’ installation. Even if we were to consume all of this budget, which is not very likely, the return on this investment remains high, because the year-by-year support cost for this environment is ((10 x € 8,248.19) minus (2 x € 1,007.15)) € 80,467.60 per year lower.
Possible discounts have not been included in this calculation. Looking at the magnitude of the difference in investment, any discounts will hardly have a decisive influence. The year-by-year support cost will remain based on the original price of the software.

Virtualization
One of the most significant hurdles in leveraging Oracle software is virtualization, where the technical considerations are not the toughest to deal with; the license consequences are!
As we concluded, Oracle Standard Edition is applicable on a maximum of 4 processor sockets. In the case of virtualization, all processors of all hardware to which the Oracle database can migrate, either automatically or with live migration, have to be licensed.
With this rule it is nearly impossible to leverage Standard Edition licenses, and it will be nearly impossible to use virtualization in a ‘small to medium business’ setting… unless a smart alternative is chosen.

Alternatives
1. The abstraction layer
By leveraging virtualization software as an abstraction layer, a server installation can be separated from the physical hardware configuration on which it runs. By using this alternative it is possible to recover from hardware failure more efficiently.
2. 2 x 2 sockets
By using a limited virtualization cluster of 2 nodes with 2 sockets each, having the maximum possible number of processor cores, the complete advantage of virtualization can be combined with the maximum advantage of Standard Edition. Please note that this setup requires a Standard Edition license. Alternatively you could create a cluster of 2 x 1 socket to facilitate the usage of a Standard Edition One license.
3. ESL
In case software from a third party is used, this software development party can agree with Oracle on using an Embedded Software License. This form of licensing is quite specific and is therefore not discussed further here.
4. What will virtualization not solve
Virtualization is not a replacement for backup and it is no alternative for disaster-proofing an Oracle database. These specific tasks are resolved by using backup or standby database tooling.

Tooling
At the beginning of this article it was indicated that the Oracle Enterprise Edition license gives you the right to buy additional tooling to complete the Enterprise software installation.
Alternatives to this tooling are also available for Standard Edition installations. Please consider:

  • Dbvisit as an alternative for Oracle Data Guard or Oracle Golden Gate
  • OraSash as an alternative for Oracle Active Session History
  • Nagios or SPS GenSys as alternatives to Oracle Enterprise Manager

Conclusion
Based on the information above, we can conclude that there are good possibilities for leveraging the Oracle database in a ‘small and medium business’ environment. The information above is not a complete and ultimate description of all possibilities, but this quick overview gives enough to work with to zoom in on any specific challenge.