Category Archives: Business

Big Data: Hadoop and Oracle technologies explained


Under the title “Hadoop and Oracle technologies on BI projects”, Mark Rittman flew to the Netherlands on the 14th of July to visit the Oracle Usergroup Holland.

Though I had obviously heard a lot about Hadoop, I never really did anything further with it, leaving it at a synaptic link to Gwen Shapira. This lack of action created a kind of threshold in my understanding of the technology. When I heard about this session, I realized this would be the moment to take a step further. It turned out to be the first real talk that put “Big Data” in the perspective it needs to be consumable and realistic.

In these current times of “The Internet of Things”, ever more social media and ever further digitization, we are heading for a Big Data disruption. This is both a conceptual and a very real thing, if you take a moment to think about it. According to real-world experience it is also not something “which will once be”; it is actually here today!

On the technical side of things, data is captured in something that is called a “data reservoir” (or “data lake” or “data dump (yard)”). Compared with “regular” data storage, you can conclude that data governance, or a data structure, in a Big Data system is applied later. We are used to applying this structure, this governance, beforehand, through data definition. Using Hadoop in combination with NoSQL gives you “schema on read” capabilities, making querying of the Hadoop data reservoir possible.
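As a minimal sketch of what “schema on read” means in practice (table and file names here are purely illustrative), a Hive external table merely projects a structure over files that already sit in the reservoir:

-- the raw files already live in HDFS; this definition only interprets them
CREATE EXTERNAL TABLE web_clicks (
  click_time STRING,
  user_id    STRING,
  url        STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/reservoir/web_clicks';

-- the structure is applied at query time; dropping the table leaves the files untouched
SELECT url, COUNT(*) FROM web_clicks GROUP BY url;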

Adding this structure later is harder! This leads to the following:

  • Data is much easier to get into Hadoop than into a star schema
  • Data is much easier to get out of a star schema than out of Hadoop

This could be one of the essential things to consider when thinking about engaging in a Big Data project!

As Tanel Poder concluded: “High value, high density data will remain in the Oracle database” which I think is a very true conclusion. In the end, the high value conclusions (or the engineering of Big Data results) will also happen within the Oracle database.

On the horizon is “Oracle Big Data Discovery”, which will help with the time-consuming and tedious work of sorting and interpreting raw data in the data reservoir. The use of ‘R’ as the data exploration tool of choice is expected to be replaced by this discovery tooling over time…

To sum up the first half of the presentation, to my taste:

  • Hadoop changes business
  • NoSQL scales business
  • Oracle runs business

“It takes eons to list all names of the Buddha” nicely sums up the number of different applications that make up, and are needed to execute, a successful Big Data project.
Plus, “You’d better keep the 13 rules for relational databases close at hand“!


Part two of the evening was spent on mapping these concepts onto actual tools, disclosing data through Hadoop to Oracle SQL and making actual use of Big Data. The exercise was completed by demos and illustrated by screenshots from the slides (link below).
A special word of warning goes out to the security aspect of Big Data, which is something to really pay close attention to. Kerberos authentication and Apache Sentry are imperative things to implement in your Big Data environment.

All in all, this evening turned out to be 110% more informative and necessary than I expected when I embarked on the journey to Utrecht! Thank you for sharing, Mark!

Thanks to Piet de Visser for the nice quotes! And a great “hi there” to Klaas-jan Jongsma, René Kuipers and Marti Koppelmans.

If you want to work with Big Data on your small(er) device, please download the Big Data Lite VM from OTN.

The link to the slides for anyone who wants to review the “extended remix”!


My Oracle 2014, what a blast…

Twenty-fourteen… What a year!

As the year draws to a close, I just wanted to take a few minutes to look back at the past crazy 12 months… Crazy from a personal as well as a professional point of view!

In June things took off for the Oracle stuff with a visit to OUGF14, my second real talk after starting to speak at UKOUG Tech13. Plus the bonus: the first ever Round Table on Oracle Standard Edition, together with my friend Philippe Fierens and the support of Ann Sjökvist. Always imagined, never experienced: the way technology binds people. For all the events happening in Haltia, Finland, please read this post.
OUGF14 was also where I met Gurcan Orhan for the first time. My partner in (hard) rockin’ Oracle stuff!! Together with my international peers, we have quite a team and this makes me super proud.

I owe an apology to the Scots! I should have been in Linlithgow presenting. I was honored to be selected to travel to the beautiful city of Edinburgh in one of the most beautiful parts of the world, but there were too many things going on, so I had to cancel. I am so sorry!!

The next stop on the agenda was Oracle Open World. But before I could pack my bags, up and leave, there was A LOT of work to be done at VIR e-Care Solutions coordinating and rolling out a brand new Oracle infrastructure to all of their clients.
And not only that, there was a trip to Hamburg for Dbvisit with a presentation at the DOAG Data Replication SIG meeting, organized by Johannes Ahrends. Of course Björn Rost was also present, plus a number of the other representatives of the data replication scene in Germany.

Oracle Open World was not the high point of 2014, which was somewhat surprising, actually… I cannot really put my finger on it, because the days were packed with good stuff, unexpected encounters and many more goodies… But somehow, the second time around, and with a lot more OUG experience, it didn’t turn out to be event Numero Uno of 2014! I can safely say, looking back, that national Oracle User Group events are more interesting. You get to have more quality time with the people you otherwise only get to meet a few times a year.
Still, with all the content and with everything that happened… And especially the lunch with Martin Nash in the sun at the CCM, as well as the unparalleled drive by stretched limo to Treasure Island, hosted by Portrix (Henning and Björn) with Yuri Velikanov, Ilmar Kerm and others. I am not complaining!!

Coming back from America there was a huge surprise and honor for me.

Nominated by my peers, Oracle Corp. saw fit to reward my efforts for the Oracle Community with the Oracle ACE Associate recognition. I had never thought or expected this to be a possibility for me, so this was a complete surprise which started a chain of events, ending at the end of this post.

The final event for 2014 was the DOAG Jahreskonferenz in Nuremberg. I guess the biggest event in Europe and the biggest event for me in terms of contributions. I had my “Pre-APEX” talk, there was the second edition of the Standard Edition Round Table, co-hosted by Philippe and chaired by Johannes Ahrends!! What a shock 😉 There was the Data Replication forum and Dbvisit #RepAttack!
DOAG also, and again, brought a sheer endless list of new encounters and re-encounters with Oracle heroes. My good friend Peter Raganitsch was also there…

At the end of the year, you would think that you would be “home free”, right?

Well, 2014 had a last trick up its sleeve! The year ended with me saying goodbye at VIR e-Care Solutions BV. After 16 years we had to decide to break up. It’s like tearing off a band-aid: you do it quick and it hurts less, but still…

So, I am a free man!

With this last development, set in the light of everything I had the chance of doing this year, it has been a great deal to handle. My first visit to Oracle Open World, back in 2010, started a chain of events with some quite unforeseen twists and turns. It all looks and feels like it has been worth it, but it has indeed taken its toll.
Currently I am enjoying some well deserved but also much needed time off with my wife and family and I will start thinking about new ventures in a bit.
Please take a look here and here for some more information!!

Rest assured, there are some new ideas and they are EXCITING!!!
One of them being that  I will be speaking at #OUGN15, better known as the boat conference, with a brand new talk.
But hey, we’re looking back here and I wouldn’t want to spoil too much of the surprise.

Stay tuned…

A new form of on-line data protection

In the last few years I have been active with data replication solutions in the Oracle realm, as you may know. This data replication field is one that has many angles, so there is something new to learn every day, and sometimes there even are really new possibilities!

Take heed…

The first and most familiar form of data replication is ‘physical data replication’, also known as ‘Standby Database‘.
In this form of replication, both source and target database are binary identical. Changes are propagated by copying the archived redo logfiles from the source database to the environment where the standby database lives. Most often this is another server, preferably in another building in another town, far enough away to not be struck by the same havoc.

There are basically 3 ways to accomplish this (a minimal sketch of the apply side of option 1 follows the list):

  1. Use Oracle Data Guard (in Enterprise Edition Oracle database)
  2. Use Dbvisit Standby (in all Oracle database Editions)
  3. Write your own scripting (not recommended in any case)
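
As a minimal sketch of the apply side of option 1 (Data Guard style; your setup will of course differ), run on the standby database:

-- start redo apply in the background on the standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- and stop it again, for example before opening the standby read-only
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;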

The second and more emerging form of data replication is ‘Logical Data Replication’.
In this form of replication, there is no real relationship between the source and the target database, other than that the target database houses data coming from the source database. They can live on different systems, be different database versions, run on different operating systems or even come from different vendors.
Data is harvested from the source database, converted and copied over to the target database / system. On the target system this data is applied, in the native dialect of the target database.
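
Conceptually (a purely illustrative example, not tied to any specific product), a change mined from the redo stream on the source is replayed as native SQL on the target:

-- captured from the redo stream on the Oracle source
INSERT INTO orders (id, amount) VALUES (42, 99.95);
-- applied on the target in its own dialect, here for example SQL Server
INSERT INTO dbo.orders (id, amount) VALUES (42, 99.95);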

There are a few ways to accomplish this, but basically every vendor uses the same technique; it is more a matter of pricing.

  1. Oracle Golden Gate (expensive, complex)
  2. Dell Shareplex (somewhat expensive)
  3. IBM InfoSphere (complex, expensive)
  4. Dbvisit Replicate (easy, affordable)

So, having discussed this, as this is not new, why this blogpost?

Well…

A Standby database is more or less closed. You can open it occasionally to query some data, but that interrupts the apply-process.
On-line data replication does what it says: you have an active database, where data is continuously added. This way you can, for example, query the same data on two sources to spread load.

The case I mean to discuss is the following:

“I have 10 source databases and I want one target database (ah, presto, on-line data replication) and I want to backup 5 tables from each source to the target database (again, on-line data replication, but wait, backup?) so I can easily copy back specific data to the source (eeeuhm, yes…) whenever a user messes up the source tables (aï…) and I want the target to be updated each day at 23:00 (so… okay!)”

This smells somewhat like a hybrid approach!

We cannot do regular on-line data replication, for this is aimed at being real-time.
And we cannot leverage a standby database, since everything needs to be centralized in one database, not 10. Next to that, it would take some administration to open up the standby database in read-only mode, take the copy, and close the database again.

Working with Dbvisit, we came up with “Pause Apply” and “Resume Apply”, which we combine to form “Delayed Apply“.
This delayed apply would neatly answer the question posed.

  • By “delaying” the application of changes to the data, we could make sure the requested tables are only updated from 23:00 on;
  • We can combine the 50 tables (10 databases x 5 tables) in one single target database, since it is a logical approach to the matter;
  • We can easily restore or copy back corrupted data, since both the source and the target database remain continuously open.

Using Dbvisit Replicate, having this kind of protection for your “logical test-cases” (which is what this company needed the solution for) is really affordable.
It can help in dynamically and quickly resetting specific data-sets or test-cases while remaining much more flexible than creating scripts to reset a specific data-set or test-case! And, of course, there are many more ways to use this neat feature…

My wordpress site just disappeared

I was just thinking about re-checking something I wrote. That happens.

So I went to my blog-site… Just to find out it was gone…

GONE?

Yes, gone!

I got the message that this domain was reserved by my provider, TransIP. And that was not exactly what I was looking for.

The mystery was quickly resolved.
By checking out the ControlPanel at my TransIP account, I found I had received a message:

Dear customer,

Because of complaints of high load on our web hosting platform we have regrettably been forced to (temporarily) block your website jk-consult.nl. We have done this to ensure the stability of our servers.

WHAT!!

This high load is caused by the (automatic) posting of a high number of comments on your WordPress website. It is highly probable that this concerns unapproved comments.

In many cases this is a form of automated spam. This leads to a high load on our web hosting servers, which leads to performance problems for you and other clients on this same server.

Eeewww…

And this was followed by some comments to quiet down this spamming.

I have now installed these two plugins from WordPress:

https://wordpress.org/plugins/stop-spam-comments/
https://wordpress.org/plugins/spam-comments-cleaner/

And I just checked: instead of 8,000+ comments, there were now just 2!

Another problem solved.

Thank you Arjan van den Berg (I see these are not the first kudos you’ve received)

Printing directly with APEX

When looking for a print solution with APEX you will find .PDF

You will find a lot of .PDF

And .PDF is good. There is nothing wrong with .PDF. In fact, .PDF looks cool and you can do a lot of neat stuff with it. With toolkits like pl/pdf you can create .PDF’s directly from PL/SQL.

But sometimes there is the need to be able to print directly.
For instance with batch-processing, or with nightly print-runs, or whatever. And this is where you would find yourself locked out with .PDF and, glancing at Google, you would guess you’d be out of luck!
Since we had:

  • created a web based solution
  • the need to print directly
  • print in nightly-runs

plus we had:

  • about 400 reports (.rdf files) which we need to reuse (without having the opportunity to rebuild them in something like pl/pdf)
  • combine different output / distribution mechanisms

we needed to tackle this challenge!

So we did !!

It was fixed by using some old and new technology mixed together:

Oracle reports builder
and
Oracle Fusion Middleware, more specifically, Oracle Reports Server, aka WLS_Reports

By using this combination of products, you can create a printing solution which is capable of printing directly to your network printer, or of creating HTML or PDF reports.
Schedule them, e-mail them, and all this by URL-control!

http://<your-reports-server-node>:8888/reports/rwservlet?command=argument&command=argument&and-so-on

Use the following (much-used, but far from complete, list of) control commands (an example call follows the list):

  • report=<name of your .rdf>
  • userid=<userid/password@database>
  • desformat=HTML/PDF
  • destype=type of output of the report
  • desname=name of your output (device, file, whatever)

More commands in the link to the documentation on the bottom of this post!!

Notes:

  • You can post these parameters to the Reports Server without calling them in the original URL!
  • You can set the “LOCAL” environment variable on your Reports Server to omit ‘@database’ in ‘userid’ for your default database
  • Actually you can set all environment variables, like TNS_ADMIN, NLS_LANG, REPORTS_PATH, etc.

What we found is we needed to run Oracle Reports Server on Windows, just to take advantage of the Windows Printing System which is quite stable and easy to configure. (So, yes, okay, there you have it, a good thing about Windoze!)

Basically you can create a simple solution, but you can easily expand it quite a bit, making a printing and reporting solution worthy of an enterprise environment, with distribution of reports via e-mail, creation of reports on file systems, embedding of reports in websites, and basically anything you want or would need.

And, you get a nice Management Console for free with this installation!

Oracle Enterprise Manager Console

From this management console you can administer your print jobs and set all kinds of parameters, which is quite neat!!

But, wait… the catch… It’s gonna cost you!

Or, can you keep it under control?

But of course!

Printing is mostly a half-on-line thing, and for a lot of stuff, it’s not extremely performance / time critical… So what can we do?

Oracle Reports Server is licensed as “Oracle Forms & Reports Server” and it will set you back € 370 per Named User or € 18,200 per CPU (being Oracle CPUs according to the Core Factor Table!)
It’s still a whole lot of money, but would you really need more than 2 cores? If you give the machine enough memory and fast disks? Probably not.
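As a back-of-the-envelope example (assuming Intel cores with a 0.5 core factor): 2 cores x 0.5 = 1 processor license, or roughly € 18,200, plus the yearly support fee on top.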

Is it worth considering taking another node in your environment? Perhaps. This print solution could be a viable reason to do so. It brings you quite a bit of functionality straight from the box. But, as always, do your math and make educated choices.

The documentation link promised:
https://docs.oracle.com/cd/E16764_01/bi.1111/b32121/toc.htm

If you would like more info, please just drop me a line!

TCL, Total Cost of Loss, a new business perspective

‘Total Cost of Loss’ (TCL) was launched at the world premiere of the Standard Edition Round Table during the OUGF Harmony 2014 annual user conference.

Doing nothing does not mean it costs nothing

Joel J. Goodman, Finland 2014

“TCL.” Abbreviations.com. STANDS4 LLC, 2014. Web. 15 Jun 2014. <http://www.abbreviations.com/term/1519392>.

Total Cost of Loss represents the cost for an organization when data is lost. Experience teaches that this is the hardest exercise in business continuity to figure out and the most neglected threat to an organization.

Next to the two best-known terms, RTO and RPO, and the less well-known term RTDA (‘Recover Time to Data Availability’), TCL is aimed at providing the business with an extra ratio for conducting business continuity planning (BCP).

To correctly evaluate the investments that have to be made to create a sufficient RTO time frame or RPO granularity, there has to be an understanding of the magnitude of the (financial) importance of the underlying (data) system. TCL is aimed at calculating this figure, which is valid per specific data system.

The following components have currently been identified as being part of TCL:

  1. Collection price per granule of data*
  2. Present value per granule of data
  3. Business value per granule of data
  4. Added value in a dataset combination

* a granule of data is the smallest possible set of variables comprising a usable piece of information.

1. Collection price per granule of data:
The amount of effort (time, computing power, etc.) which is required to assemble and record the granule of data in the data-structure.

For example: 1) the time it takes to pick up an item, scan its barcode with a barcode scanner and put the item back, or 2) the time it takes to enter somebody’s name and address at admittance, inclusive of possible preparation and filing.

2. Present value per granule of data:
The current amount of effort (if possible at all) which is required to reassemble and record the granule of data in the data structure. This entity takes into account that historical data could have been easy to collect at the historic point in time (#1) but would take an unequal effort to collect at present.

For example: 1) establishing whether the item was in stock at the given moment, what its barcode would have read at that time and possibly who scanned it at what location, or 2) finding out which person came to be admitted at that specific date and retracing what data would have been entered at that specific moment and possibly by whom.

3. Business value per granule of data:
The value of the single entity of data for the operational business after the moment of measurement. During its lifetime, the value of a specific granule of data can change. Most often it will become less valuable, making it possible to archive or even cumulate** the data in multi-tier storage solutions, but, when called upon, it could be that this specific granule of data is of vital importance!

For example: 1) knowing how many of a specific item are in stock, or 2) having identified a specific person within the client group.

4. Added value in a dataset combination:
It can very well be, and most probably is, that any granule of data is of key importance to a dataset combination, where several bits of data from different datasets or data systems combine to create information which is vital to a specific action within an organization.

For example: 1) knowing how many of a specific item is in stock to support a JIT-delivery system to keep a production line uninterruptedly going, or 2) delivering the right treatment to any specific person and being able to bill them accordingly.

** Cumulation of data can destroy a recovery path for retrieving any specific granule of data.

Creating a formula to calculate any TCL will be relatively easy.
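
As an illustration only (no formula has been formally defined yet), it could take the shape of a sum over all granules g in a data system:

TCL(system) = Σ over g of [ C(g) + P(g) + B(g) + A(g) ]

with C, P, B and A standing for the collection price, present value, business value and added dataset value per granule, as identified above.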

Creating a model to extract or calculate or even guesstimate the values for the different variables of the formula will be the challenge.
A challenge that needs to be met because of the ever increasing volume of data and the ever increasing importance of certain realms, like healthcare, public services, transportation, etc., within this data mass.

Please step on board and help define TCL as it could prove to be a critical factor when push comes to shove!

My introduction to Synology DiskStation DS214

Data growth visualization

In these times no household can be without ‘Home IT’, complete with a ‘home data center’. The figures for business data growth are at least equally applicable to the home situation!

As in probably many situations, we started out with just some loose PCs, which could be hooked up to The Internet, and after some time could also be hooked up to one another. Gradually a fileserver was introduced, because files were always on some other computer than the one where you needed them. This made for a nice status quo, until the introduction of tablet computers and what not…

Now it was no longer so easy to access files or do whatever you wanted or needed, and space was becoming scarcer by the day. Plus, our trusted old fileserver was starting to decay. I needed to intervene. With a ‘data center’ consisting of just a router, a WiFi access point, a central printer, some old and some newer workstations and tablets, and this trusted old fileserver, an exchange wouldn’t be a daunting task…

A few years back I came across a company called Synology. I met one of their product gurus and somehow registered the product name. Lately some of my colleagues started working with Synology in their home IT, so I decided this would be the way to go!

As I was running a fileserver with just 100 gigs of storage, it was not hard to find something with a bit more capacity. And the decision finally wasn’t too hard indeed.

It had to be a Synology DS (DiskStation) 214play, and I decided upon 2 hard disks of 4 TB (Seagate Barracuda ST4000DM000) in a S.M.A.R.T. mirror setup.

Okay, so decision made, product purchased, product setup (which was just made too easy by all the discovery tooling). Disks setup in RAID 1 and on with the show!

And now the discovery fun begins…

It appears Synology has made 9 apps (for iPad / iPhone) that do all kinds of things!

  • DS Cloud, for your personal cloud solution
  • DS Mobile, monitor and manage download tasks on your DS
  • DS Finder, find Disk Stations and manage them (even remotely)
  • DS Photo, manage and view your photo collection
  • DS Cam, turn your DS into a video surveillance server
  • DS Download, to download anything you want (but mostly movies of course)
  • DS File, find files and anything else
  • DS Video, stream video anywhere
  • DS Audio, stream audio anywhere

So, okay, this is a bonus already! Just place files in the correct folder and you’re in business!

Currently I am migrating data towards the DS having first enabled some of the many apps inside the station, like:

  • Anti Virus
  • Java
  • iTunes Server
  • Mail Server (need to look into this; with many accounts at as many providers, mail is a menace!)

Okay, well, these are just a few, but I fear there are MANY more apps out there!

Actually, from opening the box until being in business it took me 2 hours. Having a first run of customization down took me another 2 hours.

I guess doing the rest of the stuff

  • Hooking onto CrashPlan and possibly MindTime
  • Getting VPN set up
  • Being able to access the DS over The Internet
  • Figuring out automatic movie download
  • Getting iTunes Server hooked up and running

will take me some hours still, but hey, this is more or less classifiable as hobby or at least additional functionality!

Some other things I heard of / saw a DS being used for:

  • Content Management server
  • Database server
  • Directory server
  • DNS server
  • ERP system
  • Forum host
  • GIT server
  • Online shop
  • Python
  • Website host
  • Wiki host
  • WordPress host

So, in the end, all is good now, we got plenty of space, a lot of new functionality. I would say, ‘Yay Synology’!

Oracle in perspective

A brief overview of alternatives…

This document focuses on the perception of the Oracle database in relation to ‘Small and Medium Businesses’, European style.
First we will take a quick look at Enterprise licensing and give a ballpark idea of prices and possibilities. Next I will put this in perspective with more detail and will highlight possibilities to get ‘high end results’ with what is branded as ‘entry level’ investments. Everywhere I say Oracle, I mean the Oracle database.

Oracle is investment intensive
Oracle Enterprise Edition licenses are price-listed at over € 35,000 per processor. These ‘processors’ are actually not real CPUs but units which are defined according to Oracle’s Core Factor Table.
An Oracle Enterprise Edition license allows you to a) install and use the Oracle Enterprise Edition software and b) buy additional tooling to complete the Enterprise software stack. In this setting there is Oracle Active Data Guard, Oracle Database Vault, Partitioning, etc. to consider.
With Oracle Enterprise Edition it is possible to create a high performance, high availability and ‘disaster resistant’ environment, where it needs to be remarked that this program set comes with a matching price tag.

Oracle Standard Edition environment
A special exception in the Oracle license policies is the Oracle Standard Edition database. This installation uses the exact same database software (binary compatible) as the Enterprise Edition but comes with a significantly reduced set of features and options, as can be seen in this global overview. The most important question is whether these features and options are really needed to realize a high performance, high availability and ‘disaster resistant’ environment.
Let’s first quickly zoom into a practical example to indicate an investment perspective.
It is based on an HP ProLiant DL380 Gen8 E5-2690v2 server with 2 processors of 10 cores each.

— Oracle Enterprise Edition:
2 x 10 cores x 0.5 core factor = 10 licenses x € 37,492 = € 374,920 excluding maintenance.
— Oracle Standard Edition:
2 x 1 processor = 2 licenses x € 13,813 = € 27,626 excluding maintenance.
— Oracle Standard Edition One:
2 x 1 processor = 2 licenses x € 4,578 = € 9,156 excluding maintenance.

In this setting we can save up to € 365,764 by leveraging Standard Edition One. This is partly because the Standard Edition software is significantly cheaper, but mainly because the Standard Edition software is licensed per processor socket instead of by the units defined by the ‘Core Factor Table’!
The limitation is that Standard Edition has a limit of 4 sockets per server and Standard Edition One is limited to 2 sockets per server. This is an important fact!

Room for investment
In our example it is possible to decide in favor of Standard Edition One. What we can subsequently deduce is that we have a theoretical budget of about € 350,000 available to make sure we have a sufficiently high performance, high availability and ‘disaster resistant’ installation. Even if we were to consume all of this budget, which is not very likely, the return on this investment remains high, because the year-by-year support cost for this environment is ((10 x € 8,248.19) -/- (2 x € 1,007.15)) € 80,467.60 per year cheaper.
In this calculation possible discounts have not been included. Looking at the volume of the investment differences, any discounts will hardly have a decisive influence. The year-by-year support cost will remain based on the original price of the software.

Virtualization
One of the most significant hurdles with leveraging the Oracle software is virtualization, where technical considerations are not the toughest to deal with; the license consequences are!
As we concluded, Oracle Standard Edition is applicable on a maximum of 4 processor sockets. In the case of virtualization, the rule is that all processors of all hardware to which the Oracle database can migrate, either automatically or with live migration, must be licensed.
With this rule it is nearly impossible to leverage Standard Edition licenses, and it will be nearly impossible to use virtualization in a ‘small to medium business’ setting… Unless a smart alternative is chosen.

Alternatives
1. The abstraction layer
By leveraging virtualization software as an abstraction layer, a server installation can be separated from the physical hardware configuration on which it runs. By using this alternative it is possible to recover from hardware failure more efficiently.
2. 2 x 2 sockets
By using a limited virtualization cluster of 2 nodes with 2 sockets each, having the maximum possible number of processor cores, the complete advantage of virtualization can be obtained while using the maximum advantage of Standard Edition. Please note that in this setup we would need a Standard Edition license. Alternatively you could create a cluster of 2 x 1 socket to facilitate the usage of a Standard Edition One license.
3. ESL
In case software from a third party is used, this software development party can agree on using an Embedded Software License (ESL) from Oracle. This form of licensing is quite specific and is therefore not discussed further here.
4. What will virtualization not solve
Virtualization is no replacement for backup and it is no alternative for disaster-proofing an Oracle database. These specific tasks are resolved by using backup or standby database tooling.

Tooling
In the beginning of this article it was indicated that the Oracle Enterprise Edition software gives you the right to buy additional tooling to complete the Enterprise software installation.
Alternatives for this tooling are also available for Standard Edition installations. Please consider:

  • Dbvisit as an alternative for Oracle Data Guard or Oracle Golden Gate
  • OraSash as an alternative for Oracle Active Session History
  • Nagios or SPS GenSys as alternatives to Oracle Enterprise Manager

Conclusion
Based on the information above we can conclude there are good possibilities to leverage the Oracle database in a ‘Small and Medium Business’ environment. The information above is not a complete and ultimate description of all possibilities, but this quick overview gives enough to work with to zoom in on any specific challenge.

Oracle XML DB content easily moved

We have this application where we just store some specific content in an XDB-schema.

After a quick move of the Oracle database from a legacy system to a new environment, we found that the XDB-schema and its contents were not moved. Okay, this is what happens when you use “good-ole” imp/exp instead of some “newer” technologies like RMAN or expdp.

What now? We could start the entire move again (but that would mean downtime for recreating the database, amongst others) or we could do a specific move for the XDB-schema (but meanwhile new content was already being added to the system). Actually, none of these are the nicest scenarios, and they seemingly add too much complexity. Not what we want…

What about a smart alternative here too? We could simply use ftp after all.

Through the EPG (Embedded PL/SQL Gateway) functionality of the database, we can just enable ftp through the Oracle database listener. With this functionality we can access the application database on the legacy system through ftp and easily copy the content to a local directory, especially since there are just a few hundred megabytes of data.

Enable this access by:
execute dbms_xdb.setFtpPort(2100);
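
You can verify the setting with a quick query (this assumes the XML DB Protocol Server is up and registered with the listener):

select dbms_xdb.getftpport from dual;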

With a tool like Filezilla, the contents were copied to a local directory.

After the action is complete, ftp-access to the database is closed again; you can never be too careful!
execute dbms_xdb.setFtpPort(0);

Loading this content in the new location is a repetition of these actions. Enable the EPG ftp-port on the new database, use an ftp-tool to upload the data and don’t forget to disable the EPG ftp-port afterwards.

One tricky thing is that you should mind the data ownership. This is most easily done by connecting to the ftp-account with the same user that owned the content in the source database!

When you run into errors, there could be something wrong with your XDB installation. Please look at this post for some more on that!!