Category Archives: General

#DOAG2016, definitely a crazy week.


#DOAG2016, the largest Oracle Community gathering in Europe. Taking place in Nuremberg, at the Nuremberg Convention Center NCC, one of the more impressive places to hold such a conference, towering 4 stories high, with a big central atrium!!
It is a huge effort to get all of this together!

In this blog-post I want to highlight some of the crazy things I experienced this week… And… I did try to follow my own schedule, but I wasn’t overly successful.

Young talent

One of the things that became quite clear this week is that we have a lot of young talent out there, eager to learn and share experiences. It is not just the #NextGen “movement” of DOAG, of which Carolin Hagemann made me aware, but simply young people at the conference itself.

Discussing “Young PL/SQL” at the unconference session made us all aware that our part of the IT trade is not very sexy or popular with the youngsters, all despite what was mentioned above. In universities we teach SQL, but we don’t teach how to create real-life business applications leveraging the power of the one language that keeps SQL close to the data it feasts on: PL/SQL. But more on that below (Thick Database Paradigm).
To promote PL/SQL, basically two ground requirements were defined:

  1. Create a free ‘PDB as a Service’ for schools;
  2. Inspire teachers to talk about data-centric computing.

By finding somebody to own this quest regionally or globally, it should be possible to get young professionals as familiar with using PL/SQL for creating performant and business-ready applications as they were with using Microsoft Excel to do their accounting “back in the days”.

ACE program

“There is a disturbance in the force!”

For everybody not acquainted with the Oracle ACE Program by the Oracle Technology Network… You should be!! Please read up on it, as it is an incredibly cool initiative.

The disturbance, you ask?
Well, to retain your “status”, Oracle expects you to do “stuff”, and this “stuff” is then evaluated on a yearly basis. Basically the initiative, the disturbance, is to get some transparency into “the stuff”. And, as always, everybody wants change, but few actually are good at “change”. There are ripples and things that change, but in the end everything will be fine, unless, obviously, it will not be fine.

Talks

I was honored to (co)host two talks at #DOAG2016:

Bad Boys of Replication – Changing everything…
With Oracle ACE Director and good friend Björn Rost, about an intense migration project we did some time ago. We even got to host our talk in Tokio, the biggest hall at DOAG!

Saving lives at sea at an industrial scale using Oracle Cloud Technology
An insightful (at least I like to think so) talk with my colleague Oliver Limberg about the rapid development of a global portal for the maritime logistics industry.

I had a blast, and I hope you did too!

Community spirit

Oracle User Group conferences are about sharing and are about fun. Mr. Martin Widlake wrote a good post about that.

Apart from all the “more formal” things that happened, there were quite a few extracurricular activities, mostly involving an Irish Pub or a restaurant.

This all may sound quite fun and exciting, and, yes, it is; it also invites talk from your co-workers: “Oh, hey, you are going to have fun and party all week!” Of course it is not a drag and a bore, but it has a very profound function!
Whenever you run into trouble, these are the exact same people that are not only able, but probably also inclined to help you out, as you would help them out, as friends do among each other. In the end, they, you, your boss and your clients benefit. This is not to be underestimated.

The extra, special bit that DOAG offers are the so-called “unconference sessions”.
Not scheduled, no slides, nothing official, just getting together and discussing subjects of interest. Our “Young PL/SQL” was one of these “unconference sessions”, and it turned out to be a great (and valuable) success!

Meeting people

Just to name a few heroes, of long standing and of times yet to come, at #DOAG2016:

Dietmar Neugebauer
Frank Dernoncourt
Joel Kallman
Johannes Ahrends
Kamil Stawiarski
Laurent Leturgez
Maja Veselica
Marcel Hofstetter
Piet de Visser
Sabine Heimsath
Stefan Kinnen
Stew Ashton
Uwe Hesse
Zoran Pavlovic
And all the ones I forget to mention here!!

Thick Database Paradigm

Nothing new in IT…

Well, no.

The Thick Database Paradigm (as opposed to the “No PL/SQL Paradigm”) is nothing new. We have actually all been doing this since the eighties. Program your business rules, your constraints, everything that makes sure that your data is all that you want it to be, close to that data.
There are so many reasons that speak in favor of this approach that it is nearly overwhelming and deserves at least a book in itself. But let me make a small attempt at highlighting a few here:

  • Spare yourself network bandwidth, by not sending data all over your network to be processed;
  • Safeguard your data inside the (Oracle) database, so it can be protected by all that has been invented to do so;
  • Transact data where it lives, and combine and aggregate it there; you will be amazed by the efficiency (see the sketch after this list);
  • Remind yourself why you used to think “business logic in the middle tier” was a good idea.
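
To make the first and third points concrete, here is a minimal sketch, assuming a hypothetical ORDERS table: one set-based statement, executed where the data lives, replaces fetching thousands of rows over the network just to aggregate them in a middle tier.

-- Aggregate inside the database; only the small result set
-- travels over the network instead of the raw rows.
select customer_id,
       sum(amount) as total_amount
from   orders
where  order_date >= add_months(sysdate, -12)
group  by customer_id;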

If you leave possible religious beliefs aside, there is no other conclusion possible than that the reinvention of “Thick Database” is the (re)discovery of 2016, right from the time when IT still made sense.

Yes, there are cases where an “Enterprise Service Bus” makes sense, but, as with every technology within IT, it has a very specific area where it actually adds value or even makes sense. At best, that area is a lot smaller than all the places where it is currently used!
Not to get carried away in this joyful blog-post, I will leave this topic at that.

The end

I hope to see you at the next Oracle User Group conference, somewhere… Please watch for the asterisks on this page for the conferences that I will attend.


OTN Appreciation Day: PL/SQL

We are posting these blog posts today as part of the OTN Appreciation Day, a celebration for the Oracle Technology Network as suggested by Tim Hall, inspired by Debra Lilley.

The mission was not too hard: write about your favorite bit of Oracle Technology.

As a developer, a core-tech DBA and an APEX enthusiast… the choice was easy! PL/SQL (apart from how it is pronounced or even written ;-)!

Why? Easy!

Not just because PL/SQL is easy to learn, for a language that is basically so extremely powerful! But foremost because it is an easy choice.
If you have a database like the Oracle database and you have your valuable data living inside that database, you want to maximize the potential value of that data. Use and manipulate it quickly, keep it safe, make it available in a sensible way. For that you use PL/SQL.
You create a safe haven for the data, making sure it can only be manipulated in a safe and pre-defined way, by exposing data retrieval and manipulation through PL/SQL packages and business logic. You create an API, controlling the access to one of your most valuable assets.
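
As a minimal sketch of what such an API can look like (the table, package and grantee names are all hypothetical): the application account never gets direct access to the table, only to the package.

create or replace package customer_api as
  -- The only supported way to change a customer's credit limit.
  procedure set_credit_limit(p_customer_id in number,
                             p_limit       in number);
end customer_api;
/

create or replace package body customer_api as
  procedure set_credit_limit(p_customer_id in number,
                             p_limit       in number) is
  begin
    -- The business rule lives right next to the data.
    if p_limit < 0 then
      raise_application_error(-20001, 'Credit limit must not be negative');
    end if;
    update customers
    set    credit_limit = p_limit
    where  customer_id  = p_customer_id;
  end set_credit_limit;
end customer_api;
/

-- The application account gets execute rights only, never the table itself.
grant execute on customer_api to app_user;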

One of the best showcases of PL/SQL’s power is Oracle Application Express! Based entirely on PL/SQL, it shows how powerful it is to incorporate modern web technology on top of data-centric, data-driven application development.
With this technology, I see very little need for an utterly complex multi-layered (mind you, I explicitly did not say multi-tiered!) software stack approach to create agile and modern applications. Add Edition Based Redefinition, and you have the perfect CI/CD* Agile Rapid Web Application Development environment everybody dreams of! Please, for once accept this simple truth 😉

So, my choice for this celebration: Please embrace PL/SQL and make your world a better place.

Long live the Oracle Technology Network!! Hurray!!

* Continuous Integration / Continuous Delivery

#OOW16, San Francisco, looking back

In this post I just wanted to highlight a few things that have lingered with me since the 2016 Oracle Open World Experience.

Persistent DRAM
Now, here, being at home, I must admit that I cannot find very much documentation about this, but it got me thinking… A little paradigm shift, where computers would no longer need moving parts (i.e. disks of any kind). Create devices that use these memory structures, quite possibly combined with flash disks, to run entirely on RAM. The 3D XPoint technology could be a nice example of this. I think I would applaud such machines.
I know, not a real expert point I am making here, but if anyone has a better angle, I would love to read your comments.

Thick Database
This is a much better documented topic, much more tangible too.
Toon Koppelaars started this “new” approach with his talk at OOW16. You can review his presentation here and see the videos of the presentation here and here.
There are some really good points there. The creation of an application is a craft. You need to get the right materials and do a number of steps to get a solid foundation, meaning you have to create a solid data model; yes, even in the world of Big Data, schema on write, etc., most applications still rely on a data model and all that we were taught to go with it. Not much sense in repeating what’s in the presentation here though.
An eye opener and something to (re)consider!! I plan to talk about this a bit more later.

EBR & Oren
One of the best sessions I visited during OOW16 was the presentation by Oren Nakdimon accompanied by the illustrious Bryn Llewellyn.
The presentation discussed a true implementation of CI/CD using some of the capabilities of the Thick Database Paradigm as discussed above, combined with the possibilities that Edition Based Redefinition brings.
Using these technologies, Oren has been able to implement a rolling upgrade scenario for Moovit. I find this impressive.
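
To give an impression of the moving parts (a minimal sketch; the edition and procedure names are made up, and this leaves out the crossedition triggers you would need for table changes):

-- Create a new edition as a child of the current one.
create edition release_2 as child of ora$base;

-- Compile the changed code in the new edition only.
alter session set edition = release_2;
create or replace procedure calc_price as
begin
  null;  -- new implementation goes here
end;
/

-- Existing sessions keep running the old code in ora$base;
-- switch new sessions over once everything is in place.
alter database default edition = release_2;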

Philippe Fierens & SPARC
I had the honor and pleasure of working closely together with my good friend Philippe Fierens during this edition of OOW. It always adds a dimension if you are able to tackle some of the challenges of the week as a team! Thank you Philippe.
Through Philippe I am also connected to the continuing efforts to build and maintain the Oracle SPARC architecture, of which he is a strong advocate. Be sure to follow his blog to learn about the latest developments in this area.

Panel discussions on the last day
Saving the best for last… Literally!
On the last day there were some panel discussions regarding SQL / PL/SQL and application architecture. I found these discussions to be quite meaningful and the interaction with the attendees was grand. Having people like Chris Saxon, Connor McDonald, Toon Koppelaars and Cary Millsap on a panel, there is no way you can go wrong!

OTN & a bow
Finally, looking back at this OOW, it was actually the first one I visited as a member of the OTN Oracle ACE community.
Boy, does that make a difference in how you experience Oracle Open World!!
Of course, if you are a “regular” visitor to OOW, you can chill and relax at the OTN Lounge, learn a lot of different things, and spot Oracle heroes as they wander by. And, by all means, I recommend you do, as it is extremely valuable.
But the difference this time was that I really belonged there.
A very big thank you to Jennifer for the hard work you put into making all of this possible!
And, please, support Girls Who Code, the initiative OTN sponsored this year, by tweeting a selfie with the hashtag #girlswhocode and the appropriate sticker!!

#doag2016 my picks and suggestions

As many conferences evolve over the years, the number of sessions on offer can easily be overwhelming. I have overheard many conference attendees wrestling with their choices for what to see and which sessions to attend.

For DOAG 2016 I have a short overview with my picks and with one or two tip-sessions. I hope this helps, though it is just my personal preference of course…

Please note that this post is based on the printed version of the conference planner and this may obviously be subject to change. Find the on-line version of the conference planner here!

Tuesday, November 15th
08:30
Goto-session
Connecting Oracle & Hadoop by Tanel Poder
Tip
Structuring an APEX Application by Alex Nuijten
Meet your match: Advanced Row Pattern Matching by Stew Ashton

11:00
Goto-session
How to identify the Right Workload for Database In-Memory by Andy Rivenes
Tip
Die Schlechten ins Kröpfchen – SQL analyse für DBAs by Martin Klier

12:00
Goto-session
Was die IT von der Luftfahrt lernen kann by Uwe Küchler
Tip
Using image copies for Oracle Database Backup by Ilmar Kerm
Using SQL Transaction Framework to rewrite Bad SQL on the fly by Kerry Osborne

13:00
Goto-session
Plötzlich Multitennant – was ändert sich für den DBA by Uwe Hesse
Tip
Oracle VM auf Exadata – Erfahrungen aus der Praxis by Christian Pfundtner
Einsatz von Maps in APEX by Denis Kubicek

14:00
Goto-session (TOP-tip)
–> Session got cancelled, but will be at UKOUG!
Patch your application with No Downtime (& No extra Costs!) by Oren Nakdimon
Tip
Hacking Oracle’s memory – About Internals & Troubleshooting by Stefan Koehler

15:00
Goto-session
XML in der Oracle DB by Wolfgang Nast
Tip
PL/SQL Performance – Best practices für Laufzeitoptimierung by Jan Gorkow

16:00
Goto-session
The Oracle Optimizer – Upgrading Without Pain by Nigel Bayliss
Tip
Erfahrung nach einem Jahr Fusion Middleware 12c by Jan-Peter Timmerman

17:00
Goto-session
Active Session History: Advanced Analytics by David Kurtz
Tip
MySQL for Oracle DBAs by Philipp Michaly
Deploying PL/SQL Applications, Building Rome in a Day by Alan Arentsen

Wednesday, November 16th.
08:00
Goto-session
Logical Replication in 12cR2 – What are the options now? by Vit Špinka
Tip
Function madness: Use and Abuse of PL/SQL Functions by Piet de Visser

09:00
Goto-session
Ensuring your Physical Standby is Usable by Michael Abbey
Tip
RMAN – From Beginner to Advanced by Marcin Przepiorowski

10:00
Goto-session
Oracle Secure Backup – eine Livedemo by Sven-Olaf Hilmer
Tip
Oracle Hacking Session by Kamil Stawiarski
Advanced Interactive Grids by Patrick Wolf

11:00
Goto-session
The Battle: Linux vs. Windows by Dierk Lenz, Johannes Ahrends and Martin Klier
Tip
Adaptive Features or: How I Learned to Stop Worrying… by Ludovico Caldara
Controlling Execution Plans – Workshop by Kerry Osborne

12:00
Goto-session
Application Express für den DBA? Geht das? by Joel Kallman
Tip
Und Sie bewegt sich doch by Lothar Flatz
APEX Desktop Apps – Interaktion mit dem Client System by Daniel Hochleitner

13:00
Goto-session
Hash Joins and Bloom Filters by Toon Koppelaars
Tip
Ansible für Oracle DBAs by Alexander Hofstetter

14:00
Goto-session
Delivering Continuous Availability for Database Services by Michael Timpanaro-Perrotta
Tip
Dbvisit – Oder doch lieber Data Guard by Andreas Kother
Chase the Optimizer Every Step of the Way by Mauro Pagano

15:00
Goto-session
Top 7 Plan Stability Pitfalls & How to Avoid Them by Neil Chandler
Tip
Advanced RAC Programming Features by Martin Bach
Weblogic 101 for DBA by Osama Mustafa

16:00
Goto-session
Bad Boys of Replication – Changing Everything by Björn Rost and yours truly

17:00
Goto-session
Oracle System Statistics by Paul Matuszyk
Tip
Compression – Technik und sinnvolle Umsetzung by Klaus Reimers
Node.js der Alleskönner by Kai Donato

Thursday, November 17th.
08:00
Goto-session
FAQ about Masking Sensitive Data in Oracle Database by Maja Veselica
Tip
Data Guard in Oracle 12.2 – Crash Course by Zoran Pavlovic

09:00
Goto-session
Mining the AWR v2: Trend Analysis by Maris Elsins
Tip
Regular Expressions: Say What? by Alex Nuijten

10:00
Goto-session
Databases Clone Using ACFS by David Hueber
Tip
R.I.P. Oracle Database by Markus Lohn

12:00
Goto-session
Writing Efficient SQL Statements by Joze Senegacnik
Tip
Validate User Input in APEX by Richard Martens

13:00
Goto-session
Backup und Recovery PoC auf der Recovery Appliance by Frank Schneede
Tip
Ready, Steady, GIT: Einführung eines Versionskontrollsystems by Carolin Hagemann

14:00
Goto-session
Warum sollte man die Multitennant Database Option Verwenden by Johannes Ahrends
Tip
Collections in PL/SQL by Frank Haney

15:00
Goto-session
Saving Lives at Sea – At an Industrial Scale Using Oracle Cloud Technology by Oliver Limberg and yours truly
Tip
Part 1: The NoPL/SQL and Thick Database Paradigms by Toon Koppelaars and Bryn Llewellyn

16:00
Goto-session
Part 2: The NoPL/SQL and Thick Database Paradigms by Bryn Llewellyn and Toon Koppelaars

And!!
Do not forget…
The first ever APEX Hack’a’thlon is going down on Friday the 18th of November at the DOAG Education day. If you are interested or just want more information, don’t hesitate to drop me a line.

#DBADev (Ops), who knows what is going on…

I have been considering writing this article for quite some time now.
APEX Connect 2016 in Germany’s capital Berlin and the DOAG Database days have finally persuaded me to talk more about #DBADev. Let me explain why…

Whenever in the stone age…

During my career as DBA, I was always working closely together with Oracle Forms & Reports developers. In retrospect, the cooperation in that time was remarkable.
These Forms & Reports developers had always been used to working on a host-based platform.

For those of you who actually remember Oracle Forms & Reports and wonder…
Was there ever Forms & Reports host based?
Yes there was, but it is scarily long ago!!

Because of the nature of Forms & Reports, there always was a lot of consideration about where to place application code. This especially became true when PL/SQL was introduced and the migration to Oracle Forms & Reports 6.5 came about.
This brought the transition to client/server based computing and introduced physical distance between the database and the “front-end”.
Front-end between quotation marks, as in today’s world “front-end” no longer means quite the same thing. Back then, the “front-end” was more elegantly and fittingly described as a “fat client”, because of the sheer size of the software and utilities that were required on the end user’s workstation.

The physical separation and distance between the presentation entity and the data manipulation engine required and inspired a lot of thought and debate on where the bulk of data processing had to be done.
You can imagine the impact of having a specific data manipulation done inside an Oracle Form that lived on a desktop on the other end of the network, especially when the required data set is large. Imagine 1,000 records being fetched, of which 2 were manipulated and then sent back in bulk, repeated 100 times, 4 times a minute, on a 10 Mbps network. OK, clear, that needs to be done smarter.
The solution: work with small data sets and do database side manipulation to limit client/server communication. And actually, that worked quite well!!
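
In today’s terms the same lesson still applies. A contrived sketch (again assuming a hypothetical ORDERS table): instead of fetching 1,000 rows to the client, changing 2 and sending everything back, ship one statement to where the data lives.

-- One round trip; the database itself touches only the rows that matter.
update orders
set    status = 'CONFIRMED'
where  status = 'PENDING'
and    order_date < trunc(sysdate);

commit;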

All good and fine… But how does this tie in to #DBADev? This already sounds so harmonious. And how could APEX Connect 2016 have inspired this article?

Well… Let’s see

Later on, I found that this cooperation appeared to be not so normal.
If you step out of the world of client/server computing and move on to “today’s world”, which started more or less in the nineties with web-based computing – or cloud computing “avant la lettre“, or “my stuff on your computer”, or however you describe it – you find a world that consists of “strange things”.
I find these things “strange” because I believe they are suboptimal, and luckily I find myself not alone in this corner.
Suboptimal in the sense that data manipulation solutions (let’s call them applications for now) are considered to be database agnostic. This independence dictates that you use the database as just a data store or, even more accurately, as a persistency store (see Blaine Carter’s 2 Minute Tech Tip).

In another scenario these applications are designed and built by developers who are very good at creating intuitive and sharp-looking user interfaces, unfortunately often with a less developed understanding of the mechanics involved in dishing up and serving data to this newly established middle tier.

With the continuing professionalization of IT over the past 20 years, we have seen the creation of a wide variety of disciplines. These range from those who think about IT (architects, managers, designers) to those who build IT (programmers, engineers) to those who run IT (system administrators, operators) and the majority of these disciplines today are self-contained groups of professionals and specialists who excel at their own game. Basically that is good as the profession is wide and complex enough to support this.
The problem is that there is no longer anyone who has the whole picture.

Bring it on / together!

APEX Connect 2016, to me personally, was the first time I really saw #DBADev in practice. With the following two examples I want to illustrate my inspiration.

The first talk of this genre was @alexnuijten with his confessions, and subsequent smart tips and best practices in “Structuring an APEX application”.
A pure database developer like Alex is automatically more prone to thinking about “DBA stuff”. A lot of these best practices, although very database centric, like using a view for each application screen, are obviously primarily there to help the developer. And, don’t get me wrong, that is a very good thing! Alex inspires us to try and combine the best of both worlds, which helps getting the most out of your application, your database, and therewith, frankly, out of your total investment.

The second example was the information-packed presentation by Dietmar Aust @daust_de, called “Oracle APEX Scripting – die Kommandozeile ist Dein Freund“ (the command line is your friend).
Much more than “just about developing”, this presentation bridged gaps in more than one way. Perhaps it is even #DBADevOps if you think about it.

The recent DOAG Database days held a few additional surprises with the presence of @cczarski and @nielsdb. A very well pitched presentation by Bruno Cirone really sparked the growing interest in the topic!!

It is funny how an idea that was initiated some 18 months ago, conceived together with Sabine Heimsath @flederbine has grown and evolved out of natural demand. For me, this is one other aspect of the industry, where APEX is setting new frontiers.
With a growing awareness and more people recognizing the gap, the deficits it brings and the benefits cooperation offers, I have good hopes.

APEX is not only the technology that enables you to create web-based apps super quickly, it is also the technology that brings developers and DBAs truly closer to each other, ensuring maximum bang for the buck when it comes to utilizing your database infrastructure investments!
I am not saying we are there, but this is definitely a first step in the right direction!

Why GUI sucks…

Of course we all know GUI stands for Graphical User Interface, just as CLI stands for Command Line Interface, right?
Or, rather, a GUI is this nice, flashy screen where you can easily roam with your mouse, comparable to a multiple-choice quiz, where the right answer is there for the picking.
A CLI, on the other hand, is this dark, mysterious blinking cursor… Nothing happens unless you know more or less what you are doing. Comparable to an open-question quiz.

Sparked by a recent Twitter discussion, I decided I should probably write the umpteenth blog post about this to make my contribution to this lasting dispute.

Disclaimer:
This post discusses GUI in relation to system administration, not necessarily in relation to data-entry or data manipulation applications that are used in front offices all over the world. I guess CLI has no place in a world like that…

Why GUI sucks?
I have done my fair share of installing, scripting, ad-hoc fiddling, testing and trying. And I have found myself in situations where I worked with younger computer geeks, or even in situations where nobody had the time to figure anything out – stuff just had to be made to work.

Probably in the few lines above, we could already have the basics for this discussion!

But, why then does GUI suck?

GUIs suck because they are limiting, labor (or rather RSI) intensive, and require you, the operator, to be there, physically clicking away on your computer.

Limiting
They are limiting, or at least most of the time they are, because it is often quite hard to get a visual representation of each and every function of a device / program / system etc. Consider, for instance, a networking device, and imagine having to create a GUI that lets the operator configure and define each and every parameter of a specific VLAN or VPN. And then also bear in mind that the GUI has to stay crisp, clean and intuitive.
For this reason, I have seen many vendors create a GUI for basic setup only, relying on the professionals to find their way in the CLI. The GUI can then stay intuitive enough to at least get the basics done.

GUIs that aim not to be limiting, of which there are also a few out there, need to sacrifice a lot of the things that a good GUI should stand for:

  • Short click paths (3 clicks from anywhere to get where you want to be)
  • Intuitive (don’t have to guess or read a manual to use a GUI)

So what you end up with, then, is a maze of riddles, where you can easily spend a good day setting up some new functionality. Somehow I believe this is not what the designers had set out for, nor is it a valid solution for most tasks at hand.

Labor intensive
I personally find GUIs quite labor-intensive, and not just because of the absence of the ability to automate tasks. Especially when there is a lot of specific configuration to be done, you often end up left- and right-clicking until your hands start hurting.
And, in the end, you always end up with the eerie feeling that you missed out on that one specific setting that would really put the icing on your configuration.

Operator presence
Last, but not least… For a GUI to work, you need to be at your workstation. Period.
Anybody who has ever worked on automated testing of applications that rely on a GUI knows about the hideous crime of having to script test cases, either working with hidden button labels, screen coordinates, etc. These scripts fail every other day because a developer moved a window to a better spot or used a new button label. You end up coding your application just to make it testable.
No, GUI requires operator presence, making it useless for automation or scaling.

The bliss of CLI
Okay, the Middle Ages… or the Stone Age…
Nothing really fancy, just a black (or, if you are feeling frivolous, you may choose some nice color) square on your screen with a blinking underscore – most often. And then you say: GUI sucks?

One of the challenges in this hyper-fast-moving world, full of smartphones, tablet PCs and what have you, loaded with intuitive and fast apps, is to realize that “hard core IT” actually is hard core.
You need to learn your stuff first, know what you do and know about the consequences of the choices you make. You will have to learn to be able to walk the walk and talk the talk. Once you have mastered that, this blinking underscore is no longer a roadblock but an invitation! Just like after mastering a foreign language, you will know what to say and do to open up the potential at your fingertips.

And now, reality
Of course, the above rant is just one side of the story.
It is even just one side of the story in hard core IT!

As already stated above, sometimes there is no time to really dive into stuff and get to know the tools you need to get to work for you. I am pretty sure we have all been in a place where we needed to get a project done or some functionality realized, where we just did not have the right devices.

What are your options at such a moment?
Get a hardcore IT specialist who does “talk the talk”?
Probably it will not be cheap, and probably it will be a very thorough configuration, just not exactly as you need it to be… Though still a valid option, in a number of cases it’s a no-go.
This… is where a good GUI comes in handy.
It will allow you, yourself, to organize that which needs organizing in an orderly fashion. Okay, the GUI will have to be accurate and well thought through, but I guess that goes for all interfacing; it is also true for the CLI.

Seeing this story unfold… I guess I still think GUI sucks. (sorry!)
But GUI has a place, a very well-earned place, in a super-fast and highly demanding world. Still, I am convinced that if you are working in a highly professional environment, having to do intricate stuff on live environments, a good script for a CLI is the only way you can create some assurance that whatever change you need to execute will actually have a predictable result.

And putting in the effort of learning how to use any CLI? Well, I guess that’s why it is called “professional IT”.

DBA_FEATURE_USAGE_STATISTICS and SE2

This blog post is inspired by work I have been doing on Standard Edition databases and the returning confusion about what is and what is not part of Standard Edition.

DBA_FEATURE_USAGE_STATISTICS is a tool in determining license usage for the Oracle database. It is good to understand the implications of each entry, know what is happening in your database, and thus be able to have a substantial conversation about the usage of your license, be it SE, SE1, SE2 or EE!
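
To see what your own database reports, a query along these lines will do (run it with appropriate privileges):

-- Which features has this database actually used?
select name,
       version,
       detected_usages,
       currently_used,
       first_usage_date,
       last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;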

This list is the full list of DBA_FEATURE_USAGE_STATISTICS and I have found no source where there is a mapping of these features to database editions. As it is a lot of tedious work, I call upon the community to help complete the list and make it as accurate as can be. So, if you have news, improvements or other bits of information, please send them to me and I will make sure they get added!

WARNING: Still… with all the work that goes into these answers, it is not the law; it is a very serious interpretation of facts which will play a part in helping you make the right decision when it comes to database licensing.

Feature — available in Standard Edition?
Active Data Guard – Real-Time Query on Physical Standby NO !
ADDM NO !
Advanced Replication NO !
Application Express YES
ASO native encryption and checksumming NO – EE option !
Audit Options NO !
Automatic Maintenance – Optimizer Statistics Gathering YES
Automatic Maintenance – Space Advisor YES
Automatic Maintenance – SQL Tuning Advisor NO !
Automatic Memory Tuning
Automatic Segment Space Management (system) YES
Automatic Segment Space Management (user)
Automatic SGA Tuning YES
Automatic SQL Execution Memory YES
Automatic SQL Tuning Advisor NO !
Automatic Storage Management
Automatic Undo Management
Automatic Workload Repository
AWR Baseline NO !
AWR Baseline Template NO !
AWR Report NO !
Backup BASIC Compression
Backup BZIP2 Compression
Backup Encryption
Backup HIGH Compression
Backup LOW Compression
Backup MEDIUM Compression
Backup Rollforward
Backup ZLIB Compression
Baseline Adaptive Thresholds
Baseline Static Computations
Bigfile Tablespace
Block Media Recovery NO !
Change Data Capture NO !
Change-Aware Incremental Backup
Character Semantics
Character Set
Client Identifier
Clusterwide Global Transactions
Compression Advisor
Crossedition Triggers
CSSCAN
Data Guard NO !
Data Mining NO – EE option !
Data Recovery Advisor
Database Migration Assistant for Unicode
Database Replay: Workload Capture NO ! 1
Database Replay: Workload Replay NO ! 1
DBMS_STATS Incremental Maintenance
Deferred Open Read Only
Deferred Segment Creation NO !
Direct NFS
Dynamic SGA
Editioning Views
Editions
EM Database Control
EM Grid Control
EM Performance Page
Encrypted Tablespaces
Exadata
Extensibility
File Mapping
Flashback Data Archive NO ! 2
Flashback Database NO !
GoldenGate NO – EE option ! 3
HeapCompression
Hybrid Columnar Compression NO !
Instance Caging NO !
Internode Parallel Execution
Job Scheduler
Label Security NO – EE option !
LOB
Locally Managed Tablespaces (system) YES
Locally Managed Tablespaces (user)
Locator YES
Logfile Multiplexing
Long-term Archival Backup
Materialized Views (User) NO !
Messaging Gateway NO !
MTTR Advisor
Multi Section Backup
Multiple Block Sizes
Object
OLAP – Analytic Workspaces NO – EE option !
OLAP – Cubes NO – EE option !
Oracle Database Vault NO – EE option !
Oracle Java Virtual Machine (system) YES
Oracle Java Virtual Machine (user)
Oracle Managed Files
Oracle Multimedia
Oracle Multimedia DICOM
Oracle Secure Backup
Oracle Text
Oracle Utility Datapump (Export)
Oracle Utility Datapump (Import)
Oracle Utility External Table
Oracle Utility Metadata API
Oracle Utility SQL Loader (Direct Path Load)
Parallel SQL DDL Execution NO !
Parallel SQL DML Execution NO !
Parallel SQL Query Execution NO !
Partitioning (system) YES
Partitioning (user) NO – EE option !
PL/SQL Native Compilation
Quality of Service Management NO !
Read Only Tablespace
Real Application Clusters (RAC) YES 4
Real-Time SQL Monitoring
Recovery Area
Recovery Manager (RMAN) YES
Resource Manager NO !
Restore Point
Result Cache NO !
RMAN – Disk Backup
RMAN – Tape Backup
Rules Manager
SecureFile Compression (system) YES
SecureFile Compression (user)
SecureFile Deduplication (system) YES
SecureFile Deduplication (user)
SecureFile Encryption (system) YES
SecureFile Encryption (user)
SecureFiles (system) YES
SecureFiles (user)
Segment Advisor (user)
Segment Shrink
Semantics/RDF NO !
Server Flash Cache
Server Parameter File
Services
Shared Server
Spatial NO – EE option !
SQL Access Advisor
SQL Monitoring and Tuning pages NO – EE option !
SQL Performance Analyzer NO !
SQL Plan Management NO !
SQL Profile
SQL Repair Advisor
SQL Tuning Advisor
SQL Tuning Set (system) YES
SQL Tuning Set (user)
SQL Workload Manager
Streams (system) YES 5
Streams (user)
Transparent Data Encryption
Transparent Gateway YES – option
Transportable Tablespace NO ! 6
Tune MView
Undo Advisor
Very Large Memory
Virtual Private Database (VPD) NO !  7
Workspace Manager
  1. Unless used for upgrade to Enterprise Edition.
  2. Unless used without history table optimization.
  3. Goldengate can also be used with Standard Edition, it is a separate product.
  4. RAC on Enterprise Edition is an option.
  5. No capture from redo.
  6. Import transportable tablespaces in all editions.
  7. Policies on XDB$ACL$xd_sp in sys.v_$vpd_policy are internal ( “out of the box”) policies that are used by XDB to control the access to certain internal tables. All the logic is implemented in the xdb.DBMS_XDBZ package and there is no way one can control / influence the way this is working.

The rising of the Standard Edition (#orclse2)

It was the second half of 2011 when Standard Edition database protection tooling was more broadly introduced in the Benelux. Dbvisit Standby was the tool, and protecting data in Standard Edition databases was the deal.

I remember the first meetings vividly! The Standard Edition database? Many people had not heard of this edition or, more frighteningly, the ones that knew about it ignored it. Standard Edition was not something to be taken seriously, let alone used to run your production system on.
Still this time marked the start of the silent (r)evolution and the rising of the Standard Edition.

Since those days many things have changed.
With the continued attention and drive for promoting Oracle Standard Edition (SE) database, the visibility of this edition has flourished.
Obviously the economic hardship of the last few years has encouraged companies to review their IT budgets. The investment-friendly character of SE has helped its growth, especially in such times.
During the second half of 2013 the first broader initiatives around SE started to become visible. One of the highlights of the SE rise was the world premiere of the Standard Edition Round Table during Harmony 2014 in Helsinki, Finland, organized by Ann Sjökvist, Philippe Fierens and myself, the same people that lead the Standard Edition community today.

With the increased attention, worries also came. The Standard Edition and Standard Edition One editions had no cap on the number of cores per processor. This meant that modern servers running SE, equipped with huge numbers of processor cores, brought tremendous processing power at extremely low cost.
Signs of change became visible with the postponed release of Oracle database patch release 12.1.0.2.0 for SE.
And now, in 2015, Standard Edition is a tool to distinguish yourself with. Many IT consultancy firms advertise their SE expertise and have increased visibility in this respect. Many new initiatives have been fired up to help give Standard Edition the punch it needs for the even more serious jobs. News on Standard Edition is spread by a range of blog posts (like this one) as a result, and UKOUG_Tech15 is even hosting a Standard Edition track! We have come a long way!!

And finally, with the release of Oracle Standard Edition Two, on the first of September 2015, the future of Oracle Standard Edition has been secured. The release of version 12.1.0.2.0 marks a new era for this Smart Edition.
Standard Edition Two retains many of the important advantages of Standard Edition and Standard Edition One while capping the processor core factor at a very usable level.

Yes, Oracle Standard Edition is a solid product in the Oracle stack and is still capable of helping Oracle offer the most complete software operations stack, especially due to the development and deployment capabilities of APEX.
An unbeatable, endlessly scalable and super affordable solution on the market today.
We have come a long way to witness the rising of the Standard Edition!

And what about the changes?

  • Oracle Database Standard Edition 2 (SE2) will replace SE and SE1 from version 12.1.0.2 onward;
  • SE2 will have a limitation of maximum 2 socket systems and a total of 16 CPU threads*;
    • SE2 has Resource Manager hard coded to use no more than 16 CPU threads, which helps protect against noncompliance.
  • SE One and SE will no longer be available to purchase from November 10th, 2015;
  • Oracle is offering license migration scenarios from SE One and SE to SE2;
    1. SE One users pay a 20% increase in support for the migration.
    2. SE customers face no other cost increases for license or support*.
  • Named user (NUP) minimums for SE2 are now 10 per server;
  • There are no changes in the use of virtualization solutions;
  • 12.1.0.1 SE and SE1 customers will have 6 months of patching support once SE2 12.1.0.2 is released, with quarterly patches still being available in October 2015 and January 2016.

Hope this helps!

dbms_redefinition housekeeping

dbms_redefinition actually is a nifty but powerful little toolkit that lets you change table definitions without locking the table in a manner that would interrupt regular operations.

You can read loads about it in the Oracle documentation or in the extensive library of Mr. Tim Hall.

One thing I noticed, and which I want to share here, has a lot to do with the housekeeping that is automatically done by dbms_redefinition. Or rather, with some of the bits it didn’t brush up after itself.

dbms_redefinition works using triggers and materialized views to help switch from your current active production table, via a so-called interim table, to your shiny new, redefined production table. You can follow this beautifully by querying the dba_segments view along the way.
For this it obviously creates the materialized view and the other required components, and it removes them after you finish your redefinition trip. After all that is done, you can just remove your interim table and be done with it.
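
For reference, a typical redefinition run looks roughly like this (a sketch; the schema and table names are made up, and your case may need the rowid variant or a column mapping):

declare
  l_errors pls_integer;
begin
  -- Check that the table can be redefined using its primary key.
  dbms_redefinition.can_redef_table('SCOTT', 'BIG_TABLE',
                                    dbms_redefinition.cons_use_pk);

  -- Link the production table to the interim table.
  dbms_redefinition.start_redef_table('SCOTT', 'BIG_TABLE', 'INT_BIG_TABLE');

  -- Copy indexes, triggers, constraints and grants onto the interim table.
  dbms_redefinition.copy_table_dependents('SCOTT', 'BIG_TABLE', 'INT_BIG_TABLE',
                                          num_errors => l_errors);

  -- Catch up on changes made in the meantime, then swap the two tables.
  dbms_redefinition.sync_interim_table('SCOTT', 'BIG_TABLE', 'INT_BIG_TABLE');
  dbms_redefinition.finish_redef_table('SCOTT', 'BIG_TABLE', 'INT_BIG_TABLE');
end;
/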

At least, that is what happened in most of the cases and is what you would expect!

Though, in some cases… it proved impossible to drop the interim table. To me this was somewhat scary… did the redefinition not finish, or did it not finish correctly?

What happened?

There was this table that I redefined. It had referential integrity constraints (a.k.a. foreign key constraints) pointing towards it. Of course dbms_redefinition neatly created versions of these pointing to the interim table, to be sure nothing went wrong.

When finishing redefinition (with dbms_redefinition.finish_redef_table) most of the interim bits and pieces are cleared away, and you just have to drop your interim table manually (okay, we can discuss whether this actually would / could / should be automated, but let’s leave that).

But… when you then manually drop this interim table (in a busy production system, I tend to want to be careful) and just issue ‘drop table int_<tablename>‘, that does not work. dbms_redefinition “forgets” to remove these referential integrity constraints from the other tables (which are neatly named tmp$$_<constraintname>).
This then means you either issue ‘drop table int_<tablename> cascade constraints‘, which is more than the basic ‘drop table‘, or find these constraints and remove them manually first:

-- Generate DROP statements for the leftover tmp$$_ foreign keys
-- that still reference the interim table.
select 'alter table '||owner||'.'||table_name||
       ' drop constraint '||constraint_name||';'
from   dba_constraints
where  constraint_type = 'R'
and    r_constraint_name in
       ( select constraint_name
         from   dba_constraints
         where  table_name = 'INT_<tablename>' );

-- The generated statements look like this:
alter table <schema>.<foreign table> drop constraint TMP$$_<constraint name>;

I guess, personally, I would like dbms_redefinition to do this for me…

It’s smart enough! It created them!

Just a quick additional note: setting ddl_lock_timeout to 30 or 60 for your session can actually help and prevent a lot of nonsense on a busy system.
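
For example (the value is the number of seconds DDL will wait for the lock instead of failing immediately):

-- Let DDL wait up to 60 seconds for busy objects instead of
-- failing straight away with ORA-00054.
alter session set ddl_lock_timeout = 60;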

Hope this helps someone sometime 😉

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, works.

The following case is true:
I had half of a replication pair running.
It means that the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were setting up the target environment. The reason for that was that the setup (exporting source, copying dump to target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I say we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total, redoing the setup would cost 57:05 hrs. – reason enough to try to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in setup wizard when setting up Replication.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on both the source and target server! A sketch of what such entries could look like follows below.
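
A minimal sketch (the host names are made up; the service name follows the example database <cst>):

repl-source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )

repl-target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )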

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE is actually running. It was from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again and create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc.

By comparing the two files, I was able to note the differences and therewith conclude the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that port numbers are correct and that all the individual steps are taken in the right order. This is the overview.

Having made these changes, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh in this directory.
Rename the cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in the cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have verified and accepted this solution as a supported action.

Hope this helps.