Why I picked Postgres over Oracle, part I

As with many stories, if you have something to tell, it quickly takes up a lot of space. Therefore this will be a series of blog posts on Postgres and a bit of Oracle. It will be a short series, though…

Let’s begin

History

I started with databases quite early in my career. RMS by Datapoint… was it really a database? Well, at least sort of. It held data in central storage, but it was a typical serial “database”. Interestingly enough, some of that stuff is still maintained today (talk about longevity!)
After switching to a more modern system, we adopted DEC (Digital Equipment Corporation) VAX, VMS and MicroVAX systems! Arguably still the best operating system around… In any case, it brought us the ability to run the only valid database alternative around: Oracle, with a shining version 6.2, soon to be replaced by version 7.3.4. Okay, truth be told, at that time I wasn’t really that deep into databases, so much of the significance was added later. My primary focus was on getting the job done and serving the business in helping people work better. Still, working with SQL and analyzing data soon became one of my hobbies.
From administering databases, I moved on to a broad range of things, but I always looped back to, or stayed connected with, software and software development using databases.
Really, is there any other way? I mean, building software without using some kind of database?
At some point, we were developing software using the super-trendy client-server concept. It served us well at the time and fit the dogma of those days. No problems whatsoever. We ran our application on “fairly big boxes” for our customers (e.g. single- or dual-core HP D 3000 servers), licensed through one or two Oracle Database Standard Edition One licenses, and the client software was free anyway…
Some rain must fall
The first disconnect I experienced with licensed software was the time we needed to deploy Oracle Reports Server.
After porting our application successfully to some kind of pre-APEX framework, we needed to continue our printing facilities as before. The conclusion was to use Oracle Reports Server, which we could call to fulfill exactly the same functionality as the original client-server printing agent (rwrbe60.exe, I’ll never forget) did. There was simply no way we could do this other than by buying licenses for (I thought it was) Oracle BI Publisher, something each of our clients had to do. This made printing more expensive than the entire database setup, almost the biggest part of the entire TCO of our product, which made no sense at all.

More recently

This disconnect was the first one. Moving forward, I noticed and felt more and more of a disconnect between Oracle and, what I like to call, core technology. Call me what you will, but I feel that if you want to bring a database to market and want to stay on top of your game, your focus needs to be at least seriously fixed on that database.

Instead we saw ever more focus on “non-core” technology: Oracle Fusion, Oracle Applications (okay, Oracle Apps had always been there), and as time progressed, the dilution became ever greater. I grew more and more convinced that Oracle didn’t want to be that Database Company anymore (which proved to be true in the end), but it was tough for me to believe. Here I was, having spent most of my active career focused on this technology, and now it was derailing (or so it felt to me).

Then we saw those final things, with the elimination of Oracle Standard Edition One basically forcing an entire contingent of customers either out (too expensive) or up (invest in Oracle Standard Edition Two and deal with more cost for less functionality). What appeared to be a good thing ended up leaving a bad taste in the mouth.
And, of course… the Oracle Cloud. I am not even going to discuss that in this blog post, sorry.

The switch to Postgres

For me, the switch came in two stages. First, there was this situation where I was looking for something to do… I had completed my challenge and, through a good friend, ran into the kind people of EnterpriseDB: a company I had only little knowledge of, doing stuff for PostgreSQL (or Postgres if you like; please, no “Postgré” or anything alike; find more about the project here), a database I had not much more knowledge of either. But their challenge was very interesting: grow Postgres and show the market the good things it brings.

Before I knew it, I was studying Postgres and all the things that Postgres brings. That turned out to be easy enough in the end, as the internal workings and structures of Postgres and Oracle do not differ that much. I decided to do a presentation on the differences between Postgres and Oracle in Riga, and I was kindly accepted by the committee even when I told them my original submission had changed!
A very good experience, even today, but with an unexpected consequence: the second part of the switch was Oracle’s decision to cut me out of the Oracle ACE program.

It does free me up, somehow, to help database users across Europe re-evaluate their Oracle buy-in and lock-in, and to look at smarter and (much) more cost-effective ways to handle their database workloads. This finalized “the switch”, so to speak.
Meanwhile, more and more people are realizing that there actually are valid alternatives to the Oracle database. Since the adoption of the Oracle database as the only serious solution back in the early 1990s, the world has changed, also for serious database applications!

End of Part I

Please follow this link to the second part of this blog post.

PGConf.EU, Postgres Rocks!!

On a rainy Tuesday morning we set off on a Polish Airlines flight to Warsaw.
Our target: PGConf.EU with some of the biggest crew EnterpriseDB ever sent off.
Our goal: spread the love of EDB within the PostgreSQL community, of which EDB is such an intrinsic part.

The evening before the conference promised to be an interesting one. We had rented out the Hard Rock Cafe in downtown Warsaw for the kickoff of our European Tour. During this tour we will visit Hard Rock Cafes throughout Europe to talk about the future of Postgres.
With 80 people present, our space at the Cafe was a full house! With food and drinks it was a perfect place for networking and sharing stories about Postgres. After the drinks, Warsaw city tour guides took us on a walking sightseeing tour through the city. This inevitably led us to vodka and herring, which crowned the day.

For me personally it was the perfect opportunity to submerge myself in the Postgres community, feel the energy and meet some of the leading names like Bruce Momjian, Dave Page, Robert Haas, Andreas Scherbaum, (“Blame”) Magnus Hagander and many, many more.

Second day in Warsaw, first day of PGConf.EU. The EDB crew slept well and is ready to roll!
At the EDB stand we are hosting a quiz where participants compete to win an electric guitar and free access to one year of online Postgres training. The challenge: answer 11 questions on PostgreSQL as quickly and as correctly as possible!
In a busy and energetic atmosphere there are loads of very good sessions going on, making it hard to single out which ones to follow…

The second day of PGConf.EU continues the flow of excellent talks and fierce competition at our Postgres Rocks Quiz. EDB Postgres Experts are sharing their knowledge abundantly at this conference.
Day two was also my presenting debut at a Postgres conference, together with friend and partner in crime Daniel Westermann of DBI-services. Our talk was well received and we were able to share some of our experiences of entering the PostgreSQL world, coming from Oracle.
EDB concluded the second day of PGConf.EU with a team dinner in the heart of Warsaw. A unique opportunity to bring all the facets of EnterpriseDB together, ranging from our community foundation to our strategic business vision, all in one place to exchange ideas and enjoy good company over a very good meal.

As with all good things, we inevitably reach the closing day, no difference there for PGConf.EU!
More excellent talks on general Postgres development and the influence of various projects on the future of the leading Open Source database project.

We also had a very exciting final to our EDB Postgres Quiz, with a surprise victory for Rafal Hawrylak of TomTom and an excellent runner-up in Matthijs v/d Vleuten of Hendrikx ITC.


On behalf of the entire EnterpriseDB team, I would like to thank the PostgreSQL conference board for an excellent experience in Warsaw!

Live free or die!!


If you would like to relive PGConf.EU, simply review the comprehensive tweet timeline.

Open Source? We have been here before… right!

A little over half a year ago I made an adjustment to my course… nothing too dramatic, but it has still had some impact.

I have chosen the more purely technical path again. Not in the sense that I don’t manage anymore (though I don’t, but that’s more of a side effect).
I have chosen the more purely technical path in the sense that the change from Closed Source software to Open Source software allows you to actually work with things and solve them by creating solutions, rather than trying to figure out how something that someone created for some other issue can be reconfigured so it resembles a solution for your actual problem.

Okay… okay… this of course is exaggerated, but it serves to help think about the issue.

“No one ever got fired for buying Oracle” is one of the phrases I have heard numerous times over the past period.
Well, no… but it’s also no free pass to (sorry for the phrase) waste money on technology you are never going to use.

Over 80% of the installed base uses less than
20% of the technology they are paying for!

I have followed a number of the brightest minds in the industry (our industry, the database industry) for many years, investing vast amounts of time in reverse-engineering pieces of technology that have been built, in order to explain certain behavior.
Of course, very necessary, no argument there, but wouldn’t it be so much cooler if this overwhelming amount of brainpower could be used to actually create stuff?

Open Source instead of Closed Source…
The answer, I think!

Sure, I was raised with vendor-created solutions; that was the default MO when I got trained. VMS, MS-DOS, HP-UX… (are you _that_ old? yes, I am _that_ old) and a number of applications that did the work.
Well, those days are gone… operating systems in data centers have (nearly) all been replaced by Linux distribution installs. And I mean practically all of them.

Sufficiently stable, cost effective and they get the job done.

Next wave?
Databases!

With the current explosive growth of Open Source databases, brace yourselves. Or rather, embrace!

All the exact same arguments that applied to operating systems apply here. There is no difference, and you, the industry, chose! And you will choose the same again. Simply because “it is good enough”, it is much more cost-effective and it gets the job done.
The extensibility, the agility of Open Source database software gives you the ability to let your database, be it OLTP, OLAP, Big Data, Polyglot, or whatever we come up with, do what needs to be done.
The current leader among Open Source relational database systems is PostgreSQL: a platform developed over more than 30 years into an absolutely stable data processing engine, at a fraction of the cost of the Closed Source players in this market.

Conclusion:

  1. We have seen wave 1 of Open Source, where we all chose to replace the operating system standard with Linux.
  2. We will now see wave 2 of Open Source where we will choose to replace the database management system standard with… PostgreSQL (or in specific cases one of the other, more specialized systems, depending on the need).
Hope this helps!

Adding flexibility to your PostgreSQL clusters – Using EDB Failover Manager

Using PostgreSQL in enterprise environments is getting more and more popular. And why not? This extremely stable and performant database can compete with ease with almost any enterprise database installation out there today.

Competing technically? Sure!
Competing from a business perspective? Absolutely!!

Making sure your database systems stay up during planned maintenance? Absolutely yes, no discussion about that!
Ensuring your systems stay up during a catastrophic failure of your master? Yes! We need to ensure 99.99999% availability.

Introducing EDB Failover Manager (EFM for short).

A tool that will do precisely this.

  • A graceful switch-over from a master database to a slave database (and back) with just one single command. This way you have the chance to do maintenance on the (previously master) node.
  • Failover from a master node to a slave node (which will be promoted to new master).
    It is based on PostgreSQL streaming replication, which allows you to attach multiple slave clusters to your master cluster.

The tool ensures access to the cluster of database clusters using a Virtual IP Address. It gives you a wealth of ‘hooks’ where you can call scripts to help you reconfigure your surrounding landscape after a switch of masters. Think of reconfiguring your load-balancing tools, like Pgpool-II, to make sure read and write queries get routed to the correct cluster nodes.

Well, that sounds good, right!

So, what do you need to do?

  1. Make sure your PostgreSQL streaming replication is running (a minimal configuration sketch follows right after this list).
  2. Allocate at least 3 nodes (master/slave/slave or master/slave/witness). You will need three nodes to have a quorum to prevent a split brain scenario.
  3. Install EFM on those 3 nodes and configure it.
  4. Start, run and play!
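For step 1 of the list above, here is a minimal, illustrative sketch only (the parameter values, the replication user and the pg_hba.conf entry are my own assumptions, loosely matching the pg-10/pg-11/pg-12 hosts used further down) of what a PostgreSQL 9.x-era streaming replication setup boils down to:

  # master, postgresql.conf (illustrative values)
  wal_level = replica            # 'hot_standby' on releases before 9.6
  max_wal_senders = 5

  # master, pg_hba.conf: allow the (hypothetical) replication user from the slaves
  host  replication  replicator  192.168.56.0/24  md5

  # each slave, postgresql.conf
  hot_standby = on               # allow read-only queries on the slave

  # each slave, recovery.conf
  standby_mode = 'on'
  primary_conninfo = 'host=pg-10 port=5432 user=replicator'

With streaming replication in place, EFM only has to watch it and act on failures.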

Configuration of EFM is done through efm.properties in the /etc/efm-2.1 directory.
A tip is to create one copy of this file and distribute it over your EFM cluster nodes. There are only one (master/slave/slave configuration) or two (master/slave/witness configuration) parameters that are node-specific:

  • bind.address: specific to each node, <node IP-address>:9001 (9001 is the cluster communication port, the same for all cluster members)
  • is.witness: set this parameter to true if the node holds no database.

All other parameters are well documented in the efm.properties file.
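Purely as an illustration (the address below is made up, and this only shows the two node-specific parameters just mentioned), a node’s efm.properties could then contain something like:

  # /etc/efm-2.1/efm.properties (node-specific part, illustrative values)
  bind.address=192.168.56.11:9001   # this node's own address, plus the shared cluster port
  is.witness=false                  # true only on a witness node that holds no database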

Enter the <IP-address>:9001 of the membership coordinator (basically the first node of the EFM cluster you start) in the efm.nodes file of all the cluster members.
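Roughly, and again with a made-up address, that efm.nodes file then contains little more than:

  # /etc/efm-2.1/efm.nodes (illustrative)
  192.168.56.10:9001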

With this, we are basically good to go!!

systemctl start efm-2.1 on each node and your cluster is running!

The efm-command allows you to manage your cluster. Syntax for the command is: efm <command> <cluster-name> <option>.

  • efm cluster-status efm gives you a nice overview of what is happening. Precede this with the Linux watch command and you can monitor this nicely.
  • efm allow-node efm pg-11 allows node pg-11 to join the EFM cluster
  • efm promote efm -switchover makes the first slave in the standby priority list the new master and converts the previous master to a slave
  • efm set-priority efm pg-10 1 makes node pg-10 the first node in the standby priority list
-bash-4.2$ watch efm cluster-status efm

Every 2.0s: efm cluster-status efm Sun Aug 27 10:02:49 2017

Cluster Status: efm
VIP: 192.168.56.10

Agent Type Address Agent DB Info
 --------------------------------------------------------------
 Master pg-10 UP UP
 Standby pg-11 UP UP
 Standby pg-12 UP UP

Allowed node host list:
 pg-10 pg-11 pg-12

Membership coordinator: pg-10

Standby priority host list:
 pg-11 pg-12

Promote Status:

DB Type Address XLog Loc Info
 --------------------------------------------------------------
 Master pg-10 0/AB0000D0
 Standby pg-11 0/AB0000D0
 Standby pg-12 0/AB0000D0

Standby database(s) in sync with master. It is safe to promote.

For troubleshooting and checking purposes, there are very informative logs in /var/log/efm-2.1.

EFM truly is a very nice tool to add resilience and flexibility to your PostgreSQL database cluster configuration.

EnterpriseDB Summerschool 2017

I have been meaning to write a lot of posts but, meanwhile, with the new challenges and all, it just hasn’t happened.

But!!

Although I don’t tend to do much advertising here, I really do need to share this (unique) opportunity.

I (and my colleagues across EMEA) really want to meet you and share some of our knowledge of EDB Postgres with you. It is especially targeted at Oracle DBAs!
It will cost you one day and there is even a certificate (which you need to earn during the day) to show “I have walked the walk”.

It starts real soon, there are just very few places available, it’s free (!!) and it is just downright plain cool technology without hassle…
Bring your laptop, we provide a VM with a lot of tech pre-installed, a little bit like RAC-Attack or #RepAttack!

Visit this link: http://info.enterprisedb.com/EDB-Postgres-Summer-School and sign up!

Looking forward to seeing you personally in either Frankfurt, Munich or Hamburg.

Birth of a user group conference…

Or, how post-conference blues hit.

Seeding

You don’t actually get to witness these kinds of things too often. Yes, there was the birth of the POUG conference, the passionate work of Kamil Stawiarski and the people he gathered around him. He did an awesome job and pulled it off.
Why then is this so special? Well, first of all because it is my native conference. People that have contributed for many years, working closely together with new members to create something new. I think it is kinda special.
Richard Olrichs, Ise Douwes, Luc Bors and Bart van de Laar formed the team that pulled it off! Kudos to them.

Work had already started at the end of 2016… On the news that this conference was being organized, there was a small Twitter bombardment to recruit as many interesting international speakers as possible to come and join us. This helped create a fabulous agenda, covering 81 (!!) sessions and 3 keynotes over 2 days in the spectacular setting of “De Rijtuigenloods” in Amersfoort.

Importance for NL

We have (finally) done it! The Netherlands has experienced its very first Full Stack Oracle Conference ever! I have said this many times before, and I will probably say it many times again: this is so very important for the spread of knowledge, the exchange of experiences and cross-pollination between countries!
We have never done this before… We have APEX World, which is, of course, super important! And we have SIGs, which are very important for people working within a specialization. All good, all very important! But our business is way too specialized already. If you never take the time to look beyond your boundaries at what your colleague is doing (for which you don’t have time on a day-to-day basis), you’ll get isolated and miss out on possible great ideas, changes and inspiration. For this alone, events like these are so important. For a country / region (as we span the Benelux) that is so active in IT, it is a nuisance to have to go to either the UK or Germany to experience a conference like this.

It is _that_ important…

One personal thing… nl.OUG (this is the brand new name for OGh, which also symbolizes a new start to me) focuses on talks about (end-)user experience; in effect, partners with end-users coming to share project briefs… Good in itself, but not what I would applaud as the main focus. These conferences, to me, are about learning, and the best learning comes from professionals sharing either best practices or talking about technical implementations of technique. The other stories are obviously always very welcome, but should not be the main focus…

My personal experiences

— International scene

As an active member of the Oracle community, I tend to know a number of people, spread out over this globe. One of the joys of a general conference is the fact that many of these people also participate. This leads to many happy encounters: with Oren Nakdimon from Israel, with Sandra Flores from Mexico, with Tim Hall, aka Oracle Base, with Maria Colgan & Brendan Tierney and therefore with Chris Saxon, more than two-thirds of the Ask TOM team!! And even many more famous speakers from home and abroad. It was a very special feeling to meet all these beautiful people on my home turf!

— Followed sessions

I didn’t get to follow many sessions, partly because of the many people I met and wanted to catch up with, and partly because of, well, other responsibilities…
And perhaps a bit because the ratio between serial and parallel sessions was a bit off.
I did get to see:
  • The keynote by Maria Colgan highlighting the many things you can do with an Oracle database
  • Investigating the performance of a statement via the SQL Monitor report by Toon Koppelaars, which is always insightful!
  • Moving Oracle data in real-time – The 3 fundamental principles of Oracle replication by Jakub Sjeba from Dbvisit, which proved to be an excellent basis for my own session, later that day
  • Blockchain on the Oracle Cloud, the next big thing by Robert van Mölken, who helped me understand the technical side of the Blockchain technology
  • It’s a wrap by Lucas Jellema, who did a great job at really zooming out and looking at the bigger picture.

— My sessions

I had the lucky opportunity to present no fewer than two sessions in Amersfoort:
Migrating your Oracle database with almost zero downtime
and
Comparing PostgreSQL to Oracle

Both sessions were well attended and interactive. I enjoyed them very much and, judging from the reactions and interactions, I guess the attendees did too. Thank you for attending!!
Obviously I am happy to see the uptake of PostgreSQL and EDB Postgres in the Benelux. As they say, “horses for courses”: Oracle has its playing field, but so does PostgreSQL, and probably a bigger one than it has today 🙂

And now, the future…

This was 2017, this was the kick-off, the very first one.
With the buzz and with the post-conference blues…

It now is time to look to 2018 and start preparing: gathering lessons learned, taking stock of feedback and making plans.
Whatever the outcome, there can be only one plan! Whether it is called “nl.OUGTech18” or “Tech Experience 2018”, we need to make sure the message reaches further and wider.
With over 250 attendees for a first event, we aim for over 500 for the next one. There are more than enough potential participants in our region to pull this off.
The basic structure is there, the first success is there, let’s do this!!

See you all next year!!
(or hopefully sooner)

Hey JAVA-developer, why don’t you love your database

Why this post?

Partly, this blog post is the result of a promise to Lukas Eder. Basically, my vision adheres quite nicely to the “Thick Database” paradigm driven by Bryn Llewellyn and Toon Koppelaars, who, understandably, drive this from an Oracle perspective.
It also, more than just “of course”, fits EnterpriseDB or even vanilla PostgreSQL database landscapes very nicely.

There is apparently still so much confusion in the world about the how, why and what of good application development and architecture that I decided to chip in my bit. I think I have a bit of an idea of how this ought to work, and I also think it is not a half-bad idea; plus, a couple of people whom I highly regard seem to agree with it. So here goes…

Traditionally

Traditionally there is no big love between application developers and their persistence-store. I don’t really know why because I never found the opportunity to do a real inquiry, but I think I have a reasonable understanding.

Basically there is the constant, enormous pressure of delivering new features and functionality. So much so that the basic development work, the more “boring” and “time-consuming” things (why pay now if you can also pay later?), gets postponed. Things like peer reviews, (integration) testing, technical design… Basically, more people means more features.
If even these things get too little attention, why would something like an overpriced library box get more attention? Not to mention those DBAs you need to get past to even get close(r) to this library box…

Here are my four reasons why I think
it deserves a chance!

1. Easier

Plain and simple. It is easier. If you take a procedural database language like PL/SQL or PL/pgSQL, it is basically easy. Modeled on the programming language Ada, it is easy to understand, and one can quite easily build a number of routines, procedures, packages and functions to let the database chop and glue data and just deliver the results for your application to consume.
It saves you the time of having to (re)write some of this more complex stuff in your application, or perhaps even across several applications.
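As a minimal sketch of what such a routine can look like in PL/pgSQL (the table, columns and function name are made up purely for illustration):

-- a hypothetical routine that chops and glues data close to where it lives
CREATE OR REPLACE FUNCTION customer_order_summary(p_customer_id integer)
RETURNS TABLE (order_month date, order_count bigint, total_amount numeric)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
    SELECT date_trunc('month', o.order_date)::date,
           count(*),
           sum(o.amount)
      FROM orders o
     WHERE o.customer_id = p_customer_id
     GROUP BY 1
     ORDER BY 1;
END;
$$;

-- any application, Java or otherwise, now consumes the result with a single call:
-- SELECT * FROM customer_order_summary(42);

The logic lives in one place, next to the data, and every caller gets the same answer.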

2. Quicker

As said under the first point: build once, use many times. By creating mechanisms in the database, you get the opportunity to think about separating data manipulation mechanisms from data representation mechanisms, and about where you want to put which specific function. Of course, this decision process takes a little extra time in the beginning, but it will repay itself many times over as your project grows and gains meaningful complexity (is there such a thing as “meaningful complexity”? well, yes…).

Quicker also applies to operational response times. Querying a stored procedure will bring agility to your application, in the sense that this stored procedure will be much quicker in getting you the answer than if you do this in a distributed (middleware) environment. These stored procedures can also be accessed through REST endpoints, giving flexibility and the possibly desired disconnect between the database and the application layer.
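The post does not name a tool for those REST endpoints; purely as one hedged example, PostgREST exposes database functions under an /rpc/ path, so calling the hypothetical function sketched earlier could look something like this:

  # assuming PostgREST serves the schema on its default port 3000
  curl -s -X POST http://localhost:3000/rpc/customer_order_summary \
       -H "Content-Type: application/json" \
       -d '{"p_customer_id": 42}'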

3. Cheaper

Powerful, because you are and you remain close to the data. No data transportation overhead, no latency. This kind of slimming down means fewer requirements for infrastructure and distributed capacity (either hardware or (virtual) “cloud infrastructure”). It frees up budget which can then be spent on the more meaningful bits and pieces of your application.

4. More consistent

Finally, I think this approach brings more consistency. As you do things near where the data lives, near the processing power you depend on, you also get single access paths to specific bits of your data, to specific constructs that drive the value of the application you are building. Through this, it does not matter from where you call this service, REST endpoint, stored procedure or whatever you call it: you always get the same answer, driving a consistent decision process based on that application.

And if ever something changes, there is also just one place for you to ensure the new requirements are added. Voilà, consistency throughout your application landscape, as everything that depends on this data set gets the same updated information.

The magic of working together

There is a lot of misunderstanding between DBAs and developers. I have been in both roles at some point and I have seen this happen first hand. One of the things I have learned in that force field, though, is the power, the joy and, through this, the magic of working together.
In the end, we all have the same goal, which is furthering our business by being the best at what we do. For a developer this means meeting feature requests, short development cycles, quick delivery and, as much as possible, getting it right in one go. For a DBA it means making sure the database stays consistent, performant and available. And, by extension, for operations it means that the final product must be easily and quickly deployable, entering into an uneventful and dependable life cycle.

Bringing together these seemingly conflicting disciplines is fun! By investing a little time in exploring the other disciplines, you will find common drivers, in the sense that everyone wants the same thing. By getting over nearly religious initial differences, you will find magic in the combination. You will reach your goals earlier, with the bonus that your co-workers will also reach their goals earlier and have a better end result than you dreamt possible.

True JAVA-Champion

Coming from another world, I do not know the requirements for becoming a JAVA-Champion. I imagine they are not too different from other recognition programs out there… But…
If you create more features and functions, and you are able to run your application with greater concurrency on (way) less platform power, thus increasing the ROI of your application, that makes you a true Champion of JAVA and of your business!! Especially if you are able to combine this with some magic in your cooperation…
Believe me, it is more fun in the end too!

Containerization, do we need container-carriers?

In maritime logistics containers and container carriers are not really new.

Sitting in the plane, the following thoughts occurred to me…

In fact, containers in IT are a concept which is 1-on-1 derived from these physical containers.
We have seen and read many good and informative blog posts and presentations about this. Obviously there is a lot of confusion about this as well. In my opinion you should be careful not to mix and match too intensively. I think containerization and microservices, for instance, have a lot less in common than some would lead you to believe.
This though is not what I wanted to discuss.

I would want to argue that one can containerize a stack too deep (or too high, depending on your viewpoint).

A container, typically, is an isolatable element which can be stacked upon another isolatable element. For instance, a web server is stacked upon an instance of bash, stacked upon its dependencies, creating a container stack which is capable of serving HTTP requests on port 80 of the IP address inherited from the IP stack underneath the bash instance.

Well, logical. Repeatable, but in a sense also complex: complexity caused by the sheer number of layers that comprise the stack.

Wouldn’t it be an idea to extend this train of thought and also introduce container carriers?

Just like in the analogy with container carriers in maritime logistics, these would be larger founding blocks on which various containers can be stacked.

  1. How would this differ from a setup with a regular VM? You would still have the lightweight, easily transportable qualities of containers.
  2. How would this differ from just stacking containers to create this? It would enable further development of seamless integration of the founding layers of what this container carrier is made up of, improving stability and specialization.

It would eliminate the feeling of wheel-reinvention that, for me, somehow still lingers around software containers. With the ever-growing adoption of container technology as the foundation for cloud infrastructure, it could make for a quick cost-saver.

My thought-train put to paper. Hope it helps someone, somewhere, somewhat…

Riga Dev Days 2017, new experiences in many ways.

Riga Dev Days 2017

General

It has been a while since my last blog-post.
One of the reasons is my shift from closed to open source software, databases more specifically. More on that in a later blog-post.

The reason for already mentioning this is this strange hybrid (what a popular word, these days) situation that I am in at the moment.
Thanks to the super enthusiastic, flexible and tenacious organization-team of the Riga Dev Days, I was able to participate.
Having happily boarded the Air Baltic flight, I was on my way to Riga!!

Being new to the broader conference scene, I enjoyed being at a mixed source developer conference. Besides the usual suspects – some of whom are my best friends – I got to meet many interesting new people.
One of the key phrases of the day is: “the more you learn, the more you realize you know nothing – John Snow…” and it’s true! You never stop to think about it, but the wealth of subjects is just tremendous, and the combined knowledge at events like these is downright “Yuge, it’s awesome, tremendous!”

Day one

With a day like this, time flies. Between sessions (and during sessions) there are discussions, a bit of work and catching up to do.
Still, I managed to catch a few sessions, like the one by Michael Hüttermann, who made a clear and well-rounded case regarding CI/CD in a DevOps world. A nice insight into the effort that goes into what’s behind the proverbial “push of a button”.
Another example was the one by Marcos Placona about the many (and very basic) things that you have to keep in mind when building apps. There is no silver bullet, and the best you can achieve is to discourage the hacker so much that they move on. Much like securing your house, so to speak.

The day ended in the medieval basements of Riga, where we had some really good medieval food. Life is good…, well…, it has its moments!

Day two

The keynote address by Edson Yanaga, which kicked off day two of the Riga Dev Days, was quite interesting.
Shortening development and deployment cycles and shrinking feature release sets actually helps improve software and deployment quality by creating faster and more accurate feedback loops. By looking at these concepts in this way, buzzwords like DevOps and Agile actually take real shape. One of the lessons, though, is that doing things this way does not eliminate work or automagically solve various issues for you! It will help in getting predictability and continuity into your software development processes.
A nice eye-opening remark finally, was… “no, I don’t pay you to make something work on your computer, I pay you to make something work on my computer(s)!!”

Another talk I was able to attend was around Blockchains. Something I knew nothing of and was actually quite interested in. Nick Zeeb took us through a very lively and very animated tour of what actually a Blockchain is and what the awesome potential of this technology can be. I was impressed.

With this, the second day drew to an end, and with it came my turn “in the pit”. As this event is held in a movie theater, every room has a sloped, tiered floor, which was often packed with enthusiastic participants. I had the opportunity to share my thoughts on the comparison between PostgreSQL and Oracle.
The session was very well attended, with a lot of questions regarding the possibilities of using these other technologies at scales that were not really considered before. You can find a recording of the actual presentation here as soon as it becomes available.

Riga Dev Days was a good conference. I would recommend everyone to either attend or submit an abstract for their event in 2018!!

#Oracle cutting in inspiration and new business?

After the many years Oracle has been leading the database world, I think they are now taking something of a wrong turn.
Let me briefly fill you in on my thoughts.

Basically I see two “minor” shifts that are significantly indicative of this:

  1. Oracle Standard Edition 2
  2. Oracle ACE Program

Okay, so you might think I am crazy, but let me try to explain.

Oracle Standard Edition 2

Sometime last year, the long-expected, anticipated…, perhaps even dreaded, change to the Oracle database licensing strategy arrived.

Oracle Standard Edition (SE) and Oracle Standard Edition One (SE1) licenses were addressed.
There was A LOT of debate on this, I mean, A LOT. Discussions which ran all the way back to HQ, and were driven by passionate people inside and outside of Oracle, inside and outside of the Oracle community… To no avail.

It had been very clear for quite a long time that the SE / SE1 strategy was nothing short of unsustainable inside the Oracle licensing realm. Even so, Oracle SE and SE1 enabled many projects and customers to adopt the phenomenal Oracle technology. It had some limitations, but with smart thinking and smart planning, a lot of projects could be run with Oracle SE(1). “I am such a good DBA, I can even do it with Oracle Standard Edition!”
Alas, we now have Oracle Standard Edition 2 (SE2) with a new and upgraded price of US$ 17k (!!), making this solution rather out of the question for many of the projects meant above. Please note that SE1 was already a significant investment for some of the projects I have come to know over the years in regions such as the Baltics and Africa.
Yes, of course, I know you can do all of this “in the cloud”. But with the limitation that there are hardly any CSPs (Cloud Service Providers 😉 ) that enable you to make use of the “cheaper” Oracle license. If you want to leverage your local cloud vendor (mind my word choice here) it’s BYOL (Bring Your Own License) and, voilà, you’re done for anyway.

Hence, the first significant “shift” in Oracle’s span of attention for new business, creativity and growth…

Oracle ACE Program

More recently there was also a change in the Oracle ACE Program, which has also led to much debate. But… it is not that bit of the change I am referring to; I am referring to the bit that does not affect me directly…

Oracle has a small number of very highly appreciated and “industry leading” community advocates called “Oracle ACE Directors”. These people not only have a deep knowledge of everything that is happening in this corner “of the industry”, but are also very passionate about sharing this knowledge. Sharing with Oracle Users, sharing with stakeholders within the Oracle organization, basically, with everyone with a hunger for knowledge around the technology.

For this, these Directors had a few privileges. When they invested their time and their energy in traveling this globe to share, Oracle would support them with some of their travel expenses. This always had the air of “wow, they are paid”. Believe me, it was bare-minimum support, just a plane ticket and a hotel bed for a previously approved conference, and only when they were actually accepted to do a talk. Nothing shiny, nothing business-classy…

Until now. With the changes to the system, these modest privileges for the Directors have also ceased to be.

That was my second significant “shift” in Oracle’s span of attention for new business, creativity and growth…

It has me worried… I should not worry, as it does not affect my day-to-day business… yet.

And that while we have this cool tech, with PL/SQL, with APEX, with all the features, options and whatnot, to create solutions that could really better the world (I also firmly believe this).

Oracle is just closing this door, and my toes were still in the doorway, so that hurts.

This was my rant, hope it helps.