Category Archives: Administration

Why document databases are old news…


We’re going to store data the way it’s stored naturally in the brain.

This is a phrase being heard more often today. This blog post is inspired by a short rant Babak Tourani (@2ndhalf_oracle) and I had on Twitter today.


This phrase is used by companies like MongoDB or graph database vendors to explain why they choose to store information / data in an unstructured format. It is new, it is cool, hip and happening. All the new compute power and storage techniques enable doing this.
How cool is that!!
Well, it is… for the specific use-cases that can benefit from such techniques. Think of analytical challenges, where individual bits of information basically have no meaning on their own. If you are analyzing a big bunch of captured data coming from a single source, like a machine, a click-stream or social media, for instance, one single record basically has no meaning. If that is the case, and it is really not very interesting whether you retain every individual bit of information because you are interested in “the bigger picture”, these solutions can really help you!

How cool is it, actually?

If it comes to the other situations where you want to store and process information… where you do care about the individual records (I mean, who wants to repopulate their shopping cart on a web-shop 3 times before all the items stick in the cart) there are some historical things that you should be aware of.
Back in the day when computers were invented, all information on computers was stored “the way it’s stored naturally in the brain”.
Back in the day when computers were invented, all we had were documents to store information.
This new, cool, hip-and-happening tech is, if anything, not new at all…
Sure, things changed over the last 30 years, and with all the new compute power and storage techniques, the frayed ends of data processing have significantly improved. This makes executing data analysis, as described above, so much better!! Really, we can do things to data, using these cool new techniques, that we never dreamt possible 30 years ago.
But these things remain the “frayed ends of data processing”.
If you do have requirements like filling your shopping cart once, and it works all the way through check-out…
If you do have requirements where some kind of “transaction” is required (like buying something, like your bank account, like two actions that are dependent on each other)…
You need transactions…
I know, “transaction” is boring, old-fashioned and a seemingly outdated concept…
But, I promise you, you will want those things, if you actually have to process something in your application in a way that makes real-world sense.
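
To make this tangible, here is a minimal sketch, in plain SQL, of what such a transaction looks like (the cart_items and stock tables are hypothetical): either both statements take effect, or neither does.

  BEGIN;
  -- Put the item in the cart and reserve the stock as one atomic unit.
  INSERT INTO cart_items (cart_id, product_id, quantity)
  VALUES (42, 1001, 2);
  UPDATE stock
     SET quantity = quantity - 2
   WHERE product_id = 1001;
  -- If anything fails before this point, ROLLBACK (or a crash) undoes both
  -- changes; COMMIT makes them visible together, or not at all.
  COMMIT;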

This was solved ages ago

For that, indeed 30 years ago (which is such a long time that most of the cool young dudes and dudettes developing applications today were not even born), relational database theory was invented to solve the inherent issues that document-based databases bring if you want to introduce these transactions to your application.
Document databases brought these issues back in the day… They bring these issues today!!!
Please believe me, they bring these issues today! This is the reason – contrary to the messages by non-relational database vendors – application developers find that they need to add actual transactional capabilities to their applications, to either work in real life or bring any kind of scalability to them.
Imagine building an application and actually being successful with it! Isn’t that the dream of every application project? How boring is it then, to find that you are unable to meet demand? Not because you are understaffed or because you lack compute resources, but simply because your application, based on these data storage methodologies, cannot keep up? A document database is data storage, not data processing.
For that, you would need the likes of PostgreSQL. Postgres is (also) free, it is Open Source… it is even Community Open Source, how cool is that? No annoying vendor telling fantasy stories about what Postgres can do, unlike MongoDB for instance.

So…

Coming back to the opening phrase: “We’re going to store data the way it’s stored naturally in the brain.”
It is kind of dumb to use a computer to store data like it would be stored in the brain. The human brain is not designed to process YUGE amounts of data, simply because the structure is not designed to accommodate that. Period.
To process large amounts of data, you need structures, either when you store the data or at the moment you want to start doing stuff with it. Structuring data when you store it is by far the cheapest method. Technologies like JSON data storage add sufficient flexibility to that, and engines like Postgres have no trouble whatsoever processing such data.
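
As a minimal sketch of that combination (the events table and its payload are hypothetical): Postgres lets you store flexible JSON documents inside a structured table and still index and query them relationally.

  CREATE TABLE events (
      id      serial PRIMARY KEY,
      payload jsonb NOT NULL
  );

  INSERT INTO events (payload)
  VALUES ('{"user": "jan", "action": "checkout"}');

  -- A GIN index makes containment queries on the flexible part fast.
  CREATE INDEX events_payload_idx ON events USING gin (payload);

  SELECT payload ->> 'user'
    FROM events
   WHERE payload @> '{"action": "checkout"}';
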
Finally, the programs these vendors use to “store data the way it’s stored naturally in the brain” are written in computer-code, also not “naturally like the brain”. Would we need to revert to medieval clerks to start recording the data in these documents? No, I guess not.
Be smart,
Be modern,
Be hip and happening,
Be efficient and scalable,
Use relational database techniques…


A week of PostgreSQL

One of the attractive things of my job is this… Just a bit more often than every now and then, you get the opportunity to get out and meet people to talk about Postgres. I don’t mean the kind of talk I do every day, which has more of a commercial touch to it. – Don’t get me wrong, that is very important too! – But I mean, really talk about PostgreSQL, be part of the community and help spread the understanding of what open source database technology can do for companies. Running implementations, either small or large, trivial or mission critical…

This past week was one of those weeks.

I got to travel through Germany together with Mr. Bruce Momjian himself. Bruce is one of the most established and senior community leaders for Postgres. Bruce is also my colleague and I would like to think I may consider him my friend. My employer, EnterpriseDB, gives us the opportunity to do this. To be an integral part of the PostgreSQL community, contribute, help expand the fame of Postgres, no strings attached. Support the success of the 30,000 to 40,000 engineers creating this most advanced open source RDBMS.

The week started with travel, and I got to Frankfurt. Frankfurt would be the proving ground for the idea of a pop-up meet-up. Not an EDB marketing event or somewhere we sell EnterpriseDB services, but a place where anyone can just discuss PostgreSQL.
We will be in a city, in a public place, answering questions, discussing things or just relaxing with some coffee. The purpose is to show what the PostgreSQL community is all about, to anyone interested!

The first day in Frankfurt, we spent at the 25hrs hotel. We had some very interesting discussions on:

  • Postgres vs. Oracle community
  • Changing role of DBA:
    • The demise of the Oracle DBA
    • RDBMS DBA not so much
  • Risk management
  • “Data scientist”
  • Significance of relational growing again

In the afternoon we took the train to Munich, which was a quick and smooth experience. Munich would be the staging ground for a breakfast meeting, or a lunch… or just saying hi.

Bruce and I spent the day discussing:

  • How to go from using Postgres as a replacement for peripheral Oracle to Postgres as a replacement for all Oracle
  • Using Postgres as a polyglot data platform, bringing new opportunities

After the meet-up we headed to Berlin, training towards the final two events of this week. We spent Thursday teaching the EDB Postgres Bootcamp, having a lot of fun and absolutely not sticking to the program. With Bruce there, and very interesting questions from the participants, we were able to talk about the past and the future of Postgres and all the awesome stuff that is just around the corner.
Friday morning started with a brisk taxi-drive from Berlin to the Müggelsee Hotel. And, if you happen to talk to Bruce, you simply must ask him about this taxi-trip 😉

pgconf.de ended up being a superb event with a record-breaking number of visitors and lots of interesting conversations. You will find loads of impressions here!

I got to meet a great number of the specialists that make up the Postgres community:
Andreas ‘Ads’ Scherbaum
Devrim Gündüz
Magnus Hagander
Emre Hasegeli
Oleksii Kliukin
Stefanie Stölting
Ilya Kosmodemiansky
Valentine Gogichashvili

I am already looking forward to the next Postgres events I get to attend… pgconf.de 2019 will in any case happen on the 10th of May in Leipzig.
It would be super cool to see you there, please submit your abstracts using the information from this page!

Why I picked Postgres over Oracle, part III

This is the final episode of this short series of blog posts on some of my drivers for moving to Postgres from Oracle.
Please do read Part I and Part II of the series if you have not done so. They discussed the topics “History”, “More recently”, “The switch to Postgres”, “Realization”, “Pricing”, “Support” and “Extensibility”.

In summary:

  • Part one focused on “why not Oracle anymore, so much”
  • Part two discussed the comparison between PostgreSQL and Oracle
  • Part three talks some more about what Postgres actually is

Community

One of the more important things to be really, really aware of is that Postgres is “not just open source”. Postgres is “community open source”.

Now, why would that be important, you might wonder.

We all know what open source stands for. There are many advantages to an open source system, and in our case, an open source database.
A number of arguments are in this blog post series. If you take this one step further, though, and realize that Postgres is a community open source project, what are the extra advantages?

A community open source project is not limited, in any way, to any one specific group of developers (let’s call them a company). For example, let’s look at MongoDB. This is an open source database, but it is developed by MongoDB Inc.
It is, in essence, controlled by MongoDB.

Postgres is developed by the Postgres Developer Community coached by the Postgres Core Team.
This makes Postgres incredibly open, independent and it enables its developers to truly focus on actual business problems that need to be solved. There is no ulterior drive to satisfy commercial goals or meet any non-essential requirements.

Development

A very important discriminator, one that only became clear and apparent to me after I dove into Postgres some more, is the development…

The actual development of the database core software is done by this community we’ve just identified.

“Well, yes…” you might say, this is what open source stands for. But the impact of this extends well beyond support, as I mentioned in part 2 of this series. The ability to be part of where Postgres goes, to have actual influence on the development, is awesome, especially for a database platform in the current “world in flux”.
Postgres users don’t necessarily have to wait until “some company” decides to put something on the road-map or develop it at their discretion. These company-decisions will mostly be driven by the most viable commercial opportunity, not necessarily the most urgent technical need.

The development of Postgres is more focused on “getting it right”.
One nice example is the Postgres query optimizer. The Postgres community hates bugs. When bugs start to get discussed, it results in many emails within the community, which means a lot of reading!
Many bugs are fixed very quickly, so that this email storm stops!
For optimizer bugs, therefore, turn-around times (from report to production fix) can be as low as 72 hours, even for mechanisms as complex as a query optimizer.

Invitation

I would like to invite all of you in the Oracle community to take a look at the Postgres query optimizer and share your concerns, worries, bugs or praise with the Postgres community.
If you want to, you can share this with us at the https://www.postgresql.org/list/pgsql-hackers/ email list. We are looking forward to your contributions!

Future

Oracle

I can only speak from what I see. What I see is that Oracle is becoming an on-line services company. I see them moving away from core technology like the database and accompanying functions. Oracle is more and more moving to highly specialized applications aimed at very big companies.
Chat-bots, social media interactions, integrated services and more, delivered from a tightly integrated but also tightly locked set of Oracle owned and operated data-centers, or rather, the Oracle cloud.

Is this useful? Of course, there will be targeted customers of Oracle who will continue to find this all extremely useful, and it will be, to them.
Is this for me? No, not really.

PostgreSQL

In the beginning, Linux was not something anyone wanted for anything serious. I mean, who wanted to run anything mission critical on anything else than Solaris, HP-UX, VMS, IBM? No-one…
And that was just a few years ago. Imagine!
Today, in any old data-center, if you were to eliminate the Linux-based servers, you would not have much left.
This same thing is now also happening in what I guess is the second wave of open source. More complex engines are being replaced by open source, and the ever-present relational database engine is one of those.

Why? Price, extensibility, flexibility, focus, you name it. We have seen it before and we will see it again.

EnterpriseDB

If you permit me just these few words.

I think EnterpriseDB is extremely important for PostgreSQL. We have been fighting at the forefront since the beginning, supporting PostgreSQL’s move to the enterprise. EnterpriseDB has dedicated, and will continue to dedicate, a large amount of its resources to PostgreSQL. We are a PostgreSQL support company. We just have not been very good at patting ourselves on the back…
As a company we are doing extremely well, simply because Postgres is rock solid in all facets and ready to take on the world, even the most daunting tasks – and beyond.
This will continue as Postgres will continue in this second wave of Open Source.

I thank you for your attention.
If you have additional questions or comments, please do not hesitate to contact me.

Why I picked Postgres over Oracle, part II

Continuing this short series of blog posts on some of my drivers for moving to Postgres from Oracle.
Please do read Part I of the series if you have not done so. It discussed the topics “History”, “More recently” and “The switch to Postgres”.

Realization

In the last months, discussing Postgres with my Oracle peers, playing with the software and the tooling, I quite quickly realized Postgres is a lot cooler, at least to me. Not so much overly complicated technology, but rather built to be super KISS. The elegance of simplicity, and it still gets the job done.
Postgres handles a lot more complex workloads than many (outsiders) might think. Some pretty serious mission-critical workloads are handled by Postgres today. Well, basically, it has been doing this for many, many years. This obviously is very little known, because who would want to spend good money on marketing for Open Source Software, right? You just spend your time building the stuff and let somebody else take care of that.
Well… we at EnterpriseDB do just those things, …too!

And, please, make no mistake, Postgres is everywhere, from your fridge and video camera, through TV set-top boxes, up to major on-line banking software. And many other places where you would not expect a database to (be able to) run. Postgres is installed in places that never get touched again. Because of the stability and the low-to-no-touch administrative character of Postgres, it is ideally suited for these specific implementations. Built on some of the oldest design principles around, Postgres doesn’t have to be easy to create as a database engine, as long as it “just works” in the end.
Many years ago, an Oracle sales director also included such an overview in his pitch. All the places Oracle touches everybody’s lives, every day. This is no different for Postgres, it is just not pitched anywhere, by anyone, as much.

I have the fortunate opportunity to work closely together with (for instance) Bruce Momjian (PostgreSQL core team founding member and EnterpriseDB colleague). I also had the opportunity to learn from him some of the core principles on which Postgres was designed and built. This is fundamentally different from many other software projects I know and I feel it truly answers some of the core-requirements of database projects out there today! There is no real overview of these principles, so that’s on my to-do list.

Working with PostgreSQL

Pricing

Postgres is open source… it is true open source. It is even a true community open source project, but more about that later in the next installment.

Open source software is free to use, but it does not cost nothing!

But, wait! Open source does not mean for free! How…, why…, what do you mean??

Well… you need support, right!?
The community can and will help you, answer questions, solve some of your problems. But they will not come in to install, configure and run Postgres for you. You will need to select and integrate your specific selection out of the wealth of tools. You basically have a whole bunch of additional tasks to complete to get your Postgres platform sorted out.
Companies like EnterpriseDB can help you mitigate these tasks. This allows you to focus on the things you actually want to achieve, using Postgres.

In comparison to traditional database vendors, the overall price of your solution will be significantly reduced when using Postgres as your open source database engine.

Support

A significant difference between Oracle (for instance) and Open Source support services is interchangeability.
In the end, Oracle support can only be given by Oracle. They are the only ones that have access to the software sources and can look up (and hopefully fix) issues. With support for Postgres, or any true community open source product, different companies can provide support. If you don’t like the company you work with… you switch. This drives these companies to be really good at delivering support! How is that for an eye-opener?

Extensibility

One of the superb advantages of Postgres is its native extensibility. I mean, think about it for a moment… having a relational database platform with the strength of Postgres (or the strength of Oracle or Microsoft SQL Server, for that matter). Postgres gives you options to integrate a wealth of data sources, data types, custom operators and more extensions than you will ever need! The integration into Postgres is so solid that these extensions function like any other function in the core of Postgres.
And, rest assured, the chances that you will ever be faced with having to build this yourself are extremely slim. There are 30 to 40 thousand developers working with larger and smaller pieces of Postgres code. Chances are that if you find yourself challenged, somebody else faced and solved that challenge before you. That solution will then be available for you to take and use, solve your challenge and move on. That is also open source for you.
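
As a small illustration, loading an extension is a single statement, after which its types and operators behave like built-ins (hstore is just one example from the standard contrib set):

  -- Load the extension into the current database.
  CREATE EXTENSION IF NOT EXISTS hstore;

  -- Its data type and operators are immediately usable, as if core features.
  SELECT 'color => red, size => XL'::hstore -> 'color';  -- returns 'red'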

This capability is what makes Postgres ultimately suited to fill the central role in any polyglot environment we see being built today.
Maximizing the amount of information from data available in multiple data silos in an organization is a challenge we see more and more often today. Integrating traditional applications such as ERP and CRM with data-warehousing results, again combined with big-data analysis and event-data-capture aggregates, generates additional decision-driving information out of the combination of these silos. Postgres, by design, is ultimately suited for this. It saves you from migrating YUGE amounts of data from one store to another, just to make good use of it.
The open source dogma “horses for courses” eliminates double investments and large data migrations or transformations; it just enables you to combine and learn from what you already have. A sketch of what this can look like follows below.
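
A sketch of such a combination, assuming the ERP silo runs in another Postgres database reachable at erp-db.example.com (the server name, credentials and schema are hypothetical; for non-Postgres sources, other foreign data wrappers exist):

  CREATE EXTENSION IF NOT EXISTS postgres_fdw;

  -- Describe the remote silo and how to log in to it.
  CREATE SERVER erp_silo
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'erp-db.example.com', dbname 'erp');

  CREATE USER MAPPING FOR CURRENT_USER
      SERVER erp_silo
      OPTIONS (user 'report', password 'secret');

  -- Make the remote tables visible locally; no data is migrated.
  CREATE SCHEMA erp;
  IMPORT FOREIGN SCHEMA public
      FROM SERVER erp_silo
      INTO erp;

  -- Remote ERP data can now be joined directly with local tables.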

End of part II

A link to part three of this blog post will be placed here shortly.

Why I picked Postgres over Oracle, part I

As with many stories, if you have something to tell, it quickly takes up a lot of space. Therefore this will be a series of blog posts on Postgres and a bit of Oracle. It will be a short series, though…

Let’s begin

History

I have started with databases quite early on in my career. RMS by Datapoint… was it really a database? Well, at least sort of. It held data in central storage, but it was a typical serial “database”. Interestingly enough, some of this stuff is maintained up to today (talk about longevity!)
After switching to a more novel system, we adopted DEC (Digital Equipment Corporation) VAX, VMS and MicroVAX systems! Arguably still the best operating system around… In any case, it brought us the ability to run the only valid alternative for a database around: Oracle. With a shining Oracle version 6.2, soon to be replaced by version 7.3.4. Okay, truth be told, at that time I wasn’t really that deep into databases, so much of the significance was added later. My primary focus was on getting the job done, serving the business in making people better. Still, working with SQL and analyzing data soon became one of my hobbies.
From administering databases, I did a broad range of things, but always looping back to or staying connected to software and software development using databases.
Really, is there any other way, I mean, building software without using some kind of database?
At a good point in time, we were developing software using the super-trendy client-server concept. It served us well at the time and fitted the dogma of those days. No problems whatsoever. We were running our application on “fairly big boxes” for our customers (e.g. single or double core HP D 3000 servers), licensed through 1 or 2 Oracle Database Standard Edition One licenses, and the client software was free anyway…
Some rain must fall
The first disconnect I experienced with licensed software was that time we needed to deploy Oracle Reports Server.
After porting our application successfully to some kind of pre-APEX framework, we needed to continue our printing facilities as before. The conclusion was to use Oracle Reports Server, which we could call to fulfill the exact same functionality as the original client-server printing agent (rwrbe60.exe, I’ll never forget) did. There was only no way we could do this other than buying licenses for (I thought it was) Oracle BI Publisher, something each of our clients had to do. This made printing more expensive than the entire database setup, almost even the biggest part of the entire TCO of our product, which made no sense at all.

More recently

This disconnect was the first one. Moving forward I noticed and felt more and more of a disconnect between Oracle and, what I like to call, core technology. Call me what you will, I feel that if you want to bring a database to the market and want to stay on top of your game, your focus needs to be at least seriously fixed on that database.

Instead we saw ever more focus on “non-core” technology. Oracle Fusion, Oracle Applications (okay, Oracle Apps had been there always), and as time progressed, the dilution became ever greater. I grew more and more in the belief that Oracle didn’t want to be that Database Company anymore (which proved to be true in the end), but it was tough for me to believe. Here I was, having spent most of my active career focused on this technology, and now it was derailing (or so it felt to me).

Then we saw those final things, like the elimination of Oracle Standard Edition One, basically forcing an entire contingent of their customers either out (too expensive) or up (invest in Oracle Standard Edition Two, and deal with more cost for less functionality). What appeared to be a good thing ended up leaving a bad taste in the mouth.
And, of course… the Oracle Cloud, I am not even going to discuss that in this blog post, sorry.

The switch to Postgres

For me the switch came in two stages. First, there was this situation where I was looking for something to do… I had completed my challenge and, through a good friend, ran into the kind people of EnterpriseDB. A company I had only little knowledge of, doing stuff for PostgreSQL (or Postgres if you like; please, no “Postgré” or anything alike, find more about the project here), a database I had not much more knowledge of either. But their challenge was very interesting! Grow and show Postgres and the good things it brings to the market.

Before I knew it, I was studying Postgres and all the things that Postgres brings. Which was easy enough in the end, as the internal workings and structures of Postgres and Oracle do not differ that much. I decided to do a presentation on the differences between Postgres and Oracle in Riga. I was kindly accepted by the committee, even when I told them my original submission had changed!
A very good experience, even today, but with an unexpected consequence: the second part of the switch was Oracle’s decision to cut me out of the Oracle ACE program.

It does free me up, somehow, to help database users across Europe re-evaluate their Oracle buy-in and lock-in, and look at smarter and (much) more cost-effective ways to handle their database workloads. This finalized “the switch”, so to speak.
Meanwhile more and more people are realizing that there actually are valid alternatives to the Oracle database. After the adoption of the Oracle database as the only serious solution back in the early 1990s, the world has changed, also for serious database applications!

End of Part I

Please follow this link to the second part of this blog post.

Adding flexibility to your PostgreSQL clusters – Using EDB Failover Manager

Using PostgreSQL in enterprise environments is getting more and more popular. And why not? This extremely stable and performant database can compete with ease with almost all enterprise database installations out there today.

Competing technically? Sure!
Competing from a business perspective? Absolutely!!

Making sure your database systems stay up during planned maintenance? Absolutely yes, no discussion about that!
Ensuring your systems stay up during a catastrophic failure of your master? Yes! We need to ensure 99.99999% availability.

Introducing EDB Failover Manager (or EFM for short).

A tool that will do precisely this.

  • A graceful switch-over from a master database to a slave database (and back) with just one single command. This way you have the chance to do maintenance on the (previously master) node.
  • Failover from a master node to a slave node (which will be promoted to new master).
    It is based on PostgreSQL streaming replication, which allows you to create multiple slave clusters from your master cluster.

The tool ensures access to the cluster of database clusters using a Virtual IP Address. It gives you a wealth of ‘hooks’, where you can call scripts that help you reconfigure your surrounding landscape after a switch of masters. Think of re-configuring your load-balancing tools, like Pgpool-II, to make sure read and write queries get assigned to the correct cluster nodes.

Well, that sounds good, right!

So, what do you need to do?

  1. Make sure your PostgreSQL streaming replication is running.
  2. Allocate at least 3 nodes (master/slave/slave or master/slave/witness). You will need three nodes to have a quorum and prevent a split-brain scenario.
  3. Install EFM on those 3 nodes and configure it.
  4. Start, run and play!

Configuration of EFM is done through efm.properties in the /etc/efm-2.1 directory.
A tip is to create one copy of this file and distribute it over your EFM cluster nodes. There are respectively one (master/slave/slave configuration) or two (master/slave/witness configuration) parameters that are node-specific.

  • bind.address: specific to each node, <node IP-address>:9001 (9001 is the cluster communication port, the same for all cluster members)
  • is.witness: set this parameter to true if the node holds no database.

All other parameters are well documented in the efm.properties file.

Enter the <IP-address>:9001 of the membership coordinator (basically the first node of the EFM cluster you start) in the efm.nodes file of all the cluster members.

With this, we are basically good to go!!

systemctl start efm-2.1 and your cluster is running!

The efm-command allows you to manage your cluster. Syntax for the command is: efm <command> <cluster-name> <option>.

  • efm cluster-status efm gives you a nice overview of what is happening. Precede this with the Linux watch command and you can monitor this nicely.
  • efm allow-node efm pg-11 allows node pg-11 to join the EFM cluster
  • efm promote efm -switchover makes the first slave in the standby priority list the new master and converts the previous master to a slave
  • efm set-priority efm pg-10 1 makes node pg-10 the first node in the standby priority list
-bash-4.2$ watch efm cluster-status efm

Every 2.0s: efm cluster-status efm Sun Aug 27 10:02:49 2017

Cluster Status: efm
VIP: 192.168.56.10

 Agent Type      Address    Agent    DB       Info
 --------------------------------------------------------------
 Master          pg-10      UP       UP
 Standby         pg-11      UP       UP
 Standby         pg-12      UP       UP

Allowed node host list:
 pg-10 pg-11 pg-12

Membership coordinator: pg-10

Standby priority host list:
 pg-11 pg-12

Promote Status:

 DB Type         Address    XLog Loc       Info
 --------------------------------------------------------------
 Master          pg-10      0/AB0000D0
 Standby         pg-11      0/AB0000D0
 Standby         pg-12      0/AB0000D0

Standby database(s) in sync with master. It is safe to promote.

For troubleshooting and checking purposes, there are very informative logs in /var/log/efm-2.1.

EFM truly is a very nice tool to add resilience and flexibility to your PostgreSQL database cluster configuration.

EnterpriseDB Summerschool 2017

Meanwhile, I have been meaning to write a lot of posts. With the new challenges, and all, it just hasn’t happened.

But!!

Although I don’t tend to do much advertising here, I really do need to share this (unique) opportunity.

I (and my other colleagues across EMEA) really want to meet you and share some of our knowledge on EDB Postgres with you. It is especially targeted at Oracle DBAs!
It will cost you one day and there is even a certificate (which you need to earn during the day) to show “I have walked the walk”.

It starts real soon, there are just very few places available, it’s free (!!) and it is – just downright plain cool – technology without hassle…
Bring your laptop, we provide a VM with a lot of tech pre-installed, a little bit like RAC-Attack or #RepAttack!

Visit this link: http://info.enterprisedb.com/EDB-Postgres-Summer-School and sign up!

Looking forward to seeing you personally in either Frankfurt, Munich or Hamburg.

Hey JAVA-developer, why don’t you love your database

Why this post?

Partly, this blog post is the result of a promise to Lukas Eder. Basically, my vision adheres quite nicely to the “Thick Database” approach driven by Bryn Llewellyn and Toon Koppelaars who, understandably, drive this from an Oracle perspective.
It also, of course, nicely fits EnterpriseDB or even vanilla PostgreSQL database landscapes.

There is apparently still so much confusion in the world on the how, why and what of good application development and architecture that I decided to chip in my bit. I think I have a bit of an idea of how this ought to work, and I also think it is not a half-bad idea; plus, a couple of people whom I highly regard seem to agree with it. So here goes…

Traditionally

Traditionally there is no big love between application developers and their persistence-store. I don’t really know why because I never found the opportunity to do a real inquiry, but I think I have a reasonable understanding.

Basically, there is constantly the enormous pressure of delivering new features and functionality. So much even that the basic development work, the more “boring” and “time consuming” things (why pay now if you can also pay later?) get postponed. Things like peer reviews, (integration) testing, technical design… Basically, more people means more features.
If even these things get too little attention, why would something like an overpriced library-box get more attention? Not to mention these DBAs you need to pass to even get close(r) to this library-box…

Here are my four reasons why I think it deserves a chance!

1. Easier

Plain and simple. It is easier. If you take a procedural database language like PL/(pg)SQL, it is basically easy. Founded on the programming language Ada, it is easy to understand, and one can quite easily build a number of routines, procedures, packages and functions to let the database chop and glue data and just deliver the results for your application to consume. A minimal sketch follows below.
It saves you the time of having to (re)write some of this more complex stuff in your application, or perhaps even across several applications.
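
A minimal sketch of the idea in PL/pgSQL (the order_lines table is hypothetical): the database does the chopping and gluing, and every caller gets the finished number.

  -- Calculate an order total inside the database, close to the data.
  CREATE FUNCTION order_total(p_order_id integer)
      RETURNS numeric
      LANGUAGE plpgsql
  AS $$
  DECLARE
      v_total numeric;
  BEGIN
      SELECT sum(quantity * unit_price)
        INTO v_total
        FROM order_lines
       WHERE order_id = p_order_id;

      RETURN coalesce(v_total, 0);
  END;
  $$;

  -- Any application, report or REST endpoint consumes the same logic:
  SELECT order_total(42);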

2. Quicker

As said under the first point: build once, use many times. By creating mechanisms in the database, you get the opportunity to think about separating data manipulation mechanisms from data representation mechanisms, and where you want to put which specific function. Of course, this decision process takes a little extra time in the beginning, but it will repay many times over as your project grows and gains meaningful complexity (is there something like “meaningful complexity”? well, yes…).

Quicker also applies to operational response times. Querying a stored procedure will bring agility to your application in the sense that this stored procedure will be much quicker in getting you the answer than if you do this in a distributed (middleware) environment. These stored procedures can be accessed through REST endpoints, giving flexibility and the possibly desired disconnect between the database and the application layer.

3. Cheaper

Powerful, because you are and remain close to the data. No data transportation overhead, no latency. This kind of slimming down means fewer requirements for infrastructure and distributed capacities (either hardware or (virtual) “cloud infrastructure”). This slimming down frees up budget which can then be spent on the more meaningful bits and pieces of your application.

4. More consistent

Finally, I think this approach brings more consistency. As you do things near the data they live on, near the processing power you depend on, you also get single access paths to specific bits of your data, to specific constructs that drive the value of the application you are building. Through this, it does not matter from where you call this service, REST endpoint, stored procedure, or whatever you call it; you always get the same answer, driving a consistent decision process based on that application.

And if ever something changes, there is also just one place for you to ensure these new requirements are added. Voila, consistency throughout your application landscape, as all that depends on this data-set gets this uniquely updated information.

The magic of working together

There is a lot of misunderstanding between DBAs and developers. I have been in both roles at some point and I have seen this happen first hand. One of the things I have learned in that force field, though, is the power, the joy and, through this, the magic of working together.
In the end, we all have the same goal, which is furthering our business by being the best at what we do. For a developer this means meeting feature requests, short development cycles, quick delivery and, as much as possible, getting it right in one go. For a DBA this means making sure the database stays consistent, performant and available. And, in extension of that, for operations it means that the final product must be easily and quickly deployable, to enter into an uneventful and dependable life-cycle.

Bringing together these seemingly conflicting disciplines is fun! By investing a little time in exploring the other disciplines, you will find common drivers, in the sense that everyone wants the same thing. By getting over nearly religious initial differences, you will find magic in the combination. You will reach your goals earlier, with the bonus that your co-workers will also reach their goals earlier and have a better end result than you dreamt possible.

True JAVA-Champion

Coming from another world, I do not know the requirements for becoming a JAVA Champion. I imagine it to be not too much different from other recognition programs out there… But…
If you create more features and functions and you are able to run your application with greater concurrency on (way) less platform power, thus increasing the ROI of your application, this makes you a true Champion of JAVA and your business!! Especially if you are able to combine this with some magic in your cooperation…
Believe me, it is more fun in the end too!

Containerization, do we need container-carriers?

In maritime logistics containers and container carriers are not really new.

Sitting in the plane, the following thoughts occurred to me…

In fact, containers in IT are a concept which is 1-on-1 derived from these physical containers.
We have seen and read many good and informative blog posts and presentations about this. Obviously there is a lot of confusion about this as well. In my opinion you should be careful not to mix and match too intensively. I think containerization and microservices, for instance, have a lot less in common than some would lead you to believe.
This, though, is not what I wanted to discuss.

I would want to argue that one can containerize a stack too deep (or too high, depending on your viewpoint).

A container, typically, is an isolatable element which can be stacked upon another isolatable element. For instance, a web server is stacked upon an instance of bash, stacked upon its dependencies, creating a container stack which is capable of serving HTTP requests on port 80 of the IP address inherited from the IP stack underneath the bash instance.

Well, logical. Repeatable, but in a sense also complex; complexity by the sheer number of layers that comprise the stack.

Wouldn’t it be an idea to extend this train of thought and also introduce container carriers?

Just like in the analogy with container carriers in maritime logistics, these would be larger founding blocks on which various containers can be stacked.

  1. How would this differ from a setup with a regular VM? You would still have the lightweight, easily transportable qualities of containers.
  2. How would this differ from just stacking containers to create this? It would enable further development of seamless integration of the founding layers of what this container carrier is made up of, improving stability and specialization.

It eliminates the feeling of wheel reinvention that, for me, somehow still lingers around software containers. With the ever growing adoption of container technology as the foundation for cloud infrastructure, it can make for a quick cost-saver.

My thought-train put to paper. Hope it helps someone, somewhere, somewhat…

#Oracle cutting in inspiration and new business?

Oracle has been leading the database world for many years, but I guess they are now taking something of a wrong turn.
Let me briefly fill you in on my thoughts.

Basically I see two “minor” shifts that are significantly indicative of this:

  1. Oracle Standard Edition 2
  2. Oracle ACE Program

Okay, so you might think I am crazy, but let me try to explain.

Oracle Standard Edition 2

Sometime last year, the long expected, anticipated…, dreaded perhaps even, change to the Oracle database licensing strategy was there.

Oracle Standard Edition (SE) and Oracle Standard Edition One (SE1) licenses were addressed.
There was A LOT of debate on this, I mean, A LOT. Discussions which ran all the way back to HQ, and were driven by passionate people inside and outside of Oracle, inside and outside of the Oracle community… To no avail.

It had been very clear for quite a long time that the SE / SE1 strategy was nothing short of unsustainable inside the Oracle licensing realm. Even so, Oracle SE and SE1 enabled many projects and customers to adopt the phenomenal Oracle technology for their projects. It has some limitations, but with smart thinking and smart planning, a lot of projects could be run with Oracle SE(1). “I am such a good DBA, I can even do it with Oracle Standard Edition!”
Alas, we now have Oracle Standard Edition 2 (SE2) with a new and upgraded price of US$ 17k (!!), making this solution rather out of the question for many of the projects mentioned above. Please note that SE1 already was a significant investment for some of the projects I have come to know over the years in regions such as the Baltics and Africa.
Yes, of course, I know you can do all of this “in the cloud”. But with the limitation that there are hardly any CSPs (Cloud Service Providers 😉) that enable you to make use of the “cheaper” Oracle license. If you want to leverage your local cloud vendor (mind my word choice here) it’s BYOL (Bring Your Own License) and, voila, you’re done in for anyway.

Hence, the first significant “shift” in Oracle’s span of attention for new business, creativity and growth…

Oracle ACE Program

More recently there was also a change in the Oracle ACE Program, which has also led to much debate. But… it is not that bit of the change I am referring to; I am referring to the bit that does not affect me directly…

Oracle has a small number of very highly appreciated and “industry leading” community advocates called “Oracle ACE Directors”. These people not only have a deep knowledge of everything that is happening in this corner “of the industry”, but are also very passionate about sharing this knowledge. Sharing with Oracle Users, sharing with stakeholders within the Oracle organization, basically, with everyone with a hunger for knowledge around the technology.

For this, these Directors had a few privileges. When they invested their time and their energy in traveling this globe to share, Oracle would support them in some of their travel expenses. This always had the air of “wow, they are paid”. Believe me, it was bare-minimum support, just a flight ticket and a hotel bed for a previously approved conference, and only when they actually were accepted to do a talk. Nothing shiny, nothing business-classy…

Until now. With the changes to the system, these modest privileges for the Directors have also ceased to be.

There was my second significant “shift” in Oracle’s span of attention for new business, creativity and growth…

It has me worried… I should not worry, as it does not affect my day-to-day business… yet.

Albeit we have this cool tech, with PL/SQL, with APEX, with all the features, options and what not, to create solutions that could really better the world (I also firmly believe this).

Oracle is just closing this door, and my toes were still in the doorway, so that hurts.

This was my rant, hope it helps.