Category Archives: Operating Systems

Adding flexibility to your PostgreSQL clusters – Using EDB Failover Manager


Using PostgreSQL in enterprise environments is becoming more and more popular. And why not? This extremely stable and performant database can compete with ease with almost any enterprise database installation out there today.

Competing technically? Sure!
Competing from a business perspective? Absolutely!!

Making sure your database systems stay up during planned maintenance? Absolutely yes, no discussion about that!
Ensuring your systems stay up during a catastrophic failure of your master? Yes! We need to ensure 99.99999% availability.

Introducing EDB Failover Manager (EFM for short).

A tool that will do precisely this.

  • A graceful switch-over from a master database to a slave database (and back) with just a single command. This way you get the chance to do maintenance on the (previously master) node.
  • Failover from a master node to a slave node (which will be promoted to new master).
    It is based on PostgreSQL streaming replication, which allows you to create multiple slave clusters to your master cluster.

The tool ensures access to the cluster of database clusters using a Virtual IP Address. It gives you a wealth of ‘hooks’, where you can call scripts that help you reconfigure your surrounding landscape after a switch of masters. Think of re-configuring your load-balancing tools, like Pgpool-II, to make sure read and write queries get assigned to the correct cluster nodes.
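
As an illustration of such a hook, a minimal sketch of a post-promotion script property (the script path is a made-up example, and you should verify the exact property name in the efm.properties shipped with your EFM version):

# efm.properties -- call a user-supplied script after a standby has been
# promoted, e.g. to repoint Pgpool-II at the new master
# (script path is hypothetical; check the property name for your version)
script.post.promotion=/usr/local/bin/repoint_pgpool.sh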

Well, that sounds good, right!

So, what do you need to do?

  1. Make sure your PostgreSQL streaming replication is running (a minimal sketch of the standby side follows this list).
  2. Allocate at least 3 nodes (master/slave/slave or master/slave/witness). You will need three nodes to have a quorum to prevent a split-brain scenario.
  3. Install EFM on those 3 nodes and configure it.
  4. Start, run and play!
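
For step 1, a minimal sketch of the standby side of streaming replication, as it looked in the PostgreSQL 9.x era that EFM 2.1 targets (host name and replication user are illustrative assumptions):

# recovery.conf on each slave (PostgreSQL 9.x era; values illustrative)
standby_mode = 'on'
primary_conninfo = 'host=pg-10 port=5432 user=replicator'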

Configuration of EFM is done through efm.properties in the /etc/efm-2.1 directory.
A good tip is to create one copy of this file and distribute it over your EFM cluster nodes. There are respectively one (master/slave/slave configuration) or two (master/slave/witness configuration) parameters that are node-specific.

  • bind.address: specific to each node, <node IP-address>:9001 (9001 is the cluster communication port, the same for all cluster members)
  • is.witness: set this parameter to true if the node holds no database.

All other parameters are well documented in the efm.properties file.

Enter the <IP-address>:9001 of the membership coordinator (basically the first node of the EFM cluster you start) in the efm.nodes file of all the cluster members.
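
Putting the node-specific pieces together, a sketch for one non-witness node, using the node names from the status output further below (all IP addresses are illustrative assumptions):

# /etc/efm-2.1/efm.properties on node pg-11 (addresses illustrative)
bind.address=192.168.56.12:9001   # this node's own address + cluster port
is.witness=false                  # true only on a node without a database

# /etc/efm-2.1/efm.nodes on every member: address of the membership
# coordinator (here assumed to be pg-10)
192.168.56.11:9001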

With this, we are basically good to go!!

systemctl start efm-2.1 and your cluster is running!

The efm-command allows you to manage your cluster. Syntax for the command is: efm <command> <cluster-name> <option>.

  • efm cluster-status efm gives you a nice overview of what is happening. Precede this with the Linux watch command and you can monitor it nicely.
  • efm allow-node efm pg-11 allows node pg-11 to join the EFM cluster
  • efm promote efm -switchover makes the first slave in the standby priority list the new master and converts the previous master to slave
  • efm set-priority efm pg-10 1 makes node pg-10 the first node in the standby priority list
-bash-4.2$ watch efm cluster-status efm

Every 2.0s: efm cluster-status efm Sun Aug 27 10:02:49 2017

Cluster Status: efm
VIP: 192.168.56.10

Agent Type      Address         Agent   DB      Info
--------------------------------------------------------------
Master          pg-10           UP      UP
Standby         pg-11           UP      UP
Standby         pg-12           UP      UP

Allowed node host list:
 pg-10 pg-11 pg-12

Membership coordinator: pg-10

Standby priority host list:
 pg-11 pg-12

Promote Status:

DB Type         Address         XLog Loc        Info
--------------------------------------------------------------
Master          pg-10           0/AB0000D0
Standby         pg-11           0/AB0000D0
Standby         pg-12           0/AB0000D0

Standby database(s) in sync with master. It is safe to promote.

For troubleshooting and checking purposes, there are very informative logs in /var/log/efm-2.1.

EFM truly is a very nice tool to add resilience and flexibility to your PostgreSQL database cluster configuration.


Containerization, do we need container-carriers?

In maritime logistics containers and container carriers are not really new.

Sitting in the plane, the following thoughts occurred to me…

In fact, containers in IT are a concept which is derived 1-on-1 from these physical containers.
We have seen and read many good and informative blog posts and presentations about this. Obviously there is a lot of confusion about this as well. In my opinion you should be careful not to mix and match too intensively. I think containerization and microservices, for instance, have a lot less in common than some would lead you to believe.
This, though, is not what I wanted to discuss.

I would want to argue that one can containerize a stack too deeply (or too high up, depending on your viewpoint).

A container, typically, is an isolatable element which can be stacked upon another isolatable element. For instance, a web server is stacked upon an instance of bash, stacked upon its dependencies, creating a container stack which is capable of serving HTTP requests at port 80 of the IP address inherited from the IP stack underneath the bash instance.

Well, logical. Repeatable, but in a sense also complex: complexity created by the sheer number of layers that comprise the stack.

Wouldn’t it be an idea to extend this train of thought and also introduce container carriers?

Just like in the analogy with container carriers in maritime logistics, these would be larger founding blocks on which various containers can be stacked.

  1. How would this differ from a setup with a regular VM? You would still have the lightweight, easily transportable qualities of containers.
  2. How would this differ from just stacking containers to create this? It would enable further development of seamless integration of the founding layers of what this container carrier is made up of, improving stability and specialization.

It eliminates the feeling of wheel reinvention that, for me, somehow still lingers around software containers. And with the ever-growing adoption of container technology as the foundation for cloud infrastructure, it could make for a quick cost-saver.

My thought-train put to paper. Hope it helps someone, somewhere, somewhat…

Why GUI sucks…

Of course we all know GUI stands for Graphical User Interface, just as CLI stands for Command Line Interface, right!
Or, rather, a GUI is this nice, flashy screen where you can easily roam with your mouse, comparable to a multiple choice quiz, where the right answer is there for the picking.
A CLI on the other hand is this dark, mysterious blinking cursor… Nothing happens unless you know more or less what you are doing. Comparable to an open questions quiz.

Sparked by a recent Twitter discussion, I decided I should probably write the umpteenth blog post about this to make my contribution to this lasting dispute.

Disclaimer:
This post discusses GUI in relation to system administration, not necessarily in relation to data-entry or data manipulation applications that are used in front offices all over the world. I guess CLI has no place in a world like that…


Why GUI sucks?
I have done my fair share of installing, scripting, ad-hoc fiddling, testing and trying. And, I have found myself in the situation where I worked with younger computer geeks or even in situations where nobody had the time to figure anything out – stuff just had to be made to work.

Probably in the few lines above, we could already have the basics for this discussion!

But, why then does GUI suck?

GUIs suck because they are limiting, labor-intensive (or rather RSI-intensive) and require you, the operator, to be there, physically clicking away on your computer.

Limiting
They are limiting, or at least most of the time they are, because it is often quite hard to get a visual representation of each and every function of a device / program / system etc. Consider, for instance, a networking device, and try to imagine having to create a GUI that lets the operator configure and define each and every parameter of a specific VLAN or VPN. And then also bear in mind that the GUI has to stay crisp, clean and intuitive.
For this reason, I have seen many vendors who have created a GUI for basic setup only, relying on the professionals to find their way in the CLI. The GUI can then stay intuitive enough to at least get the basics done.

GUIs that aim not to be limiting, of which there are a few out there too, need to sacrifice a lot of the things that a good GUI should stand for:

  • Short click paths (3 clicks from anywhere to get where you want to be)
  • Intuitive (don’t have to guess or read a manual to use a GUI)

So, what you end up with, then, is a maze of riddles, where you can easily spend a good day setting up some new functionality. Somehow I believe this is not what the designers had set out for nor is it a valid solution for most tasks at hand.

Labor intense
I personally find GUIs quite labor-intensive, and not just because of the absence of the ability to automate tasks. Especially when there is a lot of specific configuration to be done, you often end up left- and right-clicking until your hands start hurting.
And, in the end, you always end up with the eerie feeling that you missed out on that one specific setting that would really put the icing on your configuration.

Operator presence
Last, but not least… For a GUI to work, you need to be at your workstation. Period.
Anybody who has ever worked on automated testing of applications that rely on a GUI, knows about the hideous crime of having to script test-cases, either working with hidden button-labels, screen coordinates, etc. Where these scripts fail every other day because a developer moved a window to a better spot or used a new button-label. You end up coding your application just to make it testable.
No, GUI requires operator presence, making it useless for automation or scaling.

The bliss of CLI
Okay, Middle Ages… or Stone Age…
Nothing really fancy, just a black (or, if you are feeling frivolous, you may choose some nice color) square on your screen with a blinking underscore – most often. And then you say: GUI sucks?

One of the challenges in this hyper-fast-moving world full of smartphones, tablet PCs and what have you, loaded with intuitive and fast apps, is to realize that actually “hard core IT” is hard core.
You need to learn your stuff first, know what you do and know about the consequences of the choices you make. You will have to learn to be able to walk the walk and to talk the talk. Once you have mastered that, this blinking underscore is no longer a roadblock but an invitation! Just like after mastering a foreign language, you will know what to say and do to open up the potential at your fingertips.

And now, reality
Of course, the above rant is just one side of the story.
It is even just one side of the story in hard core IT!

As already stated above, sometimes there is no time to really dive into stuff and get to know the tools you need to get to work for you. I am pretty sure we all have been in a place where we needed to get a project done or some functionality realized, where we just did not have the right devices.

What are your options at such a moment?
Get a hardcore IT specialist who does “talk the talk”?
Probably it will not be cheap, and probably it will be a very thorough configuration, but just not exactly as you need it to be… Still a valid option, though in a number of cases it's a no-go.
This… is where a good GUI comes in handy.
It will allow you, yourself, to organize that which needs organizing in an orderly fashion. Okay, the GUI will have to be accurate and well thought through, but that goes for all interfacing; it is just as true for the CLI.

Seeing this story unfold… I guess I still think GUI sucks. (sorry!)
But GUI has a place, a very well-earned place, in a super-fast and highly demanding world. Still, I am convinced that if you are working in a highly professional environment, having to do intricate stuff on live environments, a good script for a CLI is the only way you can create some assurance that whatever change you need to execute will actually have a predictable result.

And putting in the effort of learning how to use any CLI? Well, I guess that’s why it is called “professional IT”.

Using a terminal emulator on Mac

Dumb title for a blog post? No, not really I guess…

I have been using a terminal emulator basically ever since I got away from the VT100 terminal:

  • ICE.TCP Pro
  • KEAVT
  • Reflection ‘X’

And a few other obscure applications that I cannot even recall anymore.
Currently, and over the last 6 to 8 years, I have been using ZOC.

The background of this story: in the beginning there were the first DOS PCs, and later some Windows-based machines, that needed to interface with (in my case) VAX VMS, and later with the other UNIXes.

But why use a terminal emulator on Mac, for crying out loud? I hear you think… OS X is a Unix, so it should all be native, right?
Wrong! Well, kind of…

There are so many small (and bigger) differences between the various systems that it pays off to have a program that allows you to tune into these differences. Nothing is more annoying than a backspace key that does not work or key combinations that act differently than you would expect.
This is even more true when you work with a mix of different operating systems: Solaris, HP-UX, Oracle Linux, perhaps even some IBM OSes.
And then there is the further tunability of your toolkit, ranging from colors to sizes, from fonts to layout. Frivolities? Perhaps, but if you spend a lot of your time every day in such a tool, it does make a difference.

More important, though, are configurable logging, for documentation and troubleshooting; you can regard this as the modern variation of the old-school printer terminal (who can remember those?). Add to that configuring transfer types, modem and communication settings and keeping these organized, as well as password storage and administrative support.

Well, basically, this is why I use a terminal emulator on my Mac!
And I think I have found a valid tool in ZOC, by Markus Schmidt. Please do check out ZOC.

Well, I hope you get to enjoy your terminal work as much as I do!

Setting up SQL Instant Client on Mac

In doing more work directly from my MacBook Air, I ran into a situation where native connectivity to an Oracle environment was needed.
From experience I have always been a big fan of the full Oracle Client, just because it comes with a lot of tools and utilities for troubleshooting, which makes the actual experience a bit more pleasant.
Looking and asking around, though, I learned fairly quickly that this client is just not available for Mac OS X… Thanks to Osama Mustafa for confirming.

So, that is a fact, even though quite a number of IT pros are working with a Mac!

This leaves no other choice than to divert to the Oracle Instant Client, which then, indeed, is just an 11g Instant Client (11.2.0.4)!
It would humor me if Oracle were to bring out a 12c full client for Mac, as well as an instant client, if someone would so desire.
To have some more tooling around the client, I downloaded all the packages, including at least SQL*Plus.

Though the install process is relatively straightforward (download the archives and unzip them in place), getting SQL*Plus to actually run is a somewhat different ballgame!
As usual, when you start a tool, you're bombarded by messages about dynamic libraries that cannot be found. This set me (very briefly) on a path to place these files where they were expected on my Mac.

In a place like:

/ade/b/2649109290/oracle/sqlplus/lib

for instance, you would need to place a number of these libraries.
This leaves you with the option to populate your system with all these specific libraries, which is of course just fine, but not my choice (think of the mess whenever having to clean up), especially when it can be avoided.

A quick search pointed me to this excellent blog post by Casey Lucas about this exact same issue. With a tool called ‘otool’* applied as suggested, I am now able to run SQL*Plus natively on my Mac without error messages.

* otool – object file displaying tool
If you don't have it yet, calling it from the command line will prompt your Mac to install it, along with the other command line developer tools.
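
To give a flavor of the kind of fix involved, a minimal sketch (the library name and paths here are illustrative; Casey Lucas's post has the exact steps for your version):

# show which library paths the sqlplus binary expects
otool -L /Applications/instantclient_11_2/sqlplus

# rewrite a hard-coded Oracle build path to the library's real location
install_name_tool -change \
    /ade/b/2649109290/oracle/sqlplus/lib/libsqlplus.dylib \
    /Applications/instantclient_11_2/libsqlplus.dylib \
    /Applications/instantclient_11_2/sqlplus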

That is nice, but it only gets you just over halfway there.


Now I want something where I can just run:
sqlplus <username>@<database>
without intricate connect-strings.

This leaves one minor “hack”, or rather “edit”, required: your .bash_profile needs a bit of a path addition and an environment setting:
alias ll="ls -l"
export TNS_ADMIN=/Applications/instantclient_11_2
export PATH=/Applications/instantclient_11_2:$PATH

Note: the alias was already in there 😉

To top it off, I created a small tnsnames.ora in the directory with the instant client (keeping all related files neatly tucked away together):

xesource =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = 192.168.56.66)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = xe)
    )
  )

And voila, goal acquired.

sqlplus usera@xesource
Never specify the password on the command line. Not only will it be shown, it will (most probably) also be sent unencrypted over SQL*Net.

Oracle on OpenVMS – revival

Can it be true?

Will there be Oracle on OpenVMS again? Meaning the “regular” (sorry) Oracle (12c) RDBMS on a revived VMS?

As many who have ever lived on OpenVMS have always known:

OpenVMS will never die!

OpenVMS can never die, because it is still running way too many hidden, hyper-mission-critical environments.
The fact that these environments are hidden, combined with the fact that nobody ever spent any marketing budget on OpenVMS at all, created a super solution nobody knows about. And you cannot love what you don't know.

A lot has been happening around this tormented operating system. OpenVMS is inextricably bound to Digital Equipment Corporation (DEC), which was acquired by Compaq in June of 1998 and then merged into Hewlett-Packard in May 2002.

Personally, I lost access to OpenVMS, and to Oracle on OpenVMS, around 1995, when these systems were replaced by HP Unix machines. I never fully recovered 😉

That lasted until a few years ago, when I was introduced to an Alpha emulator, which creates a virtual machine with (obviously) an Alpha processor, allowing you to run OpenVMS. This was one step closer (back) to Oracle on OpenVMS.

Recently (like the day before yesterday, recently) I learned a number of new things! One is that the ongoing development of OpenVMS will be taken up by VMS Software Inc. (VSI).
But, more importantly, they will be creating new versions for mainstream hardware (such as x86)!! Wilm Boerhout of VX Company wrote an announcement about this not too long ago (article in Dutch)!

And now these rumors…

A port of Oracle to OpenVMS!

Will we once again see the day that systems just won't go down? Oracle environments with an uptime with a dozen or two ‘nines’ behind the decimal mark? Wouldn't that be something?
Your own VMS server running an Oracle database with Oracle Application Express (APEX)? Wouldn’t that be something else? High time to clear some of your calendar and get (re)acquainted with this super OS!

A very special “thank you” goes to my dear friend Gerrit Woertman, OpenVMS Ambassador, who never ceased to remain a link to the VMS world!!

If you are from The Netherlands, please also join Interexperience, to stay close to the game.

Live free or die

Can you boost your Oracle database performance on HP-UX for free?

Database performance, as is true with all performance related matters, has to do with resources.

This story specifically focuses on a real-life experience with Oracle database performance on HP-UX running on Intel Itanium CPUs in HP Integrity servers, like these:

[Screenshot: CPU info]

The issue with this installation is hyper-threading, a.k.a. the use of the logical processors.
When the server is booted and running, you can do a basic performance review with a default tool like top.

[Screenshot: top output with lcpu_attr=0, showing CPUs 0 and 2]

In this exact case the server is running fine and there is no need to investigate further. But, in cases where there are performance issues, it would be a good idea to be aware of the numbering of the CPUs in this overview (0, 2). This numbering suggests there would also be a ‘1’ and, where there would be a ‘1’, there would probably also be a ‘3’… So, is there a way to activate these hidden logical processors?

Yes, there is, and it is called ‘lcpu_attr’: an HP-UX kernel parameter which is, to my taste, a bit odd, not well known and not well documented…

lcpu_attr (Tunable Kernel Parameter)

When turned on, lcpu_attr activates the logical CPUs immediately. When you run top again, this is what it will look like (immediately):

[Screenshot: top output with lcpu_attr=1, with the logical CPUs active]

Okay! Great… but… there are some catches.
This parameter lcpu_attr is a dynamically tunable kernel parameter, but… changing it will crash your running databases. So you will need a minimum of planned downtime for this action.

Also, you can switch hyper-threading on in the EFI boot loader, but then you should be aware of this!

In the end, in this real-life story, we helped the situation advance by just doing the following (a minimal sketch follows below):

1. stopping the Oracle database(s)
2. kctune lcpu_attr=1
3. starting the Oracle database(s)
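
Concretely, the sequence could look like this (the database stop/start commands are illustrative; use whatever procedure your environment prescribes):

# check the current value of the parameter
kctune lcpu_attr

# stop the database(s) first: flipping the parameter under a live
# instance will crash it
sqlplus / as sysdba <<EOF
shutdown immediate
EOF

# enable the logical CPUs; dynamically tunable, so no reboot required
kctune lcpu_attr=1

# start the database(s) again
sqlplus / as sysdba <<EOF
startup
EOF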

All in all, it need not be difficult to boost your Oracle database performance on HP-UX for free!

Thanks to my good friend Gerard van der Kooij for finding the final link!