Category Archives: Oracle tips

Username & password required at WebLogic domain startup


When installing a new WebLogic domain for an Oracle (Fusion) Middleware application, or for any other implementation requiring a WebLogic domain, such as ORDS, a new ‘home’ is created under [MW-home]/user_domains/. [MW-home] translates, for instance, to /u01/oracle/product/Middleware.

To start your brand-new domain, or rather, to automate its startup, you would use the supplied [MW-home]/user_domains/[DomainName]/startWebLogic.sh command file.
This file starts the WebLogic domain (the Admin Server) and the deployed components. After this start, you will be able to follow through with the administration over the web console. Typically its URL is: http://[ServerName]:[PortNumber]/console.

One nasty thing you can run into is that starting the server can require you to enter a username and password during the run of [MW-home]/user_domains/[DomainName]/startWebLogic.sh. Of course this is rather annoying, because it requires interaction, which is no good for an automated start. Regular input tooling you could wrap around this command file, for example input redirection, would require you to save your username / password combination in plain text. That is certainly never a good idea!!

Luckily there is a trick to enable your WebLogic domain to start without this interaction, one that also makes sure the username & password are not stored in plain text. It is actually quite easy to put this facility in place.

This is how:

Go to [MW-home]/user_domains/[DomainName]/servers/AdminServer/security (create the security directory if it does not yet exist) and create a plain text file called boot.properties.

This file gets two lines:
username=[your WebLogic username]
password=[your WebLogic password]

Basically, this is now a plain-text recording of the username and password on the server, which seems quite scary.
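As a minimal sketch (the domain name and credentials here are examples, of course), creating the file could look like this, immediately restricting its permissions:

cd /u01/oracle/product/Middleware/user_domains/mydomain/servers/AdminServer
mkdir -p security
# write the two required lines; replace with your own credentials
cat > security/boot.properties <<EOF
username=weblogic
password=MySecretPassword
EOF
# make the file readable only by the WebLogic OS user
chmod 600 security/boot.properties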

The good thing, though, is that once you have successfully run the [MW-home]/user_domains/[DomainName]/startWebLogic.sh command file, which will now run through without prompting, the username and password in the file are encrypted:

#Thu Mar 10 14:11:38 UTC 2016
password={AES}JoMm+ymJUvbcQld84ofjSR5KhwFVP7mCgTpYBtTS7TA\=
username={AES}vY8NlWXCh156j/uAIpyFY4MVxPt8cdAbUpaTku+sJsU\=

You will now be able to call [MW-home]/user_domains/[DomainName]/startWebLogic.sh from your startup script without having to worry about entering the username / password interactively, and without having to worry about plain-text storage of these two artifacts.
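For example, a line in your boot script could now look like this (paths, again, are just an illustration):

# start the domain unattended, capturing output in a log file
nohup /u01/oracle/product/Middleware/user_domains/mydomain/startWebLogic.sh \
  > /u01/oracle/product/Middleware/user_domains/mydomain/startup.log 2>&1 &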

Hope this helps!


The importance of meetup.com

The Oracle community convenes at the various events, SIG meetings and gatherings that are organized by the national Oracle User Group organizations. This is, by my reckoning, one of the most important parts of the power of the Oracle user community.

During these events, local Oracle stars are joined by the travelling Rock Stars of the Oracle Tech community, and together they share knowledge and experience to teach and learn about the tricks of the trade. As said many times before, by people much greater than me, this is truly a unique and powerful way to nurture and grow the combined knowledge about Oracle products and the best ways to use them. My favorite quote remains that of Monty Latiolais: “This truly is a celebration of Tech!” He was obviously referring to the yearly Kscope happening, but it easily translates to many of the Oracle events around the globe.

For quite a while now, the phenomenon meetup.com has been emerging. It is an online place where people can initiate meetups of like-minded people, ranging from small initiatives with just a few people in a café to bigger, perhaps more commercially colored, happenings. Whatever the subject or idea, from travel to innovation and from hobby to profession, you can find a meetup to suit your needs.

Despite the richness and breadth of the activities of the Oracle User Group organizations worldwide, I have recently seen more and more activity of Oracle aficionados on meetup.com.
As far as I recall, the ever-vibrant APEX community started organizing these kinds of events under the flag of APEX meetups, using this platform, gathering to share best practices and experiences about APEX and all the various bits and pieces that adhere to this technology.
A look at the list of APEX meetups on the platform shows quite how many there already are… The adoption of meetup.com by the Oracle community is growing rapidly.

Is this a bad thing for the ‘regular’ Oracle user community?

I think not.

From my experience participating both in “regular” Oracle user group events and in Oracle-related meetups, I think they have a complementary function.
The traditional user group events are usually more “speaker – audience” oriented, which is a very good format for educating and teaching, and one that is indispensable because it enables a larger group of people to gain knowledge and understanding quickly and effectively.
The meetups have a, let’s call it, more informal character, one where the exchange of information and knowledge is more of a group event. And, let’s face it, the social aspect of meetups is also a little more in the foreground, which in itself is a good thing too.
The need for this kind of contact was already apparent in the form of SIGs, and the emergence of the round-table phenomenon at traditional events has made it even more obvious.

In conclusion I would like to state: let’s embrace meetups. Go find or organize a meetup – preferably about our much-loved Oracle technology – in your neighborhood. Find and inspire people, share, learn and laugh! It is worth your time, I can tell you from experience.

DBA_FEATURE_USAGE_STATISTICS and SE2

This blog post was inspired by work I have been doing on Standard Edition databases and the recurring confusion about what is and what is not part of Standard Edition.

DBA_FEATURE_USAGE_STATISTICS is a helpful tool in determining license usage for the Oracle database. It is good to understand the implications of each entry, to know what is happening in your database, and thus to be able to have a substantial conversation about the usage of your license, be it SE, SE1, SE2 or EE!
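To see what your own database reports, a query along these lines (run with DBA privileges; the column selection is just a suggestion) is a good starting point:

-- show every feature the database has ever registered usage for
select name, version, detected_usages, currently_used, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;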

The list below is the full list of features in DBA_FEATURE_USAGE_STATISTICS, and I have found no source that maps these features to database editions. As it is a lot of tedious work, I call upon the community to help complete the list and make it as accurate as can be. So, if you have news, improvements or other bits of information, please send it to me and I will make sure it gets added!

WARNING: Still… with all the work that goes into these answers, this is not the law; it is a serious interpretation of facts which will play a part in helping you make the right decision when it comes to database licensing.

Feature – available in Standard Edition? (blank = still to be determined)
Active Data Guard – Real-Time Query on Physical Standby NO !
ADDM NO !
Advanced Replication NO !
Application Express YES
ASO native encryption and checksumming NO – EE option !
Audit Options NO !
Automatic Maintenance – Optimizer Statistics Gathering YES
Automatic Maintenance – Space Advisor YES
Automatic Maintenance – SQL Tuning Advisor NO !
Automatic Memory Tuning
Automatic Segment Space Management (system) YES
Automatic Segment Space Management (user)
Automatic SGA Tuning YES
Automatic SQL Execution Memory YES
Automatic SQL Tuning Advisor NO !
Automatic Storage Management
Automatic Undo Management
Automatic Workload Repository
AWR Baseline NO !
AWR Baseline Template NO !
AWR Report NO !
Backup BASIC Compression
Backup BZIP2 Compression
Backup Encryption
Backup HIGH Compression
Backup LOW Compression
Backup MEDIUM Compression
Backup Rollforward
Backup ZLIB Compression
Baseline Adaptive Thresholds
Baseline Static Computations
Bigfile Tablespace
Block Media Recovery NO !
Change Data Capture NO !
Change-Aware Incremental Backup
Character Semantics
Character Set
Client Identifier
Clusterwide Global Transactions
Compression Advisor
Crossedition Triggers
CSSCAN
Data Guard NO !
Data Mining NO – EE option !
Data Recovery Advisor
Database Migration Assistant for Unicode
Database Replay: Workload Capture NO ! 1
Database Replay: Workload Replay NO ! 1
DBMS_STATS Incremental Maintenance
Deferred Open Read Only
Deferred Segment Creation NO !
Direct NFS
Dynamic SGA
Editioning Views
Editions
EM Database Control
EM Grid Control
EM Performance Page
Encrypted Tablespaces
Exadata
Extensibility
File Mapping
Flashback Data Archive NO ! 2
Flashback Database NO !
GoldenGate NO – EE option ! 3
HeapCompression
Hybrid Columnar Compression NO !
Instance Caging NO !
Internode Parallel Execution
Job Scheduler
Label Security NO – EE option !
LOB
Locally Managed Tablespaces (system) YES
Locally Managed Tablespaces (user)
Locator YES
Logfile Multiplexing
Long-term Archival Backup
Materialized Views (User) NO !
Messaging Gateway NO !
MTTR Advisor
Multi Section Backup
Multiple Block Sizes
Object
OLAP – Analytic Workspaces NO – EE option !
OLAP – Cubes NO – EE option !
Oracle Database Vault NO – EE option !
Oracle Java Virtual Machine (system) YES
Oracle Java Virtual Machine (user)
Oracle Managed Files
Oracle Multimedia
Oracle Multimedia DICOM
Oracle Secure Backup
Oracle Text
Oracle Utility Datapump (Export)
Oracle Utility Datapump (Import)
Oracle Utility External Table
Oracle Utility Metadata API
Oracle Utility SQL Loader (Direct Path Load)
Parallel SQL DDL Execution NO !
Parallel SQL DML Execution NO !
Parallel SQL Query Execution NO !
Partitioning (system) YES
Partitioning (user) NO – EE option !
PL/SQL Native Compilation
Quality of Service Management NO !
Read Only Tablespace
Real Application Clusters (RAC) YES 4
Real-Time SQL Monitoring
Recovery Area
Recovery Manager (RMAN) YES
Resource Manager NO !
Restore Point
Result Cache NO !
RMAN – Disk Backup
RMAN – Tape Backup
Rules Manager
SecureFile Compression (system) YES
SecureFile Compression (user)
SecureFile Deduplication (system) YES
SecureFile Deduplication (user)
SecureFile Encryption (system) YES
SecureFile Encryption (user)
SecureFiles (system) YES
SecureFiles (user)
Segment Advisor (user)
Segment Shrink
Semantics/RDF NO !
Server Flash Cache
Server Parameter File
Services
Shared Server
Spatial NO – EE option !
SQL Access Advisor
SQL Monitoring and Tuning pages NO – EE option !
SQL Performance Analyzer NO !
SQL Plan Management NO !
SQL Profile
SQL Repair Advisor
SQL Tuning Advisor
SQL Tuning Set (system) YES
SQL Tuning Set (user)
SQL Workload Manager
Streams (system) YES 5
Streams (user)
Transparent Data Encryption
Transparent Gateway YES – option
Transportable Tablespace NO ! 6
Tune MView
Undo Advisor
Very Large Memory
Virtual Private Database (VPD) NO !  7
Workspace Manager
  1. Unless used for upgrade to Enterprise Edition.
  2. Unless used without history table optimization.
  3. GoldenGate can also be used with Standard Edition; it is a separate product.
  4. RAC on Enterprise Edition is an option.
  5. No capture from redo.
  6. Import transportable tablespaces in all editions.
  7. Policies on XDB$ACL$xd_sp in sys.v_$vpd_policy are internal (“out of the box”) policies that are used by XDB to control access to certain internal tables. All the logic is implemented in the xdb.DBMS_XDBZ package and there is no way one can control or influence the way this works.

dbms_redefinition housekeeping

dbms_redefinition is a nifty but powerful little toolkit that lets you change table definitions without locking the table in a manner that would interrupt regular operations.

You can read loads about it in the Oracle documentation or in the extensive library of Mr. Tim Hall.

One thing I noticed, and want to share here, has to do with the housekeeping that dbms_redefinition does automatically, or rather with some of the bits it did not brush up after itself.

dbms_redefinition works using triggers and materialized views to help you switch from your current active production table, via a so-called interim table, to your shiny new, redefined production table. You can follow this beautifully by querying the dba_segments view along the way.
For this it obviously creates the materialized view and the other required components, and it removes them after you finish your redefinition trip. After all that is done, you can just drop your interim table and be done with it (see the sketch below).
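For reference, a minimal sketch of such a redefinition run; schema SCOTT, table EMP and interim table INT_EMP are placeholders, and the interim table must already exist with the new definition:

declare
  l_errors pls_integer;
begin
  -- raises an error if the table cannot be redefined online
  dbms_redefinition.can_redef_table('SCOTT', 'EMP');
  dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'INT_EMP');
  -- copies indexes, triggers, grants and also the referential constraints
  dbms_redefinition.copy_table_dependents('SCOTT', 'EMP', 'INT_EMP',
                                          num_errors => l_errors);
  dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'INT_EMP');
end;
/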

At least, that is what happens in most cases, and it is what you would expect!

Though, in some cases… it proved impossible to drop the interim table. To me this was somewhat scary… did the redefinition not finish, or did it not finish correctly?

What happened?

There was this table that I redefined. It had referential integrity constraints (aka foreign key constraints) pointing towards it. Of course dbms_redefinition neatly created versions of these on the interim table to be sure nothing went wrong.

When finishing redefinition (with dbms_redefinition.finish_redef_table) most of the interim bits and pieces are cleared away and you just have to drop your interim table manually (okay, we can discuss if this actually would / could / should be automated, but let’s leave that).

But… when you then manually drop this interim table (on a busy production system, I tend to want to be careful) and just issue ‘drop table int_<tablename>‘, that does not work. dbms_redefinition “forgets” to remove these referential integrity constraints on the other tables (which are neatly named tmp$$_<constraintname>).
This means you either issue ‘drop table int_<tablename> cascade constraints‘, which is more than the basic ‘drop table‘, or find these constraints and remove them manually first:

-- generate the statements that drop the leftover tmp$$_ foreign keys
-- still pointing at the interim table
select 'alter table '||owner||'.'||table_name||' drop constraint '||constraint_name||';'
from dba_constraints
where constraint_type = 'R'
and r_constraint_name in
(
select constraint_name
from dba_constraints
where table_name = 'INT_<tablename>'
);
alter table <schema>.<foreign table> drop constraint TMP$$_<constraint name>;

I guess, personally, I would like dbms_redefinition to do this for me…

It’s smart enough! It created them!

Just a quick additional note: setting ddl_lock_timeout to 30 or 60 for your session can actually help prevent a lot of nonsense on a busy system.
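In SQL that is a one-liner:

alter session set ddl_lock_timeout = 60;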

Hope this helps someone sometime 😉

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
It means that the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this was that we needed to start converting redo log information to PLOG information while we were still setting up the target environment. The setup (exporting source, copying dump to target and importing) was taking quite a bit of time, which would impact redo log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested online and archived redo log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I say we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total, redoing the setup would cost 57:05 hrs., time we could instead spend trying to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in the setup wizard when setting up Replicate.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on the source and target server!
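A sketch of what such entries could look like (host names are placeholders for this cst example):

repl-source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )
repl-target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )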

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE was actually running, and it was from this process that we concluded that we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again and create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc.

By comparing the two files, I was able to note the differences and therewith conclude the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that port numbers are correct and that all the individual steps are taken in the right order. The steps below give the overview.

With these changes in hand, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC information stored in the Replicate repository. You can enter these manually or execute your change file by running @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh in this directory.
Rename the cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in the cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have since verified and accepted this solution as a supported action.

Hope this helps.

Oracle Standard Edition 2, a bright new future

Okay, it is not very much more than smoke, since Ludovico Caldara found MOS note 2027072.1 about the support of Standard Edition 12.1.0.2.0 and blogged and tweeted about it.

Despite Ludovico’s disclaimer there is, nevertheless, some smoke… And Twitter quite quickly filled up (at least the corners where I take an interest). Dominic Giles stated: “More to come soon!” and Ann Sjökvist urged calmness by saying: “let’s wait for facts!” And of course she is right.

Why then this blogpost?

As one of the founders of the Oracle Standard Edition Round Table (#orclSERT), I take a keen interest in this. Standard Edition One comprises the most cost-effective software stack around. For a deeper dive on that statement, please visit an orclSERT session at an Oracle user group event near you, or drop me a line.
Obviously, news in this category has long been expected, as the number of cores per socket kept increasing. We have been eagerly awaiting this for a few years actually, hesitant to speak about it… For obvious reasons 😉

Where there is smoke, there is fire, especially when PMs speak. And as the note states that the release of SE2 is foreseen for Q3 of 2015 (which coincidentally is THIS quarter), I would like to prepare myself…

This is what we have to go by for now:

Beginning with the release of Oracle Database 12.1.0.2 Standard Edition 2 (SE2), Standard Edition (SE) and Standard Edition One (SE1) are replaced by Standard Edition 2. SE2 will run on systems with up to 2 sockets and will have the ability to support a two node RAC cluster. 12.1.0.1 was the final edition that we will produce for SE and SE1. Customers running SE or SE1 will need to migrate their licenses to SE2 to be able to upgrade to 12.1.0.2.

1. SE2 will run on systems with up to 2 sockets.
– This is not different from the current SE1 rule;
– This means that a 4 socket SE installation will have to be revised, either migrate up to EE or revamp to 2 sockets;

2. SE2 will have the ability to run a 2 node cluster.
– RAC will become available across the board in the Standard Edition realm.
– How many sockets will a full SE2 cluster be able to support? 2 sockets if you follow the current rules, 4 sockets if the licensing works out optimally!

And it is always good to speculate about price… And mind you, this is smoke! The best educated guess so far is 3/4 between SE1 and SE, which could (hopefully) bring the price to roughly 10k euro per socket, but… who knows? Perhaps, as Ludovico also stated, socket licensing could become history altogether?

The great news is, Oracle Standard Edition will remain available as the alternative to the Enterprise Edition installations. For more information we will just have to hold our breath a little longer.
But, be assured, during the next session of #orclSERT we will be able to tell you (much) more!

Meanwhile I will keep preparing (and talking about it), for this change will have some impact yet…

Using a terminal emulator on Mac

Dumb title for a blog post? No, not really I guess…

I have been using a terminal emulator basically ever since I got away from the VT100 terminal:

  • ICE.TCP Pro
  • KEAVT
  • Reflection ‘X’

And a few other obscure applications that I cannot even recall anymore.
Currently, and over the last 6 to 8 years, I have been using ZOC.

The background of this story: in the beginning these were the first DOS PCs, and later some Windows-based machines, that needed to interface with (in my case) VAX/VMS and later with the other UNIXes.

But why use a terminal emulator on a Mac, for crying out loud? I hear you think… OS X is a Unix, so it should all be native, right?
Wrong! Well, kind of…

There are so many small (and bigger) differences between the various systems that it pays off to have a program that allows you to tune into these differences. Nothing is more annoying than a backspace key that does not work or key combinations that act differently than you would expect.
This is even more true when you work with a mix of operating systems: Solaris, HP-UX, Oracle Linux, perhaps even some IBM OSes.
And then there is the further tunability of your toolkit, ranging from colors to sizes and from fonts to layout. Frivolities? Perhaps, but if you spend a lot of your time every day in such a tool, it does make a difference.

More important are configurable logging, for documentation and troubleshooting; you can regard this as the modern variation of the old-school printer terminal (who remembers those?). Add to that configurable transfer types, modem and communication settings, and keeping all of these organized, as well as password storage and administrative support.

Well, basically, this is why I use a terminal emulator on my Mac!
And I think I found a valid tool in ZOC, by Markus Schmidt. Please do check it out.

Well, I hope you get to enjoy your terminal work as much as I do!

Updating SQL Developer to use newer Java version

I was being teased by SQL Developer.

Every time I started it, it came nagging that it was being forced to live in an old Java version called jdk1.7.0_45 and that it was not feeling happy about it.
So, I should remedy this, I thought to myself.

My first visit, inspired by some search work on the WWW, was to a file called product.conf, which offered two possibilities:


SetJavaHome to some logical location
or
SetJavaHome to nothing, and then SQL Developer would kindly ask me to point it to somewhere to live.
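In product.conf such an entry would look something like this (the JDK path is just an example; point it at a JDK that is actually installed on your machine):

SetJavaHome /Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home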

Well… no. My SQL Developer refused it all and just started with this jdk 1.7.

Same hack done in another file on another location, a file called sqldeveloper.conf.
Same result.

Freshly downloaded SQL Developer, put in place… No help!

Erm…

Rename
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 jdk1.7.0_45.jdk
in /Library/Java/JavaVirtualMachines
to
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 xxx-jdk1.7.0_45.jdk

Nope! Still the same nagging…

What now?

In the end, I wound up with one of Jeff Smith’s helpers.
This guy asked me to “start SQL Developer from the command line”. Right, but how?

So I finally found:
/Applications/SQLDeveloper.app/Contents/MacOS/sqldeveloper.sh

And that did start SQL Developer from the command-line…

But… wait… an .sh-file!! Interesting!!

And, behold… in this .sh-file lies the answer.

So the file reads:
export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
Which I hacked to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`

And, presto, error-message gone and SQL Developer now happily lives in Java 8.

Hope this helps somebody out!!

Setting up SQL Instant Client on MAC

In doing more work directly from my MacBook Air, I ran into a situation where native connectivity to an Oracle environment was needed.
From experience I have always been a big fan of the Full Oracle client, just because it comes with a lot of tools and utilities for troubleshooting, which makes the actual experience a bit more pleasant.
Looking & asking around, though, I learned fairly quickly that this client is just not available for Mac OS X… Thanks to Osama Mustafa for confirming.

So, that is a fact, even though quite a number of IT pros work on a Mac!

This leaves no other choice than to divert to the Oracle Instant Client 11, which then, indeed, is just an 11g Instant Client (11.2.0.4)!
It would humor me if Oracle were to bring out a 12c full client for Mac, as well as an instant client, if someone would so desire.
To have some more tooling around the client, I downloaded all the packages, including at least SQL*Plus.

Though the install process is relatively straightforward (download the archives and unzip them in place), getting SQL*Plus to actually run is a somewhat different ballgame!
As usual, when you start a tool like this, you are bombarded with messages about dynamic libraries that cannot be found. This set me (very briefly) on a path to place these files where they were expected on my Mac.

In a place like:

/ade/b/2649109290/oracle/sqlplus/lib

for instance, you would need to place a number of these libraries.
This leaves you with the option of populating your system with all these specific libraries, which is of course just fine, but not my choice (think of the mess when you ever have to clean up), and especially not when it can be avoided.

A quick search pointed me to this excellent blogpost by Casey Lucas about this exact same issue. With a tool called ‘otool’* applied as suggested, I am now able to run SQL*Plus natively on my Mac without error messages.

* otool – object file displaying tool
If you do not have it yet, calling it from the command line will prompt the installation of this and other development tools on your Mac.
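For example, to list the dynamic libraries sqlplus expects to find:

cd /Applications/instantclient_11_2
otool -L sqlplus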

That is nice, but it is only just over halfway there.


Now I want something where I can just run:
sqlplus <username>@<database>
without intricate connect-strings.

This leaves one minor “hack”, or rather “edit”, required: your .bash_profile needs a bit of a path addition and an environment setting:
alias ll="ls -l"
export TNS_ADMIN=/Applications/instantclient_11_2
export PATH=/Applications/instantclient_11_2:$PATH

Note: the alias was already in there 😉

To top it off, I created a small tnsnames.ora in the directory with the Instant Client (keeping all related files neatly tucked away together):

xesource =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = 192.168.56.66)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = xe)
    )
  )

And voila, goal acquired.

sqlplus usera@xesource
Never specify the password on the command line. Not only will it be shown, it will also (most probably) be sent unencrypted over SQL*Net.

OUGN15, The “boat conference” revisited

Reflections on OUGN

Sometimes things in life can change quickly! It is only two years ago that I came to Oslo for the first time to join the Scandinavian Oracle crew on a boat trip to Kiel.
At that time I had never actually participated in this kind of experience and I wasn’t into presenting either. Together with my good friend Philippe Fierens I discovered a whole new world back then. You could have read about these experiences in some blogpost, but this was lost in the move to my own site, sorry!

This trip couldn’t have been more different, though! With three presentations accepted, the two days at sea will be a reunion with the friends I made over the last years, as well as a way to contribute to one of the most tight-knit tech communities I know. And all this in a scene that I remember vividly from being a newbie… which is somewhat strange, believe me.

After a quick and pleasant flight I touched down in Oslo, flying from Amsterdam with a decent-sized crew of Dutch Oracle enthusiasts, including my good friends Patrick Barel and Alex Nuijten. Waiting in the Oslo airport for Luís Marques, I caught up with Gurcan Orhan, which was a great surprise.
Later that day we found ourselves in the Oslo harbor for the speakers dinner. You can imagine the collective amount of Oracle knowledge packed into that one restaurant!

Enkitec’s Frits Hoogland on Ansible

After a somewhat restless night we arrived, on Thursday morning, at the ship Color Fantasy in the company of Heli Helskyaho, just in time for the keynotes. It was good to see that Mark Rittman and James Morle made it on board too, especially as James was up for the delivery of version 2.0 of his vibrant keynote! Next we proceeded to bring our luggage to our cabins and grab a spot of lunch on the exhibition floor down in the belly of the ship. The setup of the exhibition was quite nice and gave a good opportunity to mix and mingle.
The afternoon was spent on sessions, where I visited Frits Hoogland’s Ansible talk, and on preparation for my own session at 18:00. This was the last run of this APEX presentation, as I have retired it after OUGN15. The slides will be archived here.
After finalizing the preparation for the third edition of the Standard Edition Round Table (aka “slide polishing”) with the #orclSERT team, comprising Ann Sjökvist, Philippe and myself, it was time for the soiree and for dinner in the grand restaurant on board. It had been a good first day!

Dinner with the international crew on board the Color Fantasy.
Warm reception at Kiel port.

The second day of OUGN15 started with a multitude of sessions including the third edition of the Oracle Standard Edition Round Table, which was actually quite busy and interactive. We had some good discussions, and that at 09:00, so thank you, everybody.
Of course, as was declared a tradition, Björn Rost was present in the Kiel harbor. With the famous “Basil smash Gin & tonic” and sandwiches we were welcomed on German soil.
My afternoon comprised three sessions, starting with my own, called “Okay, and now my database server crashed…”, which was quite nicely received. Next came Alex Nuijten on 12c new features for developers, topped off with Tim Gorman, who taught us to be CSI people in finding issues in the database.
After an enjoyable evening in the various bars and discotheques of the ship, we closed the official part of the Oracle User Group Norway Vårseminar 2015, thanking the board, and of course especially Øyvind Isene, for their hard work.

If you want to catch up further on the unconference communications surrounding this event, please do check out the Twitter hashtag #OUGN15. It also includes a great set of snapshots and pictures taken along the way…

Oslo, until the next time!