All posts by Johnnyq72

About Johnnyq72

Jan has been working with Oracle since the early nineties. As an administrator, consultant and solution architect he has contributed to the ongoing development of information systems, mainly for healthcare purposes. As a European business developer, Jan recently took on the challenge of establishing a firm basis for Dbvisit Corp. Combined with this effort, his passion for the Oracle Standard Edition database helps carry the message about the necessity of high-quality IT solutions.

The importance of meetup.com


The Oracle community convenes at the various events, SIG meetings and gatherings that are organized by the national Oracle User Group organizations. This is, by my reckoning, one of the most important parts of the power of the Oracle user community.

During these events, local Oracle stars are joined by the travelling rock stars of the Oracle tech community, and together they share knowledge and experience to teach and learn about the tricks of the trade. As has been said many times before, by people much greater than me, this is truly a unique and powerful way to nurture and grow the combined knowledge about Oracle products and the best ways to use them. My favorite quote remains that of Monty Latiolais: “This truly is a celebration of tech!” He was obviously referring to the yearly KScope happening, but it easily translates to many of the Oracle events around the globe.

For quite a while now, the phenomenon of meetup.com has been emerging. It is an online place where people can initiate meet-ups of like-minded people, ranging from a small initiative with just a few people in a café up to bigger, perhaps more commercially colored, happenings. Whatever the subject or idea, from travel to innovation and from hobby to profession, you can find a meet-up to suit your needs.

Despite the richness and breadth of the activities of the Oracle User Group organizations worldwide, I have recently seen more and more activity by Oracle aficionados on meetup.com.
As far as I recall, the ever-vibrant APEX community started organizing these kinds of events under the flag of APEX meet-ups, using this platform. They gather to share best practices and experiences about APEX and all the various bits and pieces that adhere to this technology.
A quick search on the platform turns up quite a big list of Oracle-related meet-ups… The adoption of meetup.com by the Oracle community is growing rapidly.

Is this a bad thing for the ‘regular’ Oracle user community?

I think not.

From my experience participating both in “regular” Oracle user group events and in Oracle-related meet-ups, I think they have a complementary function.
The traditional user group events are usually more “speaker – audience” oriented, which is a very good format for educating and teaching. A format that is indispensable because it enables a larger group of people to gain knowledge and understanding quickly and effectively.
The meet-ups have a, let’s call it, more informal character, one where the interchange of information and knowledge is more of a group event. And, let’s face it, the social aspect of meet-ups is also a little more in the foreground, which in itself is a good thing too.
The need for this kind of contact was already there in the form of SIGs. During the traditional events, with the emergence of the round-table phenomenon, this has become even more obvious.

In conclusion I would like to state: let’s embrace meet-ups. Go find or organize a meet-up – preferably about our much-loved Oracle technology – in your neighborhood. Find and inspire people, share, learn and laugh! It is worth your time, I can tell you from experience.


Why GUI sucks…

Of course we all know GUI stands for Graphical User Interface, just as CLI stands for Command Line Interface, right?
Or, rather, a GUI is this nice, flashy screen where you can easily roam with your mouse, comparable to a multiple choice quiz, where the right answer is there for the picking.
A CLI on the other hand is this dark, mysterious blinking cursor… Nothing happens unless you know more or less what you are doing. Comparable to an open questions quiz.

Sparked by a recent Twitter discussion, I decided I should probably write the umpteenth blog post about this to make my contribution to this lasting dispute.

Disclaimer:
This post discusses GUI in relation to system administration, not necessarily in relation to data-entry or data manipulation applications that are used in front offices all over the world. I guess CLI has no place in a world like that…


Why GUI sucks?
I have done my fair share of installing, scripting, ad-hoc fiddling, testing and trying. And, I have found myself in the situation where I worked with younger computer geeks or even in situations where nobody had the time to figure anything out – stuff just had to be made to work.

Probably, in the few lines above, we already have the basics for this discussion!

But, why then does GUI suck?

GUIs suck because they are limiting, labor (or rather RSI) intensive and require you, the operator, to be there, physically clicking away at your computer.

Limiting
They are limiting, or at least most of the time they are, because it is often quite hard to create a visual representation of each and every function of a device, program or system. Consider, for instance, a networking device, and then try to imagine having to create a GUI that lets the operator configure and define each and every parameter of a specific VLAN or VPN. And then also bear in mind that the GUI has to stay crisp, clean and intuitive.
For this reason, I have seen many vendors create a GUI for basic setup only, relying on the professionals to find their way in the CLI. The GUI can then stay intuitive enough to at least get the basics done.

GUIs that aim not to be limiting, of which there are a few out there too, need to sacrifice a lot of the things that a good GUI should stand for:

  • Short click paths (3 clicks from anywhere to get where you want to be)
  • Intuitive (you don’t have to guess or read a manual to use a GUI)

So what you end up with, then, is a maze of riddles, where you can easily spend a good day setting up some new functionality. Somehow I believe this is not what the designers set out for, nor is it a valid solution for most tasks at hand.

Labor intensive
I personally find GUIs quite labor intensive, and not just for the absence of the ability to automate tasks. Especially if there is a lot of specific configuration to be done, you often end up left- and right-clicking until your hands start hurting.
And, in the end, you always end up with the eerie feeling that you missed out on that one specific setting that would really put the icing on your configuration.

Operator presence
Last, but not least… For a GUI to work, you need to be at your workstation. Period.
Anybody who has ever worked on automated testing of applications that rely on a GUI knows about the hideous crime of having to script test cases, working with hidden button labels, screen coordinates, etc. These scripts fail every other day because a developer moved a window to a better spot or used a new button label. You end up coding your application just to make it testable.
No, GUI requires operator presence, making it useless for automation or scaling.

The bliss of the CLI
Okay, the Middle Ages… or the Stone Age…
Nothing really fancy, just a black (or, if you are feeling frivolous, you may choose some nice color) square on your screen with a blinking underscore – most often. And then you say GUI sucks?

One of the challenges in this hyper-fast-moving world, full of smartphones, tablet PCs and what have you, loaded with intuitive and fast apps, is to realize that “hard-core IT” actually is hard core.
You need to learn your stuff first, know what you do and know about the consequences of the choices you make. You will have to learn to walk the walk and talk the talk. Once you have mastered that, the blinking underscore is no longer a roadblock but an invitation! Just as after mastering a foreign language, you will know what to say and do to open up the potential at your fingertips.

And now, reality
Of course, the above rant is just one side of the story.
It is even just one side of the story in hard core IT!

As already stated above, sometimes there is no time to really dive into stuff and get to know the tools you need to put to work for you. I am pretty sure we have all been in a place where we needed to get a project done or some functionality realized, when we just did not have the right devices.

What are your options at such a moment?
Get a hardcore IT specialist who does “talk the talk”?
Probably it will not be cheap, and probably it will be a very thorough configuration, but just not exactly as you need it to be… Though still a valid option, in a number of cases it’s a no-go.
This… is where a good GUI comes in handy.
It will allow you, yourself, to organize that which needs organizing in an orderly fashion. Okay, the GUI will have to be accurate and well thought through, but that goes for all interfacing; it is also true for the CLI.

Seeing this story unfold… I guess I still think GUI sucks. (sorry!)
But GUI has a place, a very well-earned place, in a super-fast and highly demanding world. Still, I am convinced that if you are working in a highly professional environment, having to do intricate stuff on live environments, a good script for a CLI is the only way you can create some assurance that whatever change you need to execute will actually have a predictable result.

And putting in the effort of learning how to use any CLI? Well, I guess that’s why it is called “professional IT”.

DBA_FEATURE_USAGE_STATISTICS and SE2

This blog post is inspired by work I have been doing on Standard Edition databases and the recurring confusion about what is and what is not part of Standard Edition.

DBA_FEATURE_USAGE_STATISTICS is a tool for determining license usage of the Oracle database. It is good to understand the implications of each entry, know what is happening in your database and thus be able to have a substantial conversation about the usage of your license, be it SE, SE One, SE2 or EE!
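To see what your own database reports, a simple query against the view is enough. A minimal sketch (the column names are the documented ones for recent versions; filter and ordering are just my suggestion):

```sql
-- Which detected features have been used, and when were they last seen?
select name,
       version,
       detected_usages,
       currently_used,
       last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;
```

Bear in mind that the view is populated by a periodic MMON sample (by default roughly weekly), so very recent feature usage may not show up immediately.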

This is the full list of DBA_FEATURE_USAGE_STATISTICS entries, and I have found no source that maps these features to database editions. As it is a lot of tedious work, I call upon the community to help complete the list and make it as accurate as can be. So, if you have news, improvements or other bits of information, please send it to me and I will make sure it gets added!

WARNING: Still… with all the work that goes into these answers, this is not the law; it is a serious interpretation of facts which will play a part in helping you make the right decision when it comes to database licensing.

Feature – Available in Standard Edition?
Active Data Guard – Real-Time Query on Physical Standby NO !
ADDM NO !
Advanced Replication NO !
Application Express YES
ASO native encryption and checksumming NO – EE option !
Audit Options NO !
Automatic Maintenance – Optimizer Statistics Gathering YES
Automatic Maintenance – Space Advisor YES
Automatic Maintenance – SQL Tuning Advisor NO !
Automatic Memory Tuning
Automatic Segment Space Management (system) YES
Automatic Segment Space Management (user)
Automatic SGA Tuning YES
Automatic SQL Execution Memory YES
Automatic SQL Tuning Advisor NO !
Automatic Storage Management
Automatic Undo Management
Automatic Workload Repository
AWR Baseline NO !
AWR Baseline Template NO !
AWR Report NO !
Backup BASIC Compression
Backup BZIP2 Compression
Backup Encryption
Backup HIGH Compression
Backup LOW Compression
Backup MEDIUM Compression
Backup Rollforward
Backup ZLIB Compression
Baseline Adaptive Thresholds
Baseline Static Computations
Bigfile Tablespace
Block Media Recovery NO !
Change Data Capture NO !
Change-Aware Incremental Backup
Character Semantics
Character Set
Client Identifier
Clusterwide Global Transactions
Compression Advisor
Crossedition Triggers
CSSCAN
Data Guard NO !
Data Mining NO – EE option !
Data Recovery Advisor
Database Migration Assistant for Unicode
Database Replay: Workload Capture NO ! 1
Database Replay: Workload Replay NO ! 1
DBMS_STATS Incremental Maintenance
Deferred Open Read Only
Deferred Segment Creation NO !
Direct NFS
Dynamic SGA
Editioning Views
Editions
EM Database Control
EM Grid Control
EM Performance Page
Encrypted Tablespaces
Exadata
Extensibility
File Mapping
Flashback Data Archive NO ! 2
Flashback Database NO !
GoldenGate NO – EE option ! 3
HeapCompression
Hybrid Columnar Compression NO !
Instance Caging NO !
Internode Parallel Execution
Job Scheduler
Label Security NO – EE option !
LOB
Locally Managed Tablespaces (system) YES
Locally Managed Tablespaces (user)
Locator YES
Logfile Multiplexing
Long-term Archival Backup
Materialized Views (User) NO !
Messaging Gateway NO !
MTTR Advisor
Multi Section Backup
Multiple Block Sizes
Object
OLAP – Analytic Workspaces NO – EE option !
OLAP – Cubes NO – EE option !
Oracle Database Vault NO – EE option !
Oracle Java Virtual Machine (system) YES
Oracle Java Virtual Machine (user)
Oracle Managed Files
Oracle Multimedia
Oracle Multimedia DICOM
Oracle Secure Backup
Oracle Text
Oracle Utility Datapump (Export)
Oracle Utility Datapump (Import)
Oracle Utility External Table
Oracle Utility Metadata API
Oracle Utility SQL Loader (Direct Path Load)
Parallel SQL DDL Execution NO !
Parallel SQL DML Execution NO !
Parallel SQL Query Execution NO !
Partitioning (system) YES
Partitioning (user) NO – EE option !
PL/SQL Native Compilation
Quality of Service Management NO !
Read Only Tablespace
Real Application Clusters (RAC) YES 4
Real-Time SQL Monitoring
Recovery Area
Recovery Manager (RMAN) YES
Resource Manager NO !
Restore Point
Result Cache NO !
RMAN – Disk Backup
RMAN – Tape Backup
Rules Manager
SecureFile Compression (system) YES
SecureFile Compression (user)
SecureFile Deduplication (system) YES
SecureFile Deduplication (user)
SecureFile Encryption (system) YES
SecureFile Encryption (user)
SecureFiles (system) YES
SecureFiles (user)
Segment Advisor (user)
Segment Shrink
Semantics/RDF NO !
Server Flash Cache
Server Parameter File
Services
Shared Server
Spatial NO – EE option !
SQL Access Advisor
SQL Monitoring and Tuning pages NO – EE option !
SQL Performance Analyzer NO !
SQL Plan Management NO !
SQL Profile
SQL Repair Advisor
SQL Tuning Advisor
SQL Tuning Set (system) YES
SQL Tuning Set (user)
SQL Workload Manager
Streams (system) YES 5
Streams (user)
Transparent Data Encryption
Transparent Gateway YES – option
Transportable Tablespace NO ! 6
Tune MView
Undo Advisor
Very Large Memory
Virtual Private Database (VPD) NO !  7
Workspace Manager
  1. Unless used for upgrade to Enterprise Edition.
  2. Unless used without history table optimization.
  3. Goldengate can also be used with Standard Edition, it is a separate product.
  4. RAC on Enterprise Edition is an option.
  5. No capture from redo.
  6. Import transportable tablespaces in all editions.
  7. Policies on XDB$ACL$xd_sp in sys.v_$vpd_policy are internal (“out of the box”) policies that are used by XDB to control access to certain internal tables. All the logic is implemented in the xdb.DBMS_XDBZ package and there is no way one can control or influence the way this works.

@HrOUG_2015 in Rovinj, Croatia

In a hectic year it is good to attend and contribute to Oracle user group sessions. It adds an element of a ‘working holiday’ to one’s schedule. I can promise you, the vacation isle of Rovinj is a perfect venue for this, especially since it is the last week the hotel is open this season.
Of course you can find all information about contributing to these events right here!!

@HrOUG_2015, as the official Twitter account of the conference goes, brings just this!! Content combined with pleasure, ranging from quality sessions by rock-star speakers to relaxation in the pool and late-night partying in “The Castle”.

Currently the biggest worry is rain… at least for the attendees. As always, the (very) hard-working organizers are doing their best to create a super experience for everyone attending the conference, and my personal biggest worry is whether the participants will actually bring their laptops to the hands-on experience. Actually doing logical replication yourself is so much cooler than seeing it demonstrated. It will be an interesting experience anyhow.

This conference also led to another series of Oracle heroes I got to meet in person!

And as always there is really serious stuff going on as well. One of the main challenges or worries today is the development surrounding Oracle Database Standard Edition Two, and the impact it brings for the development of the European market.
Eliminating this database version forces emerging projects to use the Oracle Cloud, as the super-sharply priced project startup version – which we had with Standard Edition One – is no longer available. It also counters Oracle’s own statement, quite recently presented by Andrew Sutherland, about hybrid cloud functionality, since there is no “on-premises” equivalent for a small-scale project anymore!
We are hoping for a good discussion on Friday during the Standard Edition Round Table at HrOUG, co-hosted by Philippe Fierens, as this development is as heartfelt in Croatia as it is in many European countries.

If you want to read more about this year’s event in Croatia, please check out the many tweets and Facebook entries by @helifromfinland, @alexnuijten, @roelhartman (ps. vote for Roel as member of the ODTUG board) and many more!!

Oh, and as far as basic life’s needs go… the Internet on the island is the best ever!!

The rising of the Standard Edition (#orclse2)

It was the second half of 2011 when Standard Edition database protection tooling saw its broader introduction in the Benelux. Dbvisit Standby was the tool, and protecting data in Standard Edition databases was the deal.

I remember the first meetings vividly! The Standard Edition database? Many people had not heard of this edition or, more frighteningly, the ones that knew about it ignored it. Standard Edition was not something to be taken seriously, let alone something to run your production system on.
Still, this time marked the start of the silent (r)evolution and the rising of the Standard Edition.

Since those days many things have changed.
With the continued attention and drive for promoting the Oracle Standard Edition (SE) database, the visibility of this edition has flourished.
Obviously the economic hardship of the last years has encouraged companies to review their IT budgets. The investment-friendly character of SE has helped its growth, especially in such times.
During the second half of 2013 the first broader initiatives around SE started to become visible. One of the highlights of the SE uprising was the world premiere of the Standard Edition Round Table during Harmony 2014 in Helsinki, Finland, organized by Ann Sjökvist, Philippe Fierens and myself, the same people that lead the Standard Edition community today.

With the increased attention, worries also came. The Standard Edition and Standard Edition One editions had no cap on the number of cores per processor. This means that modern servers running SE, equipped with huge numbers of processor cores, bring tremendous processing power at extremely low cost.
Signs of change became visible with the postponed release of Oracle database patch release 12.1.0.2.0 for SE.

And now, in 2015, Standard Edition is a tool to distinguish yourself with. Many IT consultancy firms advertise their SE expertise and have increased visibility in this respect. Many new initiatives have been fired up to help give Standard Edition the punch it needs for the even more serious jobs. News on Standard Edition is spread by a range of blog posts (like this one), and UKOUG_Tech15 is even hosting a Standard Edition track! We have come a long way!!

And finally, with the release of Oracle Standard Edition Two, on the first of September 2015, the future of Oracle Standard Edition has been secured. The release of version 12.1.0.2.0 marks a new era for this Smart Edition.
Standard Edition Two retains many of the important advantages of Standard Edition and Standard Edition One while capping the processor core factor at a very usable level.

Yes, Oracle Standard Edition is a solid product in the Oracle stack and is still capable of helping Oracle offer the most complete software operations stack, especially due to the development and deployment capabilities of APEX.
It is an unbeatable, endlessly scalable and super-affordable solution on the market today.
We have come a long way to witness the rising of the Standard Edition!

And what about the changes?

  • Oracle Database Standard Edition 2 (SE2) will replace SE and SE1 from version 12.1.0.2 onward;
  • SE2 will be limited to systems with a maximum of 2 sockets and a total of 16 CPU threads*;
    • SE2 has Resource Manager hard-coded to use no more than 16 CPU threads, which helps protect against non-compliance.
  • SE One and SE will no longer be available for purchase from November 10th, 2015;
  • Oracle is offering license migration scenarios from SE One and SE to SE2;
    1. SE One users pay a 20% increase in support for the migration.
    2. SE customers face no other cost increases for license or support*.
  • * Named User Plus (NUP) minimums for SE2 are now 10 per server;
  • There are no changes in the rules around the use of virtualization solutions;
  • 12.1.0.1 SE and SE1 customers will have 6 months of patching support once SE2 12.1.0.2 is released, with quarterly patches still available in October 2015 and January 2016.

Hope this helps!

dbms_redefinition housekeeping

dbms_redefinition is actually a nifty but powerful little toolkit that lets you change table definitions without locking the table in a manner that would prevent regular operations from continuing.

You can read loads about it in the Oracle documentation or in the extensive library by Mr. Tim Hall.

One thing I noticed, and which I want to share here, has a lot to do with the housekeeping that is automatically done by dbms_redefinition. Or rather, with some of the bits it doesn’t brush up after itself.

dbms_redefinition works using triggers and materialized views to help you switch from your current active production table, via a so-called interim table, to your shiny new, redefined production table. You can follow this beautifully by querying the dba_segments view along the way.
For this it obviously creates the materialized view and the other required components, and it removes them after you finish your redefinition trip. After all that is done, you can just remove your interim table and be done with it.
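For reference, the happy path looks roughly like this. The procedure names are the real dbms_redefinition calls; the schema and table names (APP, ORDERS, INT_ORDERS) are made up for the example, and INT_ORDERS is an interim table you created yourself with the new definition:

```sql
-- Sketch of a primary-key based online redefinition; names are hypothetical.
begin
  dbms_redefinition.can_redef_table('APP', 'ORDERS');          -- pre-flight check
  dbms_redefinition.start_redef_table('APP', 'ORDERS', 'INT_ORDERS');
end;
/
declare
  l_errors pls_integer;
begin
  -- clone indexes, triggers, grants and (!) constraints onto the interim table
  dbms_redefinition.copy_table_dependents(
    uname      => 'APP',
    orig_table => 'ORDERS',
    int_table  => 'INT_ORDERS',
    num_errors => l_errors);
  -- catch up with DML that happened in the meantime, then swap the tables
  dbms_redefinition.sync_interim_table('APP', 'ORDERS', 'INT_ORDERS');
  dbms_redefinition.finish_redef_table('APP', 'ORDERS', 'INT_ORDERS');
end;
/
```

After finish_redef_table, the interim table holds the old definition and is yours to drop, which is exactly where the story below starts.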

At least, that is what happened in most of the cases and is what you would expect!

Though, in some cases… it proved impossible to drop the interim table. To me this was somewhat scary… did the redefinition not finish, or did it not finish correctly?

What happened?

There was this table that I redefined. It had referential integrity constraints (a.k.a. foreign key constraints) pointing towards it. Of course dbms_redefinition neatly created versions of these against the interim table, to be sure nothing went wrong.

When finishing the redefinition (with dbms_redefinition.finish_redef_table), most of the interim bits and pieces are cleared away and you just have to drop your interim table manually (okay, we can discuss whether this would / could / should be automated, but let’s leave that).

But… when you then manually drop this interim table (in a busy production system, I tend to want to be careful), you would just issue ‘drop table int_<tablename>‘. That does not work. dbms_redefinition “forgets” to remove these referential integrity constraints on the other tables (which are neatly named tmp$$_<constraintname>).
This then means you either issue ‘drop table int_<tablename> cascade constraints‘, which is more than the basic ‘drop table‘, or you find these constraints and remove them manually first:

-- generate drop statements for the leftover tmp$$_ foreign key constraints:
select 'alter table '||owner||'.'||table_name||
       ' drop constraint '||constraint_name||';'
from   dba_constraints
where  constraint_type = 'R'
and    r_constraint_name in
       ( select constraint_name
         from   dba_constraints
         where  table_name = 'INT_<tablename>' );

-- the generated statements look like this:
alter table <schema>.<foreign table> drop constraint TMP$$_<constraint name>;

I guess, personally, I would like dbms_redefinition to do this for me…

It’s smart enough! It created them, after all!

Just a quick additional note: setting ddl_lock_timeout to 30 or 60 for your session can actually help prevent a lot of nonsense on a busy system.
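Setting that timeout is a one-liner; with it, your DDL waits up to the given number of seconds for busy DML to clear instead of failing immediately with ORA-00054:

```sql
-- wait up to 60 seconds for locks to clear before the DDL gives up
alter session set ddl_lock_timeout = 60;
drop table int_<tablename> cascade constraints;
```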

Hope this helps someone sometime 😉

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
That means the MINE process was running, converting REDO log into PLOG format. The APPLY process had not yet started, because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were setting up the target environment. The setup (exporting the source, copying the dump to the target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… the instantiation process would take too long, and did I say we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total we could spend 57:05 hrs. trying to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in the setup wizard when setting up Replicate.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on the source and target server!
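Such entries could look like the following sketch; the host names and the service name are of course hypothetical, and the point is that the aliases, not the database names, tell you which side you are talking to:

```
# hypothetical tnsnames.ora entries, kept identical on all nodes
REPL-SOURCE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-db01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )

REPL-TARGET =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-db01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )
```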

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE is actually running. It was from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again, create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc file.

By comparing the two files, I was able to note the differences and therewith conclude the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure that all the paths on all three servers are correct, that the port numbers are correct and that all the individual steps are taken in the right order.

Having these changes, it is all downhill from now.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh in this directory.
Rename the cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in the cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.
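Steps four and five boil down to a handful of file operations. A sketch, using the file names from this post but with made-up base directories, simulated inside a temp directory so it is safe to try:

```shell
# Simulated in a temp dir; replace $BASE/... with your real paths.
BASE=$(mktemp -d)
mkdir -p "$BASE/source-node/cst-migration" "$BASE/mine-node/cst-migration"
# stand-ins for the files from the original setup
touch "$BASE/source-node/cst-migration/cst-migration-MINE.ddc" \
      "$BASE/source-node/cst-migration/cst-migration-run-source-node.sh"

# Step four: copy the MINE pieces to the new mine node, rename the run script
cp "$BASE/source-node/cst-migration/cst-migration-MINE.ddc" \
   "$BASE/source-node/cst-migration/cst-migration-run-source-node.sh" \
   "$BASE/mine-node/cst-migration/"
mv "$BASE/mine-node/cst-migration/cst-migration-run-source-node.sh" \
   "$BASE/mine-node/cst-migration/cst-migration-run-mine-node.sh"

# Step five: on the source node, the MINE ddc becomes the FETCHER ddc
mv "$BASE/source-node/cst-migration/cst-migration-MINE.ddc" \
   "$BASE/source-node/cst-migration/cst-migration-FETCHER.ddc"
```

Remember that on the real source node you still edit the run script to start FETCHER, and on the mine node you verify the paths inside the ddc file.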

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit has since verified and accepted this solution as a supported action.

Hope this helps.

My picks, no, Agenda… for UKOUG_Tech15

I went over the agenda for UKOUG_Tech15 and took my picks & suggestions.
Then I thought, why not share these…

MONDAY

The Oracle Database In-Memory Option: Challenges & Possibilities
Christian Antognini – Trivadis AG

Standard Edition Something for the Enterprise or the Cloud?
Ann Sjökvist – SE – JUST LOVE IT

All about Table Locks: DML, DDL, Foreign Key, Online Operations,…
Franck Pachot – DBi Services

Silent but Deadly : SE Deserves Your Attention
Philippe Fierens – FCP
Co-presenter(s): Jan Karremans – JK-Consult (Having a link here would be silly, right)

Oracle SE – RAC, HA and Standby are Still Available. Even Cloud!
Chris Lawless – Dbvisit

SE DBA’s Life a Bed of Roses?
Ann Sjökvist – SE – JUST LOVE IT

Oracle Standard Edition Round Table
Joel Goodman – Oracle
Co-presenter(s): Ann Sjokvist, Philippe Fierens, Jan Karremans

TUESDAY

Watch out for #RepAttack… all day long!!
And earn your RepAttack badge-ribbon…

Advanced ASH Analytics: ASHmasters
Kyle Hailey – Delphix

Community Keynote – Dominic Giles

Oracle BI Cloud Service – Moving Your Complete BI Platform to the Cloud
Mark Rittman – Rittman Mead

Infiniband for Engineered Systems
Klaas-Jan Jongsma – VX Company

Oracle Database In-Memory Option – Under the Hood
Maria Colgan – Oracle

Do an Oracle Data Guard Switchover without Your Applications Even Knowing
Marc Fielding – Pythian

Using Oracle NoSQL to Prioritise High Value Customers
James Anthony – RedStack tech

WEDNESDAY

HA for Single Instance Databases without Breaking the Bank
Niall Litchfield – Markit

Database Password Security
Pete Finnigan – PeteFinnigan.com

Connecting Oracle & Hadoop
Tanel Poder – PoderC LLC

Enterprise Use Cases for Internet of Things
Lonneke Dikmans – eProseed
Co-presenter(s): Luc Bors – eProseed

Bad Boys of On-line Replication – Changing Everything
Bjoern Rost – portrix Systems GmbH
Co-presenter(s): Jan Karremans – JK-Consult

RMAN 12c Live : It’s All About Recovery, Recovery, Recovery
René Antúnez – Pythian

Hopefully this will point you to some interesting sessions!

Synology backup with CrashPlan 4.3.0

I recently upgraded to CrashPlan 4.3.0, which I use to back up my Synology to a remote location.

On Synology, you can only run CrashPlan in a headless manner, so I run “the head”, the client, from my MacBook.
After the update to CrashPlan 4.3.0, I was unable to connect to the engine running on my Synology. And that is a pain, as I could no longer control the CrashPlan setup, which I needed to do to make some setup changes.
I thought I would write it down, as the fix is a combination of two pieces of forum information with a small alteration.

Here’s how I fixed it (I took the rigorous way, as I feel a clean start is the best start & CrashPlan keeps all your settings with your account anyway):
1) remove CrashPlan from Synology (using the package manager)
2) remove CrashPlan from my MacBook
3) install CrashPlan on Synology (using the package manager)
4) install CrashPlan on my MacBook from the CrashPlan website
5) change the client ui.properties to include serviceHost=<your NAS name / IP>
6) change .ui_info on the Synology NAS (and this was the missing bit):

Synology (server) side of things:
– Edit my.service.xml, mine was located in /volume1/@appstore/CrashPlan/conf/my.service.xml. Changed from <serviceHost>localhost</serviceHost> to <serviceHost>0.0.0.0</serviceHost>. Please keep the default port <servicePort>4243</servicePort>
– Get the server user id information, check your path… You could use the command cat /Library/Application\ Support/CrashPlan/.ui_info  ; echo

MacBook (client) side of things:
– Making a backup of the client .ui_info file just in case… sudo cp /Library/Application\ Support/CrashPlan/.ui_info /Library/Application\ Support/CrashPlan/.ui_info.backup
– Substituting the original client .ui_info content with the .ui_info coming from the server: sudo vi /Library/Application\ Support/CrashPlan/.ui_info
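The server-side edit from step 6 can be sketched as a one-line sed substitution. Below is a minimal, self-contained demo run against a throwaway skeleton of my.service.xml (the real file on a Synology lives at /volume1/@appstore/CrashPlan/conf/my.service.xml and contains more elements than shown here):

```shell
# Work on a throwaway copy so nothing real is touched.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<serviceUIConfig>
  <serviceHost>localhost</serviceHost>
  <servicePort>4243</servicePort>
</serviceUIConfig>
EOF

# Bind the engine to all interfaces so a remote client can reach it;
# leave the default <servicePort>4243</servicePort> untouched.
sed -i 's|<serviceHost>localhost</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' "$conf"

# Show the result of the substitution.
newhost=$(grep serviceHost "$conf")
echo "$newhost"
rm -f "$conf"
```

On the real NAS you would point sed at the actual my.service.xml path and then restart the CrashPlan package so the engine picks up the new binding. Note that GNU sed is assumed here; the BSD sed shipped with macOS needs `sed -i ''` instead.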

And, presto, this is what did it for me and my Synology!

Oracle Standard Edition 2, a bright new future

Okay, it is not very much more than smoke, since Ludovico Caldara found MOS note 2027072.1 about the support of Standard Edition 12.1.0.2.0 and blogged and tweeted about it.

Despite Ludovico’s disclaimer, there is, nevertheless, some smoke… And Twitter quickly filled up (at least the corners where I take an interest). Dominic Giles stated: “More to come soon!” And Ann Sjökvist urged calmness by saying: “let’s wait for facts!” And of course she is right.

Why then this blogpost?

As one of the founders of the Oracle Standard Edition Round Table (#orclSERT), this interests me. Standard Edition One comprises the most cost-effective software stack around. For a deeper dive into that statement, please visit an orclSERT session at an Oracle Usergroup event near you, or drop me a line.
Obviously, some news of this kind has long been expected, as the trend towards ever more cores per socket kept accelerating. We have actually been eagerly awaiting this for a few years, hesitant to speak about it… For obvious reasons 😉

Where there is smoke, there is fire, especially when PMs speak. And as the note states that the release of SE2 is foreseen for Q3 of 2015 (which, coincidentally, is THIS quarter), I would like to prepare myself…

This is what we have to go by for now:

Beginning with the release of Oracle Database 12.1.0.2 Standard Edition 2 (SE2), Standard Edition (SE) and Standard Edition One (SE1) are replaced by Standard Edition 2. SE2 will run on systems with up to 2 sockets and will have the ability to support a two node RAC cluster. 12.1.0.1 was the final edition that we will produce for SE and SE1. Customers running SE or SE1 will need to migrate their licenses to SE2 to be able to upgrade to 12.1.0.2.

1. SE2 will run on systems with up to 2 sockets.
– This is not different from the current SE1 rule;
– This means that a 4 socket SE installation will have to be revised, either migrate up to EE or revamp to 2 sockets;

2. SE2 will have the ability to run a 2 node cluster.
– RAC will become available across the board in the Standard Edition realm.
– How many sockets will a full SE2 cluster be able to support? Two sockets if you follow the current rules; four sockets if the licensing is optimal!

And it is always good to speculate about price… And mind you, this is smoke! The best educated guess so far is 3/4 of the way between SE1 and SE, which could (hopefully) bring the price to roughly 10k euro per socket, but… who knows? Perhaps, as Ludovico also stated, socket licensing could become history?

The great news is, Oracle Standard Edition will remain available as the alternative to the Enterprise Edition installations. For more information we will just have to hold our breath a little longer.
But, be assured, during the next session of #orclSERT we will be able to tell you (much) more!

Meanwhile I will keep preparing (and talking about it), for this change will have some impact yet…