
DBA_FEATURE_USAGE_STATISTICS and SE2


This blog post is inspired by work I have been doing on Standard Edition databases and the recurring confusion about what is and what is not part of Standard Edition.

DBA_FEATURE_USAGE_STATISTICS is a tool for determining license usage of the Oracle database. It is good to understand the implications of each entry and to know what is happening in your database, so you can have a substantial conversation about the usage of your license, be it SE, SEO, SE2 or EE!

Below is the full list of DBA_FEATURE_USAGE_STATISTICS entries, and I have found no source that maps these features to database editions. As it is a lot of tedious work, I call upon the community to help complete the list and make it as accurate as can be. So, if you have news, improvements or other bits of information, please send it to me and I will make sure it gets added!

WARNING: Still… with all the work that goes into these answers, it is not the law; it is a very serious interpretation of facts which will play a part in helping you make the right decision when it comes to database licensing.
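To see which of these features have actually been used in your own database, a query along these lines is a good starting point (a minimal sketch; pick and filter columns to taste):

-- list features that have registered usage in this database
select name
     , version
     , detected_usages
     , currently_used
     , last_usage_date
  from dba_feature_usage_statistics
 where detected_usages > 0
 order by name;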

Feature – Available in Standard Edition?
Active Data Guard – Real-Time Query on Physical Standby NO !
ADDM NO !
Advanced Replication NO !
Application Express YES
ASO native encryption and checksumming NO – EE option !
Audit Options NO !
Automatic Maintenance – Optimizer Statistics Gathering YES
Automatic Maintenance – Space Advisor YES
Automatic Maintenance – SQL Tuning Advisor NO !
Automatic Memory Tuning
Automatic Segment Space Management (system) YES
Automatic Segment Space Management (user)
Automatic SGA Tuning YES
Automatic SQL Execution Memory YES
Automatic SQL Tuning Advisor NO !
Automatic Storage Management
Automatic Undo Management
Automatic Workload Repository
AWR Baseline NO !
AWR Baseline Template NO !
AWR Report NO !
Backup BASIC Compression
Backup BZIP2 Compression
Backup Encryption
Backup HIGH Compression
Backup LOW Compression
Backup MEDIUM Compression
Backup Rollforward
Backup ZLIB Compression
Baseline Adaptive Thresholds
Baseline Static Computations
Bigfile Tablespace
Block Media Recovery NO !
Change Data Capture NO !
Change-Aware Incremental Backup
Character Semantics
Character Set
Client Identifier
Clusterwide Global Transactions
Compression Advisor
Crossedition Triggers
CSSCAN
Data Guard NO !
Data Mining NO – EE option !
Data Recovery Advisor
Database Migration Assistant for Unicode
Database Replay: Workload Capture NO ! 1
Database Replay: Workload Replay NO ! 1
DBMS_STATS Incremental Maintenance
Deferred Open Read Only
Deferred Segment Creation NO !
Direct NFS
Dynamic SGA
Editioning Views
Editions
EM Database Control
EM Grid Control
EM Performance Page
Encrypted Tablespaces
Exadata
Extensibility
File Mapping
Flashback Data Archive NO ! 2
Flashback Database NO !
GoldenGate NO – EE option ! 3
HeapCompression
Hybrid Columnar Compression NO !
Instance Caging NO !
Internode Parallel Execution
Job Scheduler
Label Security NO – EE option !
LOB
Locally Managed Tablespaces (system) YES
Locally Managed Tablespaces (user)
Locator YES
Logfile Multiplexing
Long-term Archival Backup
Materialized Views (User) NO !
Messaging Gateway NO !
MTTR Advisor
Multi Section Backup
Multiple Block Sizes
Object
OLAP – Analytic Workspaces NO – EE option !
OLAP – Cubes NO – EE option !
Oracle Database Vault NO – EE option !
Oracle Java Virtual Machine (system) YES
Oracle Java Virtual Machine (user)
Oracle Managed Files
Oracle Multimedia
Oracle Multimedia DICOM
Oracle Secure Backup
Oracle Text
Oracle Utility Datapump (Export)
Oracle Utility Datapump (Import)
Oracle Utility External Table
Oracle Utility Metadata API
Oracle Utility SQL Loader (Direct Path Load)
Parallel SQL DDL Execution NO !
Parallel SQL DML Execution NO !
Parallel SQL Query Execution NO !
Partitioning (system) YES
Partitioning (user) NO – EE option !
PL/SQL Native Compilation
Quality of Service Management NO !
Read Only Tablespace
Real Application Clusters (RAC) YES 4
Real-Time SQL Monitoring
Recovery Area
Recovery Manager (RMAN) YES
Resource Manager NO !
Restore Point
Result Cache NO !
RMAN – Disk Backup
RMAN – Tape Backup
Rules Manager
SecureFile Compression (system) YES
SecureFile Compression (user)
SecureFile Deduplication (system) YES
SecureFile Deduplication (user)
SecureFile Encryption (system) YES
SecureFile Encryption (user)
SecureFiles (system) YES
SecureFiles (user)
Segment Advisor (user)
Segment Shrink
Semantics/RDF NO !
Server Flash Cache
Server Parameter File
Services
Shared Server
Spatial NO – EE option !
SQL Access Advisor
SQL Monitoring and Tuning pages NO – EE option !
SQL Performance Analyzer NO !
SQL Plan Management NO !
SQL Profile
SQL Repair Advisor
SQL Tuning Advisor
SQL Tuning Set (system) YES
SQL Tuning Set (user)
SQL Workload Manager
Streams (system) YES 5
Streams (user)
Transparent Data Encryption
Transparent Gateway YES – option
Transportable Tablespace NO ! 6
Tune MView
Undo Advisor
Very Large Memory
Virtual Private Database (VPD) NO !  7
Workspace Manager
  1. Unless used for upgrade to Enterprise Edition.
  2. Unless used without history table optimization.
  3. GoldenGate can also be used with Standard Edition; it is a separate product.
  4. RAC on Enterprise Edition is an option.
  5. No capture from redo.
  6. Import transportable tablespaces in all editions.
  7. Policies on XDB$ACL$xd_sp in sys.v_$vpd_policy are internal (“out of the box”) policies that are used by XDB to control access to certain internal tables. All the logic is implemented in the xdb.DBMS_XDBZ package and there is no way one can control or influence the way this works.


dbms_redefinition housekeeping

dbms_redefinition actually is a nifty but powerful little toolkit that lets you change table definitions without locking the table in such a manner that regular operations would be interrupted.

You can read loads about it in the Oracle documentation or in the extensive library of Mr. Tim Hall.

One thing I noticed, and which I want to share here, has a lot to do with the housekeeping that dbms_redefinition does automatically. Or rather, with some of the bits it did not brush up after itself.

dbms_redefinition works using triggers and materialized views to help switch from your current active production table, via a so-called interim table, back to your shiny new, redefined production table. You can follow this beautifully by querying the dba_segments view along the way.
For this it obviously creates this materialized view and the other required components, and it removes them after you finish your redefinition trip. After all that is done, you can just remove your interim table and be done with it.
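For reference, a minimal sketch of such a redefinition flow, assuming an original table EMP and a pre-created, empty interim table INT_EMP (all names are placeholders):

declare
  l_errors pls_integer;
begin
  -- check that the table can be redefined using its primary key
  dbms_redefinition.can_redef_table('SCOTT', 'EMP');
  -- link EMP to the interim table and start capturing changes
  dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'INT_EMP');
  -- clone indexes, triggers, constraints and grants onto the interim table
  dbms_redefinition.copy_table_dependents('SCOTT', 'EMP', 'INT_EMP',
      num_errors => l_errors);
  -- final sync and switch; the interim table is left for you to drop
  dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'INT_EMP');
end;
/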

At least, that is what happened in most of the cases and is what you would expect!

Though, in some cases… it proved impossible to drop the interim table. To me this was somewhat scary… did the redefinition not finish, or did it not finish correctly?

What happened?

There was this table that I redefined. It had referential integrity constraints (aka foreign key constraints) pointing towards it. Of course dbms_redefinition neatly created versions of these against the interim table to be sure nothing went wrong.

When finishing redefinition (with dbms_redefinition.finish_redef_table) most of the interim bits and pieces are cleared away and you just have to drop your interim table manually (okay, we can discuss if this actually would / could / should be automated, but let’s leave that).

But… when you then manually drop this interim table (in a busy production system, I tend to want to be careful) and just issue ‘drop table int_<tablename>‘, that does not work. dbms_redefinition “forgets” to remove these referential integrity constraints from the other tables (where they are neatly named tmp$$_<constraintname>).
This then means either issuing ‘drop table int_<tablename> cascade constraints‘, which is more than the basic ‘drop table‘, or finding these constraints and removing them manually first:

-- generate the drop statements for the leftover tmp$$_ foreign keys
select 'alter table '||owner||'.'||table_name||' drop constraint '||constraint_name||';'
  from dba_constraints dc
 where constraint_type = 'R'
   and r_constraint_name in
       ( select constraint_name
           from all_constraints
          where table_name = 'INT_<tablename>' );

-- the generated statements look like this:
alter table <schema>.<foreign table> drop constraint TMP$$_<constraint name>;

I guess, personally, I would like dbms_redefinition to do this for me…

It’s smart enough! It created them!

Just a quick additional note: setting ddl_lock_timeout to 30 or 60 for your session can actually help prevent a lot of nonsense on a busy system.
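For instance, before issuing the drop (a one-liner; pick a timeout that suits your system):

alter session set ddl_lock_timeout = 60;

This makes the session wait up to 60 seconds for the necessary exclusive lock instead of failing immediately with ORA-00054.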

Hope this helps someone sometime 😉

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, works.

The following case is true:
I had half of a replication pair running.
That means the MINE process was running, converting redo log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were setting up the target environment. The reason for that was that the setup (exporting source, copying dump to target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I say we were under time pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, in total we could spend 57:05 hrs. trying to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in the setup wizard when setting up the replication.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on the source and target server!
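Something along these lines (a sketch; host names are placeholders for your own setup):

repl-source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <source-server>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )

repl-target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <target-server>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )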

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE is actually running. It was also from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again, create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc file.

By comparing the two files, I was able to note the differences and therewith conclude the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that port numbers are correct and that all the individual steps are taken in the right order. This is the overview.

With these changes in hand, it is all downhill from here.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh in this directory.
Rename the cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in the cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have since verified and accepted this solution as a supported action.

Hope this helps.

My picks, no, Agenda… for UKOUG_Tech15

I went over the agenda for UKOUG_Tech15 and took my picks & suggestions.
Then I thought, why not share these…

MONDAY

The Oracle Database In-Memory Option: Challenges & Possibilities
Christian Antognini – Trivadis AG

Standard Edition – Something for the Enterprise or the Cloud?
Ann Sjökvist – SE – JUST LOVE IT

All about Table Locks: DML, DDL, Foreign Key, Online Operations,…
Franck Pachot – DBi Services

Silent but Deadly: SE Deserves Your Attention
Philippe Fierens – FCP
Co-presenter(s): Jan Karremans – JK-Consult (Having a link here would be silly, right)

Oracle SE – RAC, HA and Standby are Still Available. Even Cloud!
Chris Lawless – Dbvisit

SE DBA’s Life a Bed of Roses?
Ann Sjökvist – SE – JUST LOVE IT

Oracle Standard Edition Round Table
Joel Goodman – Oracle
Co-presenter(s): Ann Sjokvist, Philippe Fierens, Jan Karremans

TUESDAY

Watch out for #RepAttack… all day long!!
And earn your RepAttack badge-ribbon…

Advanced ASH Analytics: ASHmasters
Kyle Hailey – Delphix

Community Keynote – Dominic Giles

Oracle BI Cloud Service – Moving Your Complete BI Platform to the Cloud
Mark Rittman – Rittman Mead

Infiniband for Engineered Systems
Klaas-Jan Jongsma – VX Company

Oracle Database In-Memory Option – Under the Hood
Maria Colgan – Oracle

Do an Oracle Data Guard Switchover without Your Applications Even Knowing
Marc Fielding – Pythian

Using Oracle NoSQL to Prioritise High Value Customers
James Anthony – RedStack tech

WEDNESDAY

HA for Single Instance Databases without Breaking the Bank
Niall Litchfield – Markit

Database Password Security
Pete Finnigan – PeteFinnigan.com

Connecting Oracle & Hadoop
Tanel Poder – PoderC LLC

Enterprise Use Cases for Internet of Things
Lonneke Dikmans – eProseed
Co-presenter(s): Luc Bors – eProseed

Bad Boys of On-line Replication – Changing Everything
Bjoern Rost – portrix Systems GmbH
Co-presenter(s): Jan Karremans – JK-Consult

RMAN 12c Live: It’s All About Recovery, Recovery, Recovery
René Antúnez – Pythian

Hopefully this will point you to some interesting sessions!

Kscope15, a celebration of tech…

Kscope15 promised to be a brand new experience in more than one way.

As I start to write this report, I am flying from Düsseldorf airport to Atlanta. It will be my first time flying to the United States with a stopover, and because of Erik van Roon, I came prepared. With just carry-on luggage, I should end up at my final destination, Fort Lauderdale, Florida, together with my ‘stuff’. I am flying Delta Airlines this time, and for an airline that promises just a ‘lunch’ during the 9-hour flight, they do come up with a lot of food…

My colleagues of FOEX have already arrived at the event-venue and are setting up our booth.

On arrival at Fort Lauderdale airport, I was scheduled to meet up with distinguished product manager for PL/SQL and EBR, Bryn Llewellyn. From there, we would travel to Hallandale Beach to check into our hotels. This plan was only hindered by the sheer force of wind shear at Atlanta International, which delayed my flight.

The first day, the Sunday, started off with a boiling walk to the Diplomat hotel. Upon registration I was pleasantly surprised that FOEX had graciously upgraded my conference pass to a full pass, which is cool as I get to attend sessions! And the kind ladies of ODTUG had even attached an ACE Associate ribbon to my name-tag, of which I am kind of proud.

I had so many cool meet-ups and run-ins at Kscope. Just to name a few new friends in no particular order:

Of course I spent most of my time in the APEX and database development tracks. If you look at the momentum that APEX is generating, I think we can safely say that we are making a difference… We can say with confidence: #LetsWreckThisTogether!

The “together” bit was beautifully expressed by Joel Kallman as you could hear a pin drop when Carl Backstrom and Scott Spadafore of the APEX team were remembered…

But still there is a lot of work to be done to further spread the word on APEX. I guess I have had at least 4 conversations where I had the opportunity to talk about and explain APEX to people who were still oblivious. That is one of the most rewarding things to do.

The week passed so quickly and most experiences are becoming great memories very quickly now. The countless meet-ups with friends and heroes from the Oracle world, the white party at Nikki Beach, the after-party at The Mansion and of course the Oracle content, which was dished out with great quality.

Just one more thing… Travelling with Uber is the best! I have been doing this in San Francisco and used the service here to get back to the airport. Why would you take a taxi with this service around? Because of the way it works, the drivers I have met have been much friendlier than regular ‘cabbies’. I would recommend it any day.

So, now I am heading home, hanging in the sky somewhere between Fort Lauderdale and Atlanta. Thinking back on Roel’s blog post on his first Kscope… will this have changed my life? Quite possibly, but on the other hand things could not get much more crazy than they have been over the last 6 to 12 months!!

If you are looking to read up on the business side of things, please check out the FOEX blog!

Please also don’t forget to check out the #Kscope15 hashtag on Twitter and remember, when you are at an Oracle conference, to also use #orclconf as an additional hashtag. This will make it even easier to follow your favorite tech community on-line!

Register redo-log manually with Dbvisit Replicate

For those of you who haven’t been working with on-line data replication: in short, it is a way to copy data from a source database to a target database while both databases are active, and to do this near real-time.
This means that when you enter data in your source database, you can immediately query it from your target database. This makes on-line data replication ideal for numerous tasks, like moving and/or upgrading your database while it is being used, with almost no downtime at all.

This tale is of an actual project that I conducted. I used Dbvisit Replicate as my tool of choice.

Dbvisit Replicate can use a so-called FETCHER process to act as the “long arm” of the MINE process. Mining extracts the information from the redo-log files but, in specific situations, this can be too much of an overhead for the source database server. By moving MINE to a proxy server, this overhead can be significantly reduced.

In some cases it can be useful to manually transfer redo-log files to the mining stage directory of Dbvisit.
I came across this requirement when catching up a lot of redo from a RAC database. In this case, the RAC cluster creates two streams (threads) of redo. When starting the replication processes, the first thread is transferred by FETCHER from the source server to the proxy before the second thread is transferred. This means mining will pause until the second thread successfully delivers its first redo-log file. The redo-log information from the second stream is necessary to create consistent and chronologically ordered SQL statements for the target database. In effect, the SCNs from the redo-log information of the first stream need to line up with the SCNs of the redo-log information of the second stream.

In this case, this meant having to wait a day or more before mining can start. This is why I decided to copy a number of redo-log files from the source server to the proxy server, where the MINE process is running, manually.
After the copy, the files need to be registered in the dbvrep repository. Without this information, the MINE process has no knowledge of the files that are present or of what their contents are.

The update is an easy insert statement, but it should be handled with care, as this needs to be quite precise and it needs a bit of specific information about the redo-log files being added.
You can use the following insert statement to register the files:

insert into dbvrep.dbrsmine_redo_log_history
       (
       ddc_id
     , mine_process_name
     , sequence
     , thread
     , resetlogs_id
     , first_scn
     , next_scn
     , online_name
     , arch_name
     , read_count
     , from_fetcher
     , last_mine_start
     , last_mine_end
     , create_date
     , last_change_date
       )
values
       (
       1
     , 'MINE'
     , 128779 -- sequence number of the copied file
     , 2      -- assuming you are updating this thread
     , 804864915 -- the resetlogs id from the redo-log file
     , 199910296688 -- the first scn from the redo-log file
     , 199911476897 -- the next scn from the redo-log file
     , null
     , '/u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/thread_2_seq_128779.1485.804864915'
       -- full path and name of the file
     , 0
     , 'Y'
     , null
     , null
     , sysdate
     , sysdate
       )
;

And you can get the information you need about the files here:

select lh.sequence#
     , di.resetlogs_id
     , lh.first_change#
     , lh.next_change#
  from v$log_history lh
 inner join v$database_incarnation di
 using (resetlogs_change#)
 where lh.sequence# = 128779
   and lh.thread# = 2 -- the thread you are registering
;

After registering the first file for the second thread, you can watch the MINE process kick off in the Replicate console. The process will then halt again after this first file of the second stream has been processed in parallel with the first file of the first stream.


I kept adding files until the FETCHER process was able to take over; alternatively, you could keep doing this until your test case or PoC is over.

OUGN15, The “boat conference” revisited

Reflections on OUGN

Sometimes things in life can change quickly! It is only two years ago that I came to Oslo for the first time to join the Scandinavian Oracle crew on a boat trip to Kiel.
At that time I had never actually participated in this kind of experience, and I wasn’t into presenting either. Together with my good friend Philippe Fierens I discovered a whole new world back then. You could have read about these experiences in some blog post, but that was lost in the move to my own site, sorry!

And this trip couldn’t have been more different! With three presentations accepted, the two days at sea will be a reunion with the friends I made over the last years, as well as a way to contribute to one of the most tight-knit tech communities I know. And all this in a scene that I remember vividly from being a newbie… which is somewhat strange, believe me.

After a quick and pleasant flight I touched down in Oslo, flying from Amsterdam with a decent-sized crew of Dutch Oracle enthusiasts, including my good friends Patrick Barel and Alex Nuijten. Waiting in the Oslo airport for Luís Marques, I caught up with Gurcan Orhan, which was a great surprise.
Later that day we found ourselves in the Oslo harbor for the speakers dinner. You can imagine the collective amount of Oracle knowledge packed into that one restaurant!

Enkitec’s Frits Hoogland on Ansible

After a somewhat restless night we arrived, on Thursday morning, at the ship Color Fantasy in the company of Heli Helskyaho, just in time for the keynotes. It was good to see that Mark Rittman and James Morle had made it on board too, especially as James was up for the delivery of version 2.0 of his vibrant keynote! Next we proceeded to bring our luggage to our cabins and grab a spot of lunch on the exhibition floor down in the belly of the ship. The setup of the exhibition was quite nice and gave a good opportunity to mix and mingle.
The afternoon was spent on sessions, where I visited Frits Hoogland’s Ansible talk, and on preparation for my own session at 18:00. This was the last run of this APEX presentation, as I have retired it after OUGN15. The slides will be archived here.
After finalizing the preparations for the third edition of the Standard Edition Round Table (aka “slide polishing”) with the #orclSERT team, comprising Ann Sjökvist, Philippe and myself, it was time for the soiree and for dinner in the grand restaurant on board. It had been a good first day!

Dinner with the international crew on board the Color Fantasy.
Warm reception at Kiel port.

The second day of OUGN15 started with a multitude of sessions including the third edition of the Oracle Standard Edition Round Table, which was actually quite busy and interactive. We had some good discussions, and that at 09:00, so thank you, everybody.
Of course, as has been declared a tradition, Björn Rost was present in the Kiel harbor. With the famous “Basil Smash Gin & Tonic” and sandwiches we were welcomed on German soil.
My afternoon comprised three sessions, starting with my own, called “Okay, and now my database server crashed…”, which was quite nicely received. Next came Alex Nuijten on 12c new features for developers, topped off with Tim Gorman, who taught us to be CSI people in finding issues in the database.
After an enjoyable evening in the various bars and discotheques of the ship, we closed the official part of the Oracle User Group Norway Vårseminar 2015, thanking the board and of course especially Øyvind Isene for their hard work.

If you want to catch up further on the unconference communications surrounding this event, please do check out the Twitter hashtag #OUGN15. It also includes a great set of snapshots and pictures taken along the way…

Oslo, until the next time!

Can you boost your Oracle database performance on HP-UX for free?

Database performance, as is true with all performance related matters, has to do with resources.

This story specifically focuses on a real-life experience with Oracle database performance on HP-UX, running on HP Integrity servers with Intel Itanium CPUs like these:

[Screenshot: CPU info]

The issue with this installation is hyper-threading, a.k.a. the use of the logical processors.
When the server is booted and running, you can do a basic performance review with a default tool like top.

[Screenshot: top output with lcpu_attr=0]

In this exact case the server is running fine and there is no need to investigate further. But in cases where there are performance issues, it is a good idea to be aware of the numbering of the CPUs in this overview (0, 2). This numbering suggests there would also be a ‘1’, and where there is a ‘1’ there would probably also be a ‘3’…

Yes, there is, and it is called ‘lcpu_attr’, an HP-UX kernel parameter which is, to my taste, a bit odd, not well known and not well documented…

lcpu_attr (Tunable Kernel Parameter)

When turned on, lcpu_attr activates the logical CPUs immediately. When you run top again, this is what it will look like (immediately):

[Screenshot: top output with lcpu_attr=1]

Okay! Great… but… there are some catches.
The lcpu_attr parameter is a dynamically tunable kernel parameter, but… changing it will crash your databases. So you will need a minimum of planned downtime for this action.

Also, you can set hyper-threading on in the EFI boot-loader.
But then you should be aware of this!

In the end, in this real-life story, we helped the situation advance by just doing the following (see the sketch after this list):

1. stopping the Oracle database(s)
2. kctune lcpu_attr=1
3. starting the Oracle database(s)
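On the command line that boils down to something like this (a minimal sketch):

# check the current setting
kctune lcpu_attr
# enable the logical CPUs (stop your databases first!)
kctune lcpu_attr=1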

All in all, it need not be difficult to boost your Oracle database performance on HP-UX for free!

Thanks to my good friend Gerard van der Kooij for finding the final link!

Okay, and now my database server crashed…

RTO/RPO, who has ever heard of that! That was Star Wars, right?
Storing data and never having to go without or losing any… Yes, that’s more like it.


Okay, and these two have everything to do with each other!

Talking about these two fancy IT abbreviations, I have raised many eyebrows and helped secure businesses!

What are they?
RTO: Recovery Time Objective, or rather: how long should it take before your database is up and running again?
RPO: Recovery Point Objective: how much data can you stand to lose?

It is customary to put real amounts of time on both of these parameters. This is one of those true points where IT ‘meets’ business, one of those do-or-die SLA parameters.
How long before you can start working again after something has gone somewhat horribly wrong? Depending on the business (and for the sake of argument), you will get something like: “Oh well, if we are back in business in, say, an hour, I guess we’ll be fine.” Okay, so we have RTO = 1 hr.
And how much data can you afford to lose? “Losing data, what do you mean?” Well, let’s say you have been on the phone and in the field harvesting order data and putting this in the database… how much of this information can be reproduced when your environment fails? We’ll go with two scenarios. We will presume “Oh no, NOTHING!” and “Hmmm, well, 10 minutes, if need be!”, making respectively RPO = 0 min. and RPO = 10 min.

  • RTO = 1 hour
  • RPO = 0 minutes or 10 minutes.

Let us investigate what this means, assuming we have a functional backup running every night and that our drama happens at 15:45 on a working day.

What do we have when we do nothing?
After establishing that we have a system crash at hand, we need to start working immediately to rebuild something, but do we have something to build upon?
Do we have hardware? And does it somewhat meet specs? Can we run our OS (version) on it? Do we have OS media to install with? Do we have Oracle media to install with? Can we get network, and so on…
And if we have this do we have enough expertise to get it installed?
Well, I guess it’s clear… We need to invest big-time! A few hours getting all the facts straight and getting hardware, a few hours to install and configure the OS, a few more for Oracle, getting it to resemble the former production environment, and then restoring the backup!
RTO = starting at 8 hours.
Looking at our RPO? Well, okay, that’s easy! We back up at midnight (0:00) and we crash at 15:45. So we will have lost 15 hours and three quarters.
RPO = 15:45 hours.
Acceptable? No, not really!

It’s clear we have to do something.
The first step is to reduce RTO, we need to be able to continue work faster.
We can do this by making sure we have a second server standing by in a different location: have it installed, have it configured and ready to jump into action. You could call this a Standby Server.
But even now there is no guarantee we make our target, since restoring a backup and getting the database up and running could still easily take over 1 hour when dealing with red tape and decision levels. To hit the home run we need to add one more feature: not only a Standby Server, but also a Standby Database, a database that can be “opened” or “activated” in mere minutes.

  • If you are running Enterprise Edition Database, you can use Oracle Data Guard, which is included in your database license.
  • If you are running Standard Edition Database, you can get the Smart Alternative from Dbvisit.

With Standby Database in place:
RTO = 5 minutes!!

Now we need to tackle RPO!
Or… do we still?
RPO = 10 minutes is actually already tackled by the Standby Database implementation.
Because of the characteristics of a Standby Database, we not only have an RTO of mere minutes, we also have an RPO of a configurable duration.
Data is transferred to the Standby Database environment by means of archived redo log files. This mechanism is influenced by the switching of log files; if you switch at small enough intervals (less than our target of 10 minutes), you make sure that the age of the data in the Standby Database meets the target “Recovery Point Objective”!
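A minimal sketch of that idea: force a log switch on the primary at an interval below your RPO, for instance from a cron or scheduler job:

-- on the primary: force the current redo log to be archived
-- (and thus shipped to the standby); run at an interval < RPO
alter system switch logfile;

Alternatively, the archive_lag_target initialization parameter can force such a switch automatically at a set interval.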
RPO = 0 minutes
Well, okay, this is something else. And if we think about it a little, it is something completely different!
Recovery Point Objective, the amount of data we can stand to lose, is 0 (nothing!). This actually means we have to create a Standby Database setup which is kept up to date with the primary environment, synchronously. This kind of Standby Database environment allows you to switch to the second environment within seconds and continue your business operation without delay!

And, with your Active-Active Standby Database solutions in place:
RPO = 0 minutes!

So, now you know about RTO and RPO to secure your data, and you know this guy is something else:

[Image: R2-D2]

Increasing the reach of your SE database license

Imagine the following situation…

For some years now, your business has been investing in centralizing valuable business information. After some research in the market, you found the Oracle database to be the best fit for your requirements.
Using the free Oracle Application Express (APEX) framework, which helps you rapidly develop the web applications needed to support both internal and external users, was a bonus. By basing this installation on the Oracle Standard Edition One database, you created this solution at the lowest possible investment!

As with many great projects, the use and the number of APEX applications keep growing. With the addition of ready-to-use applications to inspire you and many cool plug-ins to ever increase the usability and integration possibilities, you get caught up in the data-growth dogma!
With an ever-increasing user population and an expansion of data reporting for ever faster business reporting, your initial system is starting to fail, showing ever more frequent performance lags or system unavailability. These problems form a risk for your business, a risk you need to eliminate as soon as possible!
The standard advice here would be to upgrade your environment: a bigger machine and an Enterprise Edition database. This is what that investment would look like…

  • Medium Oracle Sun Server X2-4 with 4 x 10-core CPUs at € 42,500
  • (40 cores x 0,5 core factor **) 20 Oracle Database Enterprise Edition licenses at € 914,800

To avoid rendering your application infrastructure worthless through the required investment, a more reasonable step would be to migrate to Oracle Database Standard Edition.

  • Medium Oracle Sun Server X2-4 with 4 x 10 core CPU’s at € 42,500
  • 4 Oracle Database Standard Edition licenses at € 67,400

That still requires a total investment of more than a hundred thousand euros, and it leaves you with the old server and licenses to be decommissioned.

In many implementations it is not data entry but data mining or information aggregation that are the costly processes, and this will probably be true in this situation too. With a little investigation it is possible to separate out a number of functions that only query data and do not necessarily modify it. Especially in this situation you can also increase your application performance by moving these specific processes to a new environment.

But… how…

The information in the new environment needs to be real-time consistent with the “production” or primary environment. Here we introduce a real-time data replication solution like Dbvisit Replicate which will create just this real-time consistent query environment for you! This makes for the following investment:

  • Medium Oracle Sun Server X4-2 with 2 x 8 core CPU’s at € 19,500
  • 2 Oracle Database Standard Edition One licenses at € 11,200
  • 4 Dbvisit Replicate XTD at € 16,180

With this installation you add about € 50 k of licensing instead of € 100 k with the Standard Edition migration. And with this choice, you separate your time-critical data-entry process from the query environment, making sure a misfired query will not influence the availability of your data-entry environment, which is a cool extra advantage!

* All prices are based on list-prices, excluding VAT and including 1 year of support.
** Based on the Oracle Processor Core Factor Table.