Why don't applications that switch to Oracle Real Application Clusters (RAC) simply scale up as expected? Generally you will find that a single-instance database with a baseline performance of 100% under heavy load does not give you the expected 200% when you add a second server, as you would in a RAC environment. You will probably find that performance falls well short of expectations, with an actual reduction (under the same heavy load) to 70%. This is often the reason why a twin-server RAC is configured to run in hot standby, utilizing only one server until a fail-over becomes necessary. But that means wasting half of your resources most of the time, just as with an HA cluster solution. So the question is: can you get higher performance whilst retaining high availability, without having to trade one off against the other? The answer is yes. But you don't get it by chance; you have to be aware of some important issues.

The first and most important aspect is that we must have some kind of logical separation within our application. That could be online and batch services, or private and business services, or anything else which gives you the ability to differentiate the work to be done. If we find something which breaks down even further, say into 10 or 20 aspects, that's fine too, because we can simply group them together as needed. What we are aiming for is to map the work specifically to the nodes we have in our Real Application Cluster; we don't want to leave that to chance. We want to direct certain work to specific nodes rather than have the SCAN listener do its standard load balancing of distributing connections across the available nodes. That in itself can get a bit tricky to control, but there are various ways of achieving it. The second most important thing is to make sure the data buffers our work needs are held by that node. Data not held on that node will be obtained from another node via the interconnect or via direct I/O, and we want to avoid that most of the time. To achieve that, we should have our tables partitioned by our logical separation criteria. Although that sounds pretty simple, it can be difficult to implement, but the rewards are stunning.
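As a sketch of the idea (the service, database and table names here are hypothetical), the workload direction can be done with instance-pinned services, and the data placement with list partitioning along the same separation criterion:

```sql
-- Direct each workload class to a preferred instance via services.
-- Run these as the oracle user (PROD, PROD1, PROD2 are illustrative):
--   srvctl add service -d PROD -s ONLINE_SVC -r PROD1 -a PROD2
--   srvctl add service -d PROD -s BATCH_SVC  -r PROD2 -a PROD1

-- Partition the data along the same logical separation, so each
-- node mostly touches (and masters) its own partitions
CREATE TABLE orders
( order_id NUMBER
, channel  VARCHAR2(10)   -- 'ONLINE' or 'BATCH'
, payload  VARCHAR2(200)
)
PARTITION BY LIST (channel)
( PARTITION p_online VALUES ('ONLINE')
, PARTITION p_batch  VALUES ('BATCH')
);
```

Clients connecting through ONLINE_SVC then land on the node that masters the online partitions, and the batch work stays on the other node.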

We should find that we get very close to the expected 200% performance increase. If we later need to handle an even higher workload, we should be able to scale up to 3 or 4 database nodes and divide our logical workload accordingly. The key to success is getting our table partitions divided in line with our logical workload. The partitions (and their buffers) will be mastered on the node where the most work is being done. It may sound like we are forcing data blocks to be held on specific nodes, but that is not really true. We are preferring data to be held where it is currently needed. If a node is added or removed, the data is simply relocated to a different node. What we want to avoid at all costs is hot buffers being constantly passed between nodes. Whilst the exchange of data across the interconnect is very fast, hot buffers which are dirty (i.e. contain changes not yet written to disk) may have to be synced to storage before the block is handed over to the other instance. Instead of an interconnect block access on an old, clean block taking maybe 3ms, it may take 103ms because the hot buffer has to be synced to the SAN (costing 100ms) first.

Whilst there are many more standard Oracle features which you can leverage to help scale an application in a RAC environment this is probably the most significant.

The basic paradigm for High Performance RAC development is: "Don't leave anything to chance and always build to scale".

While upgrading an Oracle database from 10g to 10.2.0.4 under Solaris I ran into a number of problems, so this is a log of what went wrong and how to get around it. This log refers to a single-instance, non-RAC database environment. All tasks should be performed as the oracle user except where otherwise stated.

1). You first need to download p6810189_10204_Solaris-64.zip and unzip it, to /tmp for example.

2). Shutdown the database instance and check that all oracle processes have terminated.

3). Now start the runInstaller from Disk1 in the unzipped tmp directory, i.e. /tmp/oracle/Disk1

4). When prompted run the root.sh script as root.

5). Startup the database with sqlplus as sys in upgrade mode "startup upgrade".

6). Execute the upgrade assistant "dbua" and follow the instructions.
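For reference, steps 2 and 5 look like this in sqlplus, connected as SYS (a minimal sketch):

```sql
-- Step 2: stop the instance cleanly before applying the patch set
SHUTDOWN IMMEDIATE
-- ...apply 10.2.0.4 with runInstaller (steps 3 and 4), then:
-- Step 5: open the database for the upgrade scripts only
STARTUP UPGRADE
```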

 

Possible problems are:

1). You can get an ORA-01507 during the dbua execution. Under Solaris that resulted from a discrepancy between ORACLE_HOME and the oratab entry. Correcting it was tricky for two reasons: firstly, you need root rights to change oratab, and secondly, I found two versions of oratab, in /etc and in /var/opt/oracle.

2). Because ORACLE_HOME was not set up correctly I needed to alter the root.sh script. The rdbms/install/rootadd.sh script and the oraInst.loc in ORACLE_HOME also had an incorrect path. Although the paths actually mapped to the same physical directories, they used alternate symlinked paths and were as such not identical. This caused dbua to throw an ORA-01507, saying that "Dbua thinks this is a Rerun operation" and that the database is not mounted - which was all rubbish.

Oracle Flashback Schema - do-it-yourself


Oracle's Flashback functionality is a really great feature and it comes in all shapes and sizes except "schema". If you are writing a test suite and want exactly the same data conditions per test, it would be fantastic to build the schema and fill it with initial data, then run 100 different tests which each reset the data back to the initial timestamp. That would be "Flashback Schema", which neither 10g nor 11g has on offer. But hey folks, let's do it ourselves!

The following code is a simple implementation of Flashback Schema, with some restrictions regarding the kinds of objects which can be flashed back:


CREATE OR REPLACE PROCEDURE Flashback_Schema( p_flashback2date IN DATE ) AS
--
-- Author: Paul Lewin / 20th March 2009
-- Version 1.1 / 25th March 2009
-- Description:  Flashback an Oracle schema to a past point-in-time
-- Field of use: Not for production but for testing and simulation (QA)
--
-- Setup the instance:
--  Set the retention period for the instance before use (in seconds) 86400 = 24 Hrs
--  Make sure the UNDO tablespace is large enough to hold all changes for 24 Hrs
--  ALTER SYSTEM SET UNDO_RETENTION = 86400;
--  ALTER TABLESPACE UNDOTBS1 RETENTION GUARANTEE;
--
-- Setup the user (as SYSTEM):
--  GRANT EXECUTE ON DBMS_FLASHBACK TO username;
--  GRANT FLASHBACK ANY TABLE TO username;
--  GRANT EXECUTE ANY PROCEDURE TO username;
--  GRANT SELECT ANY TRANSACTION TO username;
--
-- Calling:
--  Use the following call in each schema to reset the data to a specific point-in-time
--  EXEC Flashback_Schema( TO_DATE('18.03.2009 10:50','DD.MM.YYYY HH24:MI') );
--
-- Restrictions:
--  This will not work on schemas which include materialized views.
--  Truncating tables will make this functionality inoperable.
--  ALTER TABLE or tablespace maintenance commands will also make it inoperable.

-- Cursor builds the EXECUTE IMMEDIATE commands to enable row movement
CURSOR c_cmd_move_enable IS
  SELECT 'alter table '||table_name||' enable row movement'
    FROM user_tables
   WHERE temporary = 'N'
     AND row_movement = 'DISABLED';

-- Cursor builds the EXECUTE IMMEDIATE commands to revoke the previously
-- enabled row movement. It is opened before the enable loop runs, so its
-- read-consistent result set covers exactly the tables enabled there.
CURSOR c_cmd_move_disable IS
  SELECT 'alter table '||table_name||' disable row movement'
    FROM user_tables
   WHERE temporary = 'N'
     AND row_movement = 'DISABLED';

-- Get a list of tables in the current schema (excluding materialized views)
CURSOR c_cmd_flashback_list IS
  SELECT table_name
    FROM user_tables
   WHERE temporary = 'N'
     AND table_name NOT IN (SELECT mview_name FROM user_mviews)
   ORDER BY table_name;

v_cmd            varchar2(250 char);
v_cmd_flashback  varchar2(32000 char) := 'flashback table ';
v_flashback_date date;
v_table_cnt      number := 0;
v_reset_enable   number := 0;

BEGIN
  /* Default flashback by 1 hour */
  IF p_flashback2date IS NULL THEN
    v_flashback_date := sysdate - (1/24);
  ELSE
    v_flashback_date := p_flashback2date;
  END IF;

  OPEN c_cmd_move_enable;
  OPEN c_cmd_move_disable;
  OPEN c_cmd_flashback_list;

  /* All tables must have row movement allowed! */
  LOOP
    FETCH c_cmd_move_enable INTO v_cmd;
    EXIT WHEN c_cmd_move_enable%NOTFOUND;
--  dbms_output.put_line( 'EXECUTE IMMEDIATE ' || v_cmd );
    EXECUTE IMMEDIATE v_cmd;
    v_reset_enable := 1;
  END LOOP;

  /* Build a comma-separated list of the tables in this schema,
     six names per line */
  LOOP
    FETCH c_cmd_flashback_list INTO v_cmd;
    EXIT WHEN c_cmd_flashback_list%NOTFOUND;
    IF v_table_cnt > 0 THEN
      v_cmd_flashback := v_cmd_flashback || ', ' || v_cmd;
      IF mod(v_table_cnt, 6) = 0 THEN
        v_cmd_flashback := v_cmd_flashback || chr(13);
      END IF;
    ELSE
      v_cmd_flashback := v_cmd_flashback || v_cmd; -- first table doesn't need a comma
    END IF;
    v_table_cnt := v_table_cnt + 1;
  END LOOP;
  v_cmd_flashback := v_cmd_flashback || ' TO TIMESTAMP :1';

  /* Flashback all tables in one statement */
--dbms_output.put_line( 'EXECUTE IMMEDIATE ' || v_cmd_flashback || ' using parameter ' || to_char(v_flashback_date, 'DD.MM.YYYY HH24:MI') );
  EXECUTE IMMEDIATE v_cmd_flashback USING v_flashback_date;

  /* Restore row movement on all tables which previously had it disabled */
  LOOP
    FETCH c_cmd_move_disable INTO v_cmd;
    EXIT WHEN c_cmd_move_disable%NOTFOUND;
    v_reset_enable := 0;
--  dbms_output.put_line( 'EXECUTE IMMEDIATE ' || v_cmd );
    EXECUTE IMMEDIATE v_cmd;
  END LOOP;

  CLOSE c_cmd_flashback_list;
  CLOSE c_cmd_move_disable;
  CLOSE c_cmd_move_enable;

EXCEPTION
  WHEN OTHERS THEN
    IF c_cmd_flashback_list%ISOPEN THEN
      CLOSE c_cmd_flashback_list;
    END IF;
    /* If row movement was enabled but not yet reset, restore it
       while the disable cursor is still open */
    IF v_reset_enable = 1 AND c_cmd_move_disable%ISOPEN THEN
      LOOP
        FETCH c_cmd_move_disable INTO v_cmd;
        EXIT WHEN c_cmd_move_disable%NOTFOUND;
--      dbms_output.put_line( 'EXECUTE IMMEDIATE ' || v_cmd );
        EXECUTE IMMEDIATE v_cmd;
      END LOOP;
    END IF;
    IF c_cmd_move_disable%ISOPEN THEN
      CLOSE c_cmd_move_disable;
    END IF;
    IF c_cmd_move_enable%ISOPEN THEN
      CLOSE c_cmd_move_enable;
    END IF;
    RAISE;
END;
/

When using the Oracle 10g flashback features it's important to know the differences between the individual options and their respective restrictions. Flashback Table, Flashback Query, Flashback Transaction Query and Flashback Version Query are all based on undo (rollback) data, which primarily provides read consistency and the rolling back of transactions in Oracle 10g. Flashback Database, however, is based on before-images in the flashback logs, and Flashback Drop is based on another 10g feature called the Recycle Bin, which manages dropped database objects until their space is required to store new data.
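To illustrate the difference (the table name and interval here are only illustrative), the undo-based options read or rewind past data, while Flashback Drop restores a dropped object from the Recycle Bin:

```sql
-- Undo-based: query a table as it was 15 minutes ago (Flashback Query)
SELECT * FROM employees
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- Undo-based: rewind the table contents in place (Flashback Table;
-- requires ALTER TABLE employees ENABLE ROW MOVEMENT)
FLASHBACK TABLE employees
  TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- Recycle-Bin based: bring back a dropped table (Flashback Drop)
FLASHBACK TABLE employees TO BEFORE DROP;
```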

Oracle Metalink has some very useful documents relating to these features:

  • Note 249319.1 - Configure flashback database
  • Note 270060.1 - Use Flashback Table Feature and Resolve errors
  • Note 270535.1 - Restrictions on Flashback Table Feature
  • Note 317499.1 - 10G Oracle Flashback Transaction Query - Introduction and usage
  • Note 369755.1 - Flashback Logs-Space management
  • Note 435998.1 - What Do All 10g Flashback Features Rely on and what are their Limitations

 

 

Oracle 11g gets case-sensitive passwords


Finally, with Release 11g, we get case-sensitive passwords on user accounts. What sounds so simple and normal has taken decades to arrive in the feature list of Oracle's RDBMS. For those who never expected Oracle to jump that gap, there is a compatibility mode which allows you to keep passwords compatible with previous versions, i.e. non-case-sensitive. As usual, major changes don't have to be adopted immediately.
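The switch is the sec_case_sensitive_logon parameter; a sketch of the two modes (TRUE is the 11g default):

```sql
-- 11g default: passwords are case sensitive
ALTER SYSTEM SET sec_case_sensitive_logon = TRUE;

-- Compatibility mode: case-insensitive, as in 10g and earlier
ALTER SYSTEM SET sec_case_sensitive_logon = FALSE;
```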

Note: Even if a major change is more of a major impact than anything else. ;-)

Comparing Oracle Database Features

If you are trying to track down a problem that only occurs on one system, say between a production system and an integration system, you want to be sure your systems behave identically before you get into any application analysis work. You can check the basic system configuration with the host operating system's diagnostic tools, but what about the databases?

Oracle Enterprise Manager has a neat facility to help with database comparison. Using the "Deployments" register, simply perform a "Compare Database Configuration", specifying the database instances you want to verify. Take a look at the Oracle 10.2 online documentation for details.

OWB 10.2.0.4 Windows Client download from Metalink


Back in September Oracle released version 10.2.0.4 of Oracle Warehouse Builder for Windows and other platforms. If you go looking for the client download, don't look for a Windows XP client; just head directly for the patch on Metalink (https://metalink.oracle.com/). The patch id is 7005587 and you will need the 32-bit client for Windows.

Further details are here on the Oracle OWB blog.

Installing or re-installing JVM under 10g

If you already have a 10g database set up and running without the JVM (Java Virtual Machine) installed, and later decide you want to add, say, OWB (Oracle Warehouse Builder) to your instance, then you need to get the JVM working first. It's not that complicated really, but follow the steps laid out in Metalink (Doc-ID 276554.1) if you want to get it right first time.

Oracle tuning without the Enterprise Manager

A friend pointed me to this interesting blog (The Go-Faster Oracle Blog), which has some useful links to performance tools. Analyzing an Oracle database without the Enterprise Manager installed, or perhaps without it being available to you, can be a nightmare. He has developed some Excel sheets which access the AWR data and output graphics and more, right down to individual session analysis. A very useful tool-set, so take a look.
If you are trying to set up an Oracle instance for running OWB then you will need to set remote_login_passwordfile to EXCLUSIVE in order for that to work. Using sqlplus, connect to the instance and change the setting like this:

> sqlplus /nolog

SQL> connect / as sysdba

SQL> alter system set remote_login_passwordfile=EXCLUSIVE scope=spfile;


That should do the trick, although you will need to bounce the instance before it takes effect.

You may also need to re-generate the sys password file like this:

SQL> orapwd file=orapwinst password=pwd123 force=y
