Recently in Performance Category

Why don't applications that switch to Oracle Real Application Clusters (RAC) simply scale up as expected? Generally you will find that a single-instance database with a baseline performance of 100% under heavy load does not give you the expected 200% when you add a second server, as you would in a RAC environment. You will probably find that performance falls well short of expectations, with an actual reduction (under the same heavy load) to around 70%. This is often the reason why a twin-server RAC is configured to run in hot standby, utilizing only one server until a failover becomes necessary. But that means wasting half of your resources most of the time, just as with a classic high-availability cluster (HAC) solution. So the question is: can you get higher performance whilst retaining high availability, so that you don't have to trade one off against the other? The answer is yes. But you don't get it by chance; you have to be aware of some important issues.

The first and most important aspect is that we must have some kind of logical separation within our application. That could be online and batch services, or private and business services, or anything else which gives you the ability to differentiate the work to be done. If the workload breaks down even further, say into 10 or 20 aspects, that's fine too, because we can simply group them together as needed. What we are aiming for is to deliberately map the work onto the nodes we have in our Real Application Cluster; we don't want to leave that to chance. We want to direct certain work to specific nodes rather than letting the SCAN listener do its standard load balancing of distributing connections across the available nodes. That in itself can get a bit tricky to control, but there are various ways of achieving it. The second key point is to make sure the data buffers our work needs are held by that node. Data not held on that node will be obtained from another node via the interconnect or via direct I/O, and we want to avoid that most of the time. To achieve that, our tables should be partitioned by our logical separation criteria. Although that sounds pretty simple, it can be difficult to implement, but the rewards are stunning.
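As a sketch of what that separation might look like (the table, column, and service names here are purely illustrative, not taken from any particular application), you could list-partition by the workload type and create one database service per workload:

```sql
-- Illustrative only: partition a table by a logical workload column.
CREATE TABLE app_orders (
  order_id   NUMBER        NOT NULL,
  workload   VARCHAR2(10)  NOT NULL,   -- e.g. 'ONLINE' or 'BATCH'
  payload    VARCHAR2(4000)
)
PARTITION BY LIST (workload) (
  PARTITION p_online VALUES ('ONLINE'),
  PARTITION p_batch  VALUES ('BATCH')
);

-- One service per workload; clients connect via the service name,
-- so each workload can be directed to its own instance.
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'online_svc',
                              network_name => 'online_svc');
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'batch_svc',
                              network_name => 'batch_svc');
END;
/
```

In a clustered environment you would typically manage such services with srvctl instead, giving each service a preferred instance and one or more available instances for failover.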

We should find that we get very close to the expected 200% performance increase. If we later need to handle an even higher workload, we should be able to scale out to 3 or 4 database nodes and divide our logical workload accordingly. The key to success is getting our table partitions divided in line with our logical workload. The partitions (and their buffers) will be mastered on the node where most of the work is being done. It may sound like we are forcing data blocks to be held on specific nodes, but that is not really true; we are preferring data to be held where it is currently needed. If a node is added or removed, the data is simply remastered onto a different node. What we want to avoid at all costs is hot buffers being constantly passed between nodes. Whilst the exchange of data across the interconnect is very fast, a hot buffer that is dirty (i.e. contains changes not yet persisted) cannot simply be handed over: the pending changes must be synced to storage before the block is shipped to the other instance. Instead of an interconnect block access on a clean block taking maybe 3 ms, it may take 103 ms because the sync to the SAN (costing 100 ms) has to happen first.
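One way to check whether hot blocks are in fact being shipped between instances is to watch the global cache statistics. A quick sketch using the standard `gv$sysstat` statistic names; interpret the absolute values against your own baseline:

```sql
-- How many consistent-read and current-mode blocks each instance
-- has received from other instances via the interconnect.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('gc cr blocks received', 'gc current blocks received')
ORDER  BY inst_id, name;
```

Steadily climbing numbers on every instance for the same working set suggest that the workload-to-node mapping described above is not holding.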

Whilst there are many more standard Oracle features which you can leverage to help scale an application in a RAC environment, this is probably the most significant.

The basic paradigm for high-performance RAC development is "Don't leave anything to chance and always build to scale".

Oracle tuning without the Enterprise Manager

A friend pointed me at this interesting blog (The Go-Faster Oracle Blog), which has some useful links to performance tools. Analyzing an Oracle database without the Enterprise Manager installed, or when it is simply not available to you, can be a nightmare. He has developed some Excel sheets which access the AWR data and output graphics and more, right down to individual-session analysis. A very useful tool-set, so take a look.
Christian Antognini has given a number of presentations on the topic of Oracle's 11g Optimizer. He has posted the presentation and demo which he most recently delivered at the DOAG Conference in Nürnberg, Germany. He also has a whole load of other useful publications available for viewing.

Oracle Optimizer and default row counts

This article is basically about using multiple values in bind variables, but it touches on an interesting point regarding the CBO: in some situations, such as selecting from a collection via a table function, the optimizer has no statistics at all and simply assumes a default of 8168 rows as a good guess (a figure tied to the default 8 KB block size).
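You can see the guess for yourself with a small demo (the type and alias names are invented for illustration):

```sql
-- A collection type the optimizer knows nothing about.
CREATE TYPE num_tab AS TABLE OF NUMBER;
/

EXPLAIN PLAN FOR
  SELECT * FROM TABLE(num_tab(1, 2, 3));
-- The plan shows an estimate of roughly 8168 rows for the
-- collection, even though it only holds three.

-- The (undocumented) CARDINALITY hint can correct the estimate:
SELECT /*+ CARDINALITY(t 3) */ *
FROM   TABLE(num_tab(1, 2, 3)) t;
```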

The Pitfalls of Hints


When tuning a database you have to contend with a vast number of possible approaches. Whilst anything Oracle supplies in the way of tools and techniques can be regarded as a valid means of correcting some unwanted behavior, there are some tools which should be avoided in production.

The use of "hints" in particular should be regarded as an exploratory method for diagnosing what could be done to improve things, or what might happen when a database gets loaded with large amounts of data and performance starts to degrade. Using hints in a production environment should only be done as a last resort, and as a temporary work-around until some more robust correction can be installed. This article, 'Rules for Hinting' by Jonathan Lewis, demonstrates why that is.
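A small example of why hints are fragile in production (the object and index names are invented for illustration):

```sql
-- Forces the use of a named index. If emp_name_ix is later dropped,
-- renamed, or superseded by a better index, the hint silently becomes
-- a no-op, or keeps steering the optimizer down a stale access path
-- that no longer fits the data.
SELECT /*+ INDEX(e emp_name_ix) */ e.*
FROM   employees e
WHERE  e.last_name = 'Smith';
```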

So be warned: there are almost always better solutions that could be used instead of hints.

Execution Plan Stability

Okay, so we all know that problem of SQL statements suddenly going AWOL and using a completely different execution plan from what the database has used in the past. Most of the time the database will take the right route by changing the execution plan - but, alas, not always. By using outlines you can force the Oracle database to use the same execution plan regardless of what has changed in the way of statistics or indexes. This entry neatly explains how outlines work and when you might want to use them.
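As a minimal sketch (the outline and category names here are invented), capturing and activating a stored outline looks like this:

```sql
-- Record the plan the optimizer produces right now for this statement.
CREATE OR REPLACE OUTLINE emp_lookup
FOR CATEGORY app_outlines
ON SELECT * FROM employees WHERE employee_id = :id;

-- Tell the session (or, via ALTER SYSTEM, the whole instance)
-- to use outlines from that category.
ALTER SESSION SET use_stored_outlines = app_outlines;
```

From then on, a matching statement reuses the recorded plan regardless of new statistics or indexes - which is exactly the stability, and the risk, that the entry describes.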

Performance Tuning - How not to do it!

These videos are really great for demonstrating why things often go wrong in a database. It's very easy to see how good technology can be used badly, with disastrous results.
The videos are also quite funny, so enjoy...

Part1...

Part2...

Part3...

Understanding the Cost-Based-Optimizer

This OTN article explains why it is sometimes difficult for people to grasp the paradigm of writing good SQL which the Cost-Based Optimizer (CBO) can break down effectively. Often SQL statements are still written with the Rule-Based Optimizer (RBO) in mind, and that makes life very difficult for the CBO. This article helps bring you up to date with what you should be thinking of when writing SQL statements for Oracle 10g.