Mike Ault's thoughts on various topics, Oracle related and not. Note: I reserve the right to delete comments that are not contributing to the overall theme of the BLOG or are insulting or demeaning to anyone. The posts on this blog are provided “as is” with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer.

Tuesday, February 28, 2012

Using KEEP and RECYCLE Pools

We are all familiar with using DB_CACHE_SIZE, either directly or by proxy through the SGA_* or MEMORY_* automatic memory management settings. We know that, generally speaking, adding memory to DB_CACHE_SIZE will improve performance until we have satisfied the memory requirements supporting single-block, recursive reads. What happens if we add in KEEP and RECYCLE areas?

To refresh your memories, the KEEP pool is a memory cache used to store object blocks that you don’t want aged out of the cache. The RECYCLE pool is a cache area designed to hold frequently changed blocks, or blocks that you don’t want to keep.

In order to effectively use the KEEP and RECYCLE pool areas, database objects such as tables, indexes and clusters must be assigned to the appropriate pool. The default, understandably, is the pool controlled by DB_CACHE_SIZE (or by the SGA_* or MEMORY_* settings). For these tests, the following objects in a TPC-C schema were assigned as shown:

Table: C_CUSTOMER
Table: C_DISTRICT
Table: C_HISTORY – RECYCLE (this table is partitioned)
Table: C_ITEM
Table: C_NEW_ORDER
Table: C_STOCK
Table: C_WAREHOUSE
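Pool assignment is done with the BUFFER_POOL attribute of an object's storage clause. A minimal sketch against the schema above (the choice of C_ITEM for the KEEP pool here is only an illustration; a partitioned table such as C_HISTORY takes the setting through its default attributes):

```sql
-- Pin a lookup table's blocks in the KEEP pool
ALTER TABLE c_item STORAGE (BUFFER_POOL KEEP);

-- Route the partitioned history table's blocks to the RECYCLE pool
ALTER TABLE c_history MODIFY DEFAULT ATTRIBUTES STORAGE (BUFFER_POOL RECYCLE);

-- Verify the current assignments
SELECT table_name, buffer_pool
  FROM user_tables
 WHERE table_name LIKE 'C\_%' ESCAPE '\';
```

Indexes take the same storage clause via ALTER INDEX, and the assignment only affects blocks read after the change.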

These assignments were made based on how each table was used and whether it was frequently changed. Using the cache advisors in the AWR reports from test runs, the sizes of the special caches were optimized. Tests were run with the caches turned on and turned off to determine what effects the caches had on performance. With the caches turned off, the memory they utilized was returned to the default cache. Using a maximum allocation of 40 GB for Oracle cache areas, the basic settings for the memory areas were:
WITH_K_R_2: (settings as advised by the cache advisors)
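The pools themselves are sized with initialization parameters along these lines. The values below are purely illustrative placeholders within a 40 GB budget; the actual test sizes came from the cache advisors:

```sql
-- Illustrative sizing only; derive real values from the cache advisors
ALTER SYSTEM SET db_cache_size         = 30G SCOPE = SPFILE;
ALTER SYSTEM SET db_keep_cache_size    = 8G  SCOPE = SPFILE;
ALTER SYSTEM SET db_recycle_cache_size = 2G  SCOPE = SPFILE;
```

Note that the KEEP and RECYCLE pools are sized explicitly; they are not auto-tuned the way the default pool is under automatic memory management.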

This figure shows the effects of using the KEEP and RECYCLE pools on performance.

Figure: TPS versus GB in the RECYCLE and KEEP caches

NOTE: You cannot use DB_KEEP_CACHE_SIZE or DB_RECYCLE_CACHE_SIZE together with the DB_FLASH_CACHE_SIZE or DB_FLASH_CACHE_FILE settings. If you set up both the KEEP/RECYCLE pools and the flash cache, the flash cache doesn’t appear to get used.

So, if you have memory available and are in a situation where db file sequential reads are generating waits, assigning appropriate tables and indexes to the KEEP pool can boost performance. If you have large tables from which you only want to use a working set and then discard it, the RECYCLE cache can help.
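To see whether the pools are earning their memory, the per-pool statistics can be checked directly. A quick sketch (requires SELECT privilege on the V$ views; the hit-ratio formula is the standard logical-versus-physical-read calculation, not anything specific to these tests):

```sql
-- Hit ratio per buffer pool (DEFAULT, KEEP, RECYCLE)
SELECT name,
       physical_reads,
       db_block_gets + consistent_gets AS logical_reads,
       ROUND(1 - physical_reads /
                 GREATEST(db_block_gets + consistent_gets, 1), 3) AS hit_ratio
  FROM v$buffer_pool_statistics;
```

A KEEP pool with a low hit ratio is usually undersized or holding the wrong objects; a RECYCLE pool is expected to show a low ratio by design.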

Friday, February 24, 2012

Don't Forget Optimizer Parameters!

One factor that dramatically affects the processing of Oracle queries is the setting of the OPTIMIZER_MODE initialization parameter. By default this parameter is set to ALL_ROWS, which attempts to optimize SQL statements to return all rows from the queries in the minimum amount of time. ALL_ROWS will choose full-scan options such as sort-merge joins and hash joins over paths that return initial rows the fastest, such as nested loops.

While ALL_ROWS may be a good setting for decision support or data warehouse databases, in online transaction processing (OLTP) systems most queries return very few rows, so one of the FIRST_ROWS_(n) values for OPTIMIZER_MODE should be used. The (n) settings are 1, 10, 100 and 1000.

In my testing using a TPC-C OLTP-type benchmark (using Benchmark Factory), varying the FIRST_ROWS setting from 100 down to 1 changed the results from 4500 up to 6900 tps, nearly a 53% improvement. From ALL_ROWS to a setting of FIRST_ROWS_1, performance improved from 4200 to 6900 tps, a 64% improvement. This shows that OPTIMIZER_MODE must be set correctly to get the best performance from your system. In this case the TPC-C is a pure OLTP workload, so a FIRST_ROWS_1 setting makes sense. The following graph shows the effects of switching the OPTIMIZER_MODE settings.

Figure : Effects of changing OPTIMIZER_MODE
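The parameter can be changed system-wide, per session (handy for testing before committing the change), or per statement with a hint. A sketch (the table and columns in the hinted query are hypothetical):

```sql
-- System-wide change
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS_1;

-- Per-session change, useful for trying a mode before rolling it out
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Per-statement override via hint (hypothetical table and column names)
SELECT /*+ FIRST_ROWS(1) */ c_id, c_last
  FROM c_customer
 WHERE c_w_id = :wid;
```
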

All of these results were generated using a single RamSan-630 for all database files. Using a 24-disk, 10K RPM disk array, the best result obtained was around 600 tps.

Wednesday, February 22, 2012


A recent problem with the phone company here in Georgia brought to mind that basics are called basics for a reason. I was traveling for a couple of days and the wife was staying with her mother while I was gone. I called to pick up any voice mails that might have been left as I was expecting some important calls. Imagine my shock when I received a message saying “We are sorry, this number has been disconnected” for both the home and home office numbers.

I called the technical support line and after nearly an hour (mostly on hold) I was told the issue would be resolved within 24 hours. Of course you know what happened: in 24 hours I was back on the phone with a different support person who told me the same thing, and again 24 hours after that. On the third call they said there was an order that no one could see that was preventing any further orders from being processed on the account.

In the old days we would have called this order that no one can see a hanging tuple or an orphaned record. This means a record was inserted into the ORDERS table without a corresponding entry in the CUSTOMER table (just taking a swag here). Now when someone tries to insert a new record into the ORDERS table the primary key (probably containing a composite of phone number and something else) keeps them from inserting a new value.
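Orphans like that are easy to flush out with an anti-join. A sketch, using the guessed schema from above (ORDERS, CUSTOMER and a phone_number key are all my suppositions about their system, not anything I have seen):

```sql
-- Find ORDERS rows with no matching CUSTOMER row (hypothetical schema)
SELECT o.*
  FROM orders o
 WHERE NOT EXISTS (SELECT 1
                     FROM customer c
                    WHERE c.phone_number = o.phone_number);
```

Run periodically, a query like this would have surfaced the "order no one can see" long before a customer did.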

Now, I happen to know this phone company is using Oracle as the support database that logs and tracks the support requests. In a properly designed database, referential integrity between CUSTOMER and ORDERS would prohibit an entry into the ORDERS table without a corresponding entry residing in the CUSTOMER table. That is, if the referential integrity is defined at the database level and enforced with optimized, internalized database triggers.
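Declaring that integrity at the database level is one statement. Again a sketch on the same hypothetical schema; a real design would key on whatever uniquely identifies the customer:

```sql
-- Database-enforced referential integrity (hypothetical schema)
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (phone_number)
  REFERENCES customer (phone_number);
```

With this in place, the orphan insert fails immediately with ORA-02291 instead of silently wedging the account.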

A few years ago, in tandem with the whole object-oriented design paradigm, tied in with the rapid design methodologies and the “Let’s make it generic so if we have to change databases we can” movement, referential integrity was moved into the application code. What this did was open the database to the possibility of entering bad data if you didn’t use the application, or if the application has an error. Now, this is all supposition on my part, but it appears either there is a glitch in the application logic (but why would it only affect one customer?) or someone did some manual inserts, updates or deletes on the database, bypassing the application logic.

This is what happens when you ignore basics and try to rewrite things into the application layer that should be left to the database engine. So now I am on a full week without home or home office phones. What is that little gizmo you can plug into the computer and get phone service through? I think I have their website here somewhere…