
Friday, March 02, 2012

Using Flash Cache

In this final test series we compare the best results from the memory-only, Keep and Recycle, and first-rows tests with the best result obtained with a flash cache configured. These tests compared a slightly faster server-mounted PCIe flash cache against a flash-based SAN, so the results will not be as dramatic as when a server-mounted PCIe flash cache is tested against disk-based storage.

The flash cache was sized at the suggested 2X the database cache size (90 GB), and then a run with the flash cache set to zero was performed. Note that for the first run the appropriate tables and indexes were assigned to be kept in the flash cache; the other tables were left at the default. Figure 1 shows the results from use of the Smart Flash Cache with flash as storage.
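
For readers who want to set up a similar configuration, the steps boil down to pointing the instance at a flash device, sizing the cache, and marking the hot segments as KEEP. The following is only a sketch: the device path is hypothetical and the TPCC segments are merely examples of kept objects, not the exact command set used in these tests.

-- Sketch only: configure the Smart Flash Cache at roughly 2X the buffer cache.
-- The device path is illustrative; a restart is needed when the cache is first enabled.
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash/cache_lun' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 90G SCOPE=SPFILE;

-- Mark the hot tables and indexes so their blocks are retained in the flash
-- cache; segments left at DEFAULT simply compete for the remaining space.
ALTER TABLE tpcc.c_order_line STORAGE (FLASH_CACHE KEEP);
ALTER INDEX tpcc.c_order_line_i1 STORAGE (FLASH_CACHE KEEP);

-- The zero-cache comparison run can be done by disabling the cache dynamically.
ALTER SYSTEM SET db_flash_cache_size = 0;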



Figure 1: TPS versus GB in the Flash cache

At least for our testing, with the database on a RamSan630 SSD and the flash cache placed on a RamSan70 PCIe card, the results do not encourage the use of the flash cache with a flash-based SAN. Review of the AWR results showed that the flash cache was indeed being used but, due to the small difference in overall latency between the RS630 with InfiniBand interfaces and the RS70 in the PCIe slot, the overall effect of the flash cache was negligible. The next figure shows the AWR Top 5 Timed Events listing both with and without the flash cache set.

AWR Results

Flash Cache set at 90 GB:
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                             25,339          53.7
log file sync                    11,775,317      13,146      1   27.8 Commit
db flash cache single block ph    3,991,869       3,745      1    7.9 User I/O
db file sequential read           6,192,796       3,588      1    7.6 User I/O
latch: cache buffers chains          169,292        251      1     .5 Concurrenc

Flash Cache set at 0 GB:
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                             26,414          53.9
log file sync                    12,139,355      14,138      1   28.9 Commit
db file sequential read          11,548,371       7,373      1   15.0 User I/O
enq: HW - contention                 23,317         859     37    1.8 Configurat
latch free                            7,922         402     51     .8 Other

Figure 2: AWR Results with and without Flash cache

Disk with Flash Cache
Since the test of the flash cache facility using an internal PCIe flash card in front of flash-based storage proved inconclusive, we decided to have the lab hook up some disks and re-run the tests using a disk array containing 24 10K-RPM 300 GB disks for the tables and indexes. The DB_CACHE_SIZE was increased to 50 GB and the DB_FLASH_CACHE_SIZE was set to 300 GB. Figure 3 shows the results for the disk array with and without a 300 GB flash cache.
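
For reference, the instance-level changes for this run amount to three parameters. The sketch below shows how they might be set (again, the flash device path is hypothetical), followed by a quick SQL*Plus check that the cache is in place.

-- Sketch of the parameter settings for the disk-plus-flash-cache run.
-- The device path is illustrative; a restart is required for these to take effect.
ALTER SYSTEM SET db_cache_size       = 50G  SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash/cache_lun' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 300G SCOPE=SPFILE;

-- After the restart, confirm the settings from SQL*Plus:
show parameter db_flash_cache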



Figure 3: Disk versus Disk plus Flash Cache Performance

As you can see from reviewing the graph, the flash cache definitely helped performance at all levels of our user range. It also showed that, with the same hardware, the sustained performance increase could be extrapolated to a larger number of users. So in the case of using a flash cache with disks, yes, performance is gained.

While running this test I saw indications that over 160 gigabytes of data blocks were cached in the flash cache. Figure 4 shows the SQL script used to determine flash usage for a single schema owner, and Figure 5 shows an example of its output during the test runs.

set lines 132 pages 55
col object format a45
-- For a given schema owner, count the buffers for each object in V$BH,
-- splitting flash cache buffers (status like 'flash%') from buffers held
-- in the normal database buffer cache.
select owner||'.'||object_name object,
       sum(case when b.status like 'flash%' then 1 else 0 end) flash_blocks,
       sum(case when b.status like 'flash%' then 0 else 1 end) cache_blocks,
       count(*) total_cached_blocks
  from v$bh b join dba_objects o
    on (b.objd = o.object_id)
 where owner = upper('&owner')
 group by owner, object_name
 order by owner, 4 asc;

Figure 4: SQL Script to see cached objects for an owner

OBJECT                    FLASH_BLOCKS CACHE_BLOCKS TOTAL_CACHED_BLOCKS
------------------------- ------------ ------------ -------------------
TPCC.C_CUSTOMER_I1               15249            0               15249
TPCC.C_STOCK_I1                  15863            0               15863
TPCC.C_NEW_ORDER_I1              15875        18108               33983
TPCC.C_ORDER_I1                  37838         6308               44146
TPCC.WARECLUSTER                 63562          450               64012
TPCC.DISTCLUSTER                 59511         4504               64015
TPCC.NORDCLUSTER_QUEUE           45764        56100              101864
TPCC.ORDR_UK                     94404        40801              135205
TPCC.C_ORDER                    123514        67081              190595
TPCC.C_CUSTOMER_I2              202994        51896              254890
TPCC.C_ORDER_LINE_I1            383833        26284              410117
TPCC.C_ORDER_LINE               873325        64108              937433
TPCC.ORDL_UK                   1073711        38760             1112471
TPCC.CUSTCLUSTER               1940874       124103             2064977
TPCC.STOKCLUSTER               5508278      3055117             8563395

Figure 5: Example use of Flash Cache
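
As a quick cross-check on the 160+ GB figure mentioned above, the flash-resident buffers can also be totaled directly from V$BH. This is a minimal sketch that assumes the default 8 KB block size; adjust the multiplier if the database uses a different block size.

-- Rough total of data currently held in the flash cache, assuming 8 KB blocks.
select round(count(*) * 8192 / 1024 / 1024 / 1024, 1) flash_cache_gb
  from v$bh
 where status like 'flash%';
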
Just to put things in perspective, let's compare the top pure-flash database results with these disk plus flash cache results. Look at Figure 6.



Figure 6: Flash only, Disk Only and Disk plus Flash Cache Results

In reviewing Figure 6 you should first note that it is a logarithmic plot, which means each division on the left axis represents a factor-of-10 change. The figure shows that pure flash far outperforms even the best we can expect from a combination of flash and disk, in this case by nearly a factor of 7. The peak performance we obtained from disk combined with a flash cache was 1024 TPS, while the peak we obtained in our flash tests (see the next section) was over 7000 TPS. Even in previous testing with larger disk arrays (90+ 10K drives), the peak performance I obtained from disk was only in the 2000 TPS range, again showing that SSD technology is superior to any equivalent disk array.
