Mike Ault's thoughts on various topics, Oracle related and not. Note: I reserve the right to delete comments that are not contributing to the overall theme of the BLOG or are insulting or demeaning to anyone. The posts on this blog are provided “as is” with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer.

Saturday, November 08, 2008

From Mike at 30,000 feet

Sorry I haven’t been blogging much the last few weeks. Between various conferences and presentations I have been on the road, and when not playing road warrior I have been configuring the lab to run a 300 gigabyte TPC-H. Unfortunately, during my first tries of the TPC-H protocols I tripped over a bug in Oracle 11g dealing with parallel query and partitioning, both of which are a requirement if you are to complete a 300 gigabyte TPC-H. Hopefully I will find a patch waiting once I get back to the office so I can really dig in and see what I can wrest from the RamSan 500, 400 and my 4 node configuration.

I have also been busy creating a demonstration database to show a side-by-side comparison of the differences in performance between disk and RamSan; after all, seeing is believing, as the old saw goes. I am writing this entry while flying over the mountains of Virginia and North Carolina on my way home from giving a one-day tuning seminar at Oracle's offices in Sterling, Virginia, under the auspices of the NatCap Oracle group. I leave again next week to give the same seminar to the Dallas Oracle User Group at the Oracle office in Plano, Texas, and from there go on to the Supercomputing conference in Austin, and then finally home again to Alpharetta for the Thanksgiving holiday.

I am also compiling a set of statistics to show the IOPS/gigabyte needed by Oracle databases of various sizes. For this task I will be gathering what historical data I have from past clients, and if any of you have that type of data for your systems I would love to have it to add to the mix. If I have time I will try to come up with a query that shows this for a system and will post it. I imagine it will involve a rather simple sum of gigabytes (maybe just the active ones from dba_segments) and a capture of read and write IOPS from the dynamic performance views. Here is a simple first cut; no doubt someone can make it simpler:

set serveroutput on
set feedback off
col tod new_value now noprint
select to_char(sysdate, 'ddmonyyyyhh24mi') tod from dual;
ttitle 'IOs Per Second'
col null noprint
select null from dual;
spool io_sec&&now
declare
  -- data-file (fixed) I/O counters, summed across all instances
  cursor get_io is
    select nvl(sum(phyrds+phywrts),0) from sys.gv_$filestat;
  -- temp-file I/O counters, queried separately to avoid a cross join
  cursor get_temp_io is
    select nvl(sum(phyrds+phywrts),0) from sys.gv_$tempstat;
  -- allocated space in gigabytes
  cursor get_gig is
    select sum(bytes)/(1024*1024*1024) from dba_segments;
  gig              number;
  elapsed_seconds  number;
  sum_io1          number;
  sum_io2          number;
  tot_io           number;
  tot_io_per_sec   number;
  fixed_io_per_sec number;
  temp_io_per_sec  number;
  iopsgig          number;
begin
  open get_io;      fetch get_io      into sum_io1; close get_io;
  open get_temp_io; fetch get_temp_io into sum_io2; close get_temp_io;
  open get_gig;     fetch get_gig     into gig;     close get_gig;
  tot_io := sum_io1 + sum_io2;
  -- seconds since the most recent instance startup
  select ceil((sysdate - max(startup_time))*(60*60*24))
    into elapsed_seconds
    from gv$instance;
  fixed_io_per_sec := sum_io1/elapsed_seconds;
  temp_io_per_sec  := sum_io2/elapsed_seconds;
  tot_io_per_sec   := tot_io/elapsed_seconds;
  iopsgig          := tot_io_per_sec/gig;
  dbms_output.put_line('Elapsed Sec   :'||to_char(elapsed_seconds, '9,999,999.99'));
  dbms_output.put_line('Fixed IO/SEC  :'||to_char(fixed_io_per_sec,'9,999,999.99'));
  dbms_output.put_line('Temp IO/SEC   :'||to_char(temp_io_per_sec, '9,999,999.99'));
  dbms_output.put_line('Total IO/SEC  :'||to_char(tot_io_per_sec,  '9,999,999.99'));
  dbms_output.put_line('Total Used Gig:'||to_char(gig,             '9,999,999.99'));
  dbms_output.put_line('Total IOPS/Gig:'||to_char(iopsgig,         '9,999,999.99'));
end;
/
spool off
ttitle off
set feedback on

Here is what the output should look like, and as you can see it generates a spooled report as well:

Fri Nov 07 page 1
IOs Per Second

Elapsed Sec : 132,900.00
Fixed IO/SEC : 3.93
Temp IO/SEC : .01
Total IO/SEC : 3.94
Total Used Gig: 1.34
Total IOPS/Gig: 2.94
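
The arithmetic behind those numbers is just counters divided by elapsed time, then by used space. Here is a quick cross-check of the sample report above; the raw counter values are assumptions I picked to roughly reproduce the figures shown, not data pulled from a live instance:

```python
# Sketch of the report arithmetic using made-up counters that
# approximately match the sample output above (assumed values).
elapsed_seconds = 132_900    # seconds since instance startup
fixed_io = 522_297           # phyrds + phywrts from gv$filestat (assumed)
temp_io = 1_329              # phyrds + phywrts from gv$tempstat (assumed)
used_gig = 1.34              # sum(bytes) from dba_segments, in gigabytes

fixed_io_per_sec = fixed_io / elapsed_seconds            # ~3.93
temp_io_per_sec = temp_io / elapsed_seconds              # ~0.01
total_io_per_sec = (fixed_io + temp_io) / elapsed_seconds
iops_per_gig = total_io_per_sec / used_gig               # ~2.94

print(f"Total IO/SEC  : {total_io_per_sec:.2f}")
print(f"Total IOPS/Gig: {iops_per_gig:.2f}")
```

Note the rates are averages over the whole uptime; a busy system with idle nights will show peak IOPS well above this figure.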

Well, I will sign off for now, hope to see some of you at the Dallas Bootcamp seminar and more at the Supercomputing conference.