
Tuesday, February 17, 2009

RMOUG Notes

Well, I am on my way home from the Rocky Mountain Oracle Users Group Training Days event. I presented a paper titled “Is Oracle Tuning Obsolete,” a copy of which can be found on the http://www.rmoug.org/ site or at http://www.superssd.com/. While I was there I attended two presentations on the Oracle/HP Exadata Database Machine, one by Kevin Closson and another by Tom Kyte, both of Oracle.

My only complaint about both presentations was that when they presented the user test results, they neglected to show the full (or even partial) configurations of the servers and disk systems they had tested against. Rather like saying my car is 10 times faster than Joe’s, telling you mine is a 1995 Dodge Avenger, and failing to mention Joe’s is a Stanley Steamer. Be that as it may, I still enjoyed the presentations, and the best takeaway was from Kevin’s presentation, when he said that “If your current system is fully tuned, has adequate disk resources, and is performing well, the Exadata has nothing to offer you.” An example from Kevin would be a 128-CPU Superdome with 128 4GFC HBAs fed by ample XP storage, which would be capable of ingesting roughly 51 GB/s. Also, during Tom’s presentation he admitted that the primary target of the Exadata is shops with row after row of Oracle servers followed by a single Netezza or Teradata server (or servers).
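If you want to check that ingest figure yourself, here is a quick back-of-the-envelope sketch in Python. The roughly 400 MB/s of usable payload per 4GFC HBA is my working assumption (the usual figure after 8b/10b encoding overhead), not a number from Kevin’s slides:

```python
# Sanity check of the ~51 GB/s ingest figure for 128 4GFC HBAs.
# Assumption (mine, not from the talk): a 4GFC HBA moves roughly
# 400 MB/s of usable payload per direction after encoding overhead.

HBA_COUNT = 128
MB_PER_SEC_PER_4GFC_HBA = 400  # approximate usable bandwidth per HBA

total_mb_per_sec = HBA_COUNT * MB_PER_SEC_PER_4GFC_HBA
print(f"Aggregate ingest: {total_mb_per_sec / 1000:.1f} GB/s")  # ~51.2 GB/s
```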

Essentially the Exadata Database Machine is targeted at larger (multi-terabyte) data warehouses that would otherwise be placed on a Netezza or Teradata machine, and I couldn’t agree more. However, it would be a fun test to replace the disks in an Exadata cell with a RamSan-500 and see what (if any) additional performance could be gained. After all, the disks are still the limiting factor in the performance of the system. For example, a single Exadata cell tops out at around 2,700 IOPS, according to white papers on the Oracle site; a single RamSan-500 can sustain 100,000 mixed read/write IOPS and 25,000 pure write IOPS with minimal response times. As far as I can tell, no additional smarts are built into the Exadata disk drives in the form of special firmware, such as is supposedly done with EMC systems, so replacing the drives with a single RamSan-500, set up either as 12 LUNs or as a single large LUN, should be easy.
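Just to put those quoted figures side by side, here is a trivial sketch using the numbers as cited above (from the white papers and our spec sheets, not my own measurements):

```python
# Rough comparison of the per-cell IOPS figures quoted above.
# These are the numbers as cited in the text, not benchmark results.

EXADATA_CELL_IOPS = 2_700        # per white papers on the Oracle site
RAMSAN_500_MIXED_IOPS = 100_000  # sustained mixed read/write
RAMSAN_500_WRITE_IOPS = 25_000   # sustained pure write

print(f"Mixed-workload headroom: ~{RAMSAN_500_MIXED_IOPS / EXADATA_CELL_IOPS:.0f}x")
print(f"Pure-write headroom:     ~{RAMSAN_500_WRITE_IOPS / EXADATA_CELL_IOPS:.0f}x")
```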

Another interesting discussion I had during this time frame was with our (Texas Memory Systems) own Matt Key, one of our Storage Applications Engineers, about why adding Enterprise Flash Drives (EFDs) to arrays produces little if any benefit for large levels of writes. It turns out there is an upper limit on the bandwidth a single disk tray can handle; with EFDs in place of disk drives, the tray tops out somewhere between 1,600 and 3,200 IOPS (based on a 64 KB stripe), so you actually need several trays (with a maximum of only four EFDs to a tray because of other limits) to get significant write IOPS. For comparison, the RamSan-500 can handle 25,000 sustained write IOPS. Now don’t get me wrong, the EFDs can improve the performance of certain types of loads when compared to a standard array with no EFDs, but if you are write-heavy you may wish to consider other technologies. Note: the calculations are based on 200 megabytes/second of FC-AL bandwidth with 64 KB writes; since RAID 6 is used, there are two 64 KB writes for each logical write, so 200 MB/s ÷ 64 KB = 3,200 IOPS and 200 MB/s ÷ 128 KB = 1,600 IOPS. These limitations apply to all array-based EFDs.
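For anyone who wants the tray-bandwidth arithmetic from that note spelled out, here is a minimal sketch; it simply restates the calculation above, assuming 200 MB/s of FC-AL loop bandwidth and 64 KB writes:

```python
# The FC-AL tray-bandwidth arithmetic from the note above.
# Assumes 200 MB/s of loop bandwidth and 64 KB writes; under RAID 6
# each logical write turns into two 64 KB transfers on the loop.

LOOP_BANDWIDTH_KB = 200 * 1024  # 200 MB/s expressed in KB/s
WRITE_SIZE_KB = 64

raw_iops = LOOP_BANDWIDTH_KB / WRITE_SIZE_KB          # 3,200 IOPS
raid6_iops = LOOP_BANDWIDTH_KB / (2 * WRITE_SIZE_KB)  # 1,600 IOPS
print(f"Loop limit, raw 64K writes: {raw_iops:,.0f} IOPS")
print(f"Loop limit, RAID 6 doubled: {raid6_iops:,.0f} IOPS")
```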

The RamSan-500 makes an excellent complement to any enterprise array, especially if you use preferred-read technology to read from the RamSan-500 while writing to both, for example, when you are using array-based replication, such as SRDF, to provide geo-mirroring of the frame to a remote site. By offloading the reads, the number of writes the array can support increases in proportion to the read percentage of the workload, thus increasing the performance of the entire system. As an example, if you have an 80/20 read/write workload and you offload the 80 percent of reads to the RamSan, this frees up the array to handle a factor of 4 more writes, up to the actual maximum IOPS of the array. This is a 4X increase in I/O with zero impact to infrastructure or BCVs.
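Here is one way to read that 80/20 arithmetic, as a minimal sketch; the 10,000 IOPS ceiling is an illustrative number I picked, not a spec for any particular array:

```python
# The 80/20 preferred-read offload arithmetic, spelled out. Assume the
# array sustains a fixed IOPS ceiling; before offload, only the write
# fraction of that ceiling is available for writes.

ARRAY_MAX_IOPS = 10_000  # illustrative ceiling, not a real array spec
READ_FRACTION = 0.80

writes_before = ARRAY_MAX_IOPS * (1 - READ_FRACTION)  # 2,000 write IOPS
writes_after = ARRAY_MAX_IOPS                         # reads now go to the RamSan
increase = (writes_after - writes_before) / writes_before
print(f"Write capacity grows from {writes_before:,.0f} to {writes_after:,.0f} IOPS")
print(f"That is a {increase:.0f}X increase")  # 4X, matching the text
```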

Oh, on February 24-25 I’ll be in Charlotte, NC presenting at the Southeast Oracle Users Convention (SEOUC). My two presentations are “My Ideal Data Warehouse System” and “Going Solid: Use of Tier Zero Storage in Oracle Databases.” I hope to see you there!

As I digest more of the information I obtained this week, I will try to write more blog entries. So for now I will sign off. Good bye from 37,000 feet over Colorado!

Mike
