Mike Ault's thoughts on various topics, Oracle related and not. Note: I reserve the right to delete comments that are not contributing to the overall theme of the BLOG or are insulting or demeaning to anyone. The posts on this blog are provided “as is” with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer.

Wednesday, July 02, 2008

Lies, Damn Lies and SSD Technology

As I am now employed by a company that manufactures solid state drives for databases and other applications that require high-speed, high-bandwidth access to data, I have taken to reading articles, blogs and other sources whenever I see them discussing SSD-related topics. Through this practice I have found that there are woefully uninformed folks making a great many untrue statements about solid state drive technology.

Let’s look at some of the highlights:

1. Solid state drive technology is very expensive
2. Solid state devices are best when directly attached to the internal bus architecture
3. Solid state drives will only be niche players
4. You can get the same IO rate from disks as from SSD

First, the myth that solid state drives are expensive was, like many myths involving Oracle and computers, true at one time; however, times change. The huge leap in demand for flash memory with the advent of iPods, digital cameras and video recorders has driven mass production and created a glut of memory on the market. You can get a 4 gigabyte flash memory stick or card for your camera or other flash device for under a hundred dollars. In fact, memory prices promise to plunge even further as mass production techniques and miniaturization technology improve. The cost of a gigabyte of enterprise-class disk storage is around $84 at last count; for the most current version of the Texas Memory Systems RAMSAN SSD technology, which uses both flash memory and regular memory, the cost is around $100 per gigabyte. With further decreases in memory costs, RAMSAN SSD prices will fall even further.
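As a rough illustration of how close those per-gigabyte figures already are, here is a small back-of-the-envelope sketch in Python. The $84/GB and $100/GB numbers are the ones quoted above; the 1 TB capacity is simply an example of my own.

# Back-of-the-envelope cost comparison using the per-gigabyte figures quoted above.
ENTERPRISE_DISK_COST_PER_GB = 84.0   # enterprise-class disk storage, $/GB
RAMSAN_SSD_COST_PER_GB = 100.0       # RAMSAN SSD (flash plus RAM), $/GB

capacity_gb = 1024                   # example working set: 1 TB

disk_cost = capacity_gb * ENTERPRISE_DISK_COST_PER_GB
ssd_cost = capacity_gb * RAMSAN_SSD_COST_PER_GB

print(f"Disk: ${disk_cost:,.0f}   SSD: ${ssd_cost:,.0f}")
print(f"SSD premium: {100 * (ssd_cost / disk_cost - 1):.0f}%")   # about 19%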

Second, in a recent article a producer of both disk and solid state technology seemed to indicate that SSD works best when hooked directly into the computer's internal bus and really isn't efficient when attached the way a SAN would be attached. I am not sure where he is getting his information (other than that his company is trying to shoe-horn solid state drive technology into its existing SAN infrastructure), but it has been my experience that users rarely, if ever, flood the fibre channels; they may overload a couple of the disk drives, but the SAN connections are generally not the source of the bottleneck when it comes to SAN technology. Using standard fibre channel connections and standard host bus adapters, Texas Memory Systems achieves over 400,000 IOPS from a single 4U RAMSAN SSD. To get the equivalent IOPS using regular disk technology you would need 6,000 or more individual disk drives, the racks to hold them and the controllers to control them, not to mention the air conditioning and electrical power needed for that many disks.
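To see why the fibre channel links themselves are rarely the bottleneck, here is an illustrative calculation. The 4 Gb/s link rate, the roughly 400 MB/s of usable payload per direction and the 8 KB transfer size are my own assumptions for the example; the ~130 random IOPS per 15K drive is the figure used later in this post.

# Illustrative: how many random-I/O disks it takes to saturate one fibre channel link.
# Assumptions (mine): a 4 Gb/s FC link carries roughly 400 MB/s of payload per direction,
# and each I/O transfers 8 KB.
FC_PAYLOAD_MB_PER_SEC = 400
IO_SIZE_KB = 8
DISK_RANDOM_IOPS = 130   # ~15K RPM drive, random reads

link_iops_ceiling = (FC_PAYLOAD_MB_PER_SEC * 1024) // IO_SIZE_KB
disks_to_saturate = link_iops_ceiling / DISK_RANDOM_IOPS

print(f"One 4Gb FC link can carry about {link_iops_ceiling:,} 8KB IOs per second")
print(f"Roughly {disks_to_saturate:.0f} disks doing random I/O are needed to fill it")

In other words, a single link has IOPS headroom for hundreds of drives' worth of random I/O; the drives run out of mechanical capability long before the channel runs out of bandwidth.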

Next, the claim that solid state drives will only be niche players: this is a ridiculous statement. Most clients of RAMSAN SSD technology use them just as they would disk arrays. RAMSAN SSD technology will replace disks as we know them in the near future, and disks will be relegated to second-tier storage and backup duties, replacing tapes. Many experts are talking about a tier 0 level of storage and specifically mention SSD when they do so. When you can place a single 4U RAMSAN SSD into your system and replace literally hundreds or thousands of disks, the idea that they will only be niche devices is foolish. This is especially true when you consider the decreasing costs, the ease of administration and the performance gains you get when SSD technology is properly deployed.

Finally, there is the myth that with disks you can get the same IOPS as with SSD. Yes, you can; however, you would need X / (IOPS per disk) disks, where X is the desired IOPS, and double that number for RAID 10 or RAID 01. Even high-speed 15K RPM drives can only deliver around 100 to 130 random-read IOPS because of the mechanical nature of disk drives; as the late Scotty of the Federation Starship Enterprise used to say (about every other episode), "Ya cannot change the laws of physics." Disks, without prohibitive cooling technologies, cannot exceed certain maximum rotational speeds, read heads mounted on mechanical arms can only move so fast, and the magnetic traces can only be packed so close on the disk surface. To get 400,000 IOPS you would need at least (400,000 / 130) * 2 = 6,154 drives in a RAID 10 array. At 18 drives per tray that is 342 trays of disk drives, and at 8 trays per rack that is almost 43 racks needed to hold the drives. Now, even with the largest caches available, each IO still requires anywhere from a millisecond to several milliseconds (up to 5 with minimal loads, more with large loads or multi-block reads); that latency will always be there in a disk-based system. The latency on SSD-based systems such as the RAMSAN is measured in microseconds, a small fraction of a millisecond.
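The arithmetic in the preceding paragraph can be sketched as follows; the inputs (the 400,000 IOPS target, ~130 random IOPS per 15K drive, mirroring for RAID 10, 18 drives per tray and 8 trays per rack) are the figures used above.

import math

TARGET_IOPS = 400_000     # IOPS of a single 4U RAMSAN, as quoted above
IOPS_PER_DISK = 130       # random-read IOPS for a 15K RPM drive
RAID10_FACTOR = 2         # mirroring doubles the spindle count
DRIVES_PER_TRAY = 18
TRAYS_PER_RACK = 8

drives = math.ceil(TARGET_IOPS / IOPS_PER_DISK) * RAID10_FACTOR
trays = math.ceil(drives / DRIVES_PER_TRAY)
racks = trays / TRAYS_PER_RACK

print(f"Drives needed (RAID 10): {drives:,}")    # 6,154
print(f"Trays needed:            {trays}")       # 342
print(f"Racks needed:            {racks:.2f}")   # 42.75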

So, what have we determined? We have found that SSD technology is comparable in cost to enterprise-level disk systems and will soon beat the cost of enterprise-level disk systems. We have also seen that SSD technology, when properly designed and implemented (not shoe-horned into a disk-based SAN), will fulfill the promise of fibre technology and allow use of the bandwidth currently squandered by disk technology. We have also seen that far from being a niche technology, SSD is becoming the tier 0 storage for many companies and will soon supplant disks as the primary storage medium in many applications. Finally, while it is possible to achieve the same level of IOPS using disk technology that SSD technology provides, it would be cost prohibitive to do so, and even if you did achieve the same level of IOPS, each IO would still be subject to the same disk-based latencies.

I am not afraid to say it: SSD technology is here, it is ready for prime time and it is only a matter of time before disks are relegated to second tier storage. Disks are dead, they just don’t know it yet.

8 comments:

Brent Ozar said...

How do the power requirements and density compare with enterprise drives? For example, how do the SSDs compare with the typical 146-300 GB drives on a power and space basis? I'm not asking because I know the answer, either - I have no clue, but I'm betting you would be in a position to find out.

Miguel said...

Hi Mike,

Very insightful commentary on the industry... well said. I just had to comment because the phrase you referenced in your title, "Lies, damned lies, and statistics," is actually attributed (by some) to our CEO's grandfather, Holloway Halstead Frost. Maybe you already knew this, but I thought the coincidence was classic!

Michael
Sr. Software Engineer
Texas Memory Systems

Mike said...

For the 26-drive set of 10K RPM arrays I have (one 8-disk and one 18-disk) I use 1,650 watts, which is about 63 watts per disk. At 6,000 drives (to support the redundancy and the IOPS) that works out to roughly 380 kilowatts of electricity. The one-terabyte 4U rack-mount RAMSAN SSD runs about 350-400 watts, so doubling it to provide mirroring gives 700-800 watts. So less than 1 kW versus roughly 380 kW.
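Sketching that power arithmetic (the 1,650 W / 26-drive figure and the 350-400 W RAMSAN figure are the ones in the comment above; the 6,000-drive count comes from the IOPS comparison in the post):

ARRAY_WATTS = 1650
ARRAY_DRIVES = 26                   # one 8-disk and one 18-disk 10K RPM array
watts_per_disk = ARRAY_WATTS / ARRAY_DRIVES   # ~63 W per disk

DISKS_FOR_EQUIVALENT_IOPS = 6000    # from the IOPS comparison in the post
disk_kw = DISKS_FOR_EQUIVALENT_IOPS * watts_per_disk / 1000

ramsan_mirrored_watts = 2 * 400     # 350-400 W per unit, doubled for mirroring

print(f"Per-disk power:        {watts_per_disk:.0f} W")
print(f"6,000-disk solution:   {disk_kw:.0f} kW")          # ~381 kW
print(f"Mirrored RAMSAN pair:  {ramsan_mirrored_watts} W")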

Mike said...

Miguel,

No, I didn't realize that, and in high school I read that book!

Mike

Mike said...

Actually, the new RAMSAN 500 gives 2 terabytes in a 4U unit and consumes 300 watts. So to get equivalent IOPS you would need to use over 72 times the amount of power with disk drives, not to mention the cooling and floor space needs. Imagine a complete data warehouse in one rack using several 2-terabyte RAMSANs and a set of blade servers.

LarchOye said...

I apologize ahead of time for sounding completely uninformed, but um

2 TB of storage takes up 4U of rack space?!

I was under the impression that SSDs were significantly smaller than regular hard disks... and um, the article actually mentions this at the very beginning:

"E-Disk® Altima™ 4Gb Fibre Channel 3.5" Solid State Drive
* Up to 640GB of storage per disk on 1" drives.
* 1.6TB on 3.5" drives
* 800 MB/sec Full Duplex Burst RateSCSI"

Mike said...

If you just want a pile of memory chips on a board, yes, you can make it pretty small; if you just want a couple of flash chips, yes, that can be pretty small. But when you want internal RAID, automatic backups, redundant power supplies, and battery backup with an onboard disk for added redundancy, well, it gets a bit bigger! This is enterprise-level storage, not a gimmick for PCs.

Brent Ozar said...

Congrats on winning the SQL Server Magazine gold award for storage!