Mike Ault's thoughts on various topics, Oracle related and not. Note: I reserve the right to delete comments that are not contributing to the overall theme of the BLOG or are insulting or demeaning to anyone. The posts on this blog are provided “as is” with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer.
Wednesday, July 02, 2008
Lies, Damn Lies and SSD Technology
Let’s look at some of the highlights:
1. Solid state drive technology is very expensive
2. Solid state devices are best when directly attached to the internal bus architecture
3. Solid state drives will only be niche players
4. You can get the same IO rate from disks as from SSD
First, the myth that solid state drives are expensive was, like many myths involving Oracle and computers, true at one time; however, times change. The huge leap in demand for flash memory with the advent of iPods, digital cameras and video recorders has turned flash memory into a commodity. You can get a 4 gigabyte flash memory stick or card for under a hundred dollars for your camera or other flash device, and memory prices promise to plunge even further as mass production techniques and miniaturization technology improve. The cost of a gigabyte of enterprise class disk storage is around $84 at last count; for the most current version of the Texas Memory Systems RAMSAN SSD technology, using flash memory and regular memory, the cost is around $100 per gigabyte. With further decreases in memory costs, RAMSAN SSD prices will fall further still.
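As a quick sanity check, the per-gigabyte figures above can be plugged into a few lines of Python; the 1,000 GB capacity is an illustrative assumption, not a number from the post:

```python
# Per-gigabyte costs quoted above (approximate, circa 2008).
DISK_COST_PER_GB = 84    # enterprise class disk storage
SSD_COST_PER_GB = 100    # Texas Memory Systems RAMSAN SSD

capacity_gb = 1_000      # illustrative capacity, not from the post

disk_total = capacity_gb * DISK_COST_PER_GB
ssd_total = capacity_gb * SSD_COST_PER_GB
premium = (ssd_total - disk_total) / disk_total

print(f"disk ${disk_total:,}, ssd ${ssd_total:,}, SSD premium {premium:.0%}")
# disk $84,000, ssd $100,000, SSD premium 19%
```

At these prices the SSD premium is under 20 percent of the raw disk cost, before counting racks, power and cooling for the disk side.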
Second, in a recent article a producer of both disk and solid state technology seemed to indicate that SSD works best when hooked directly into the computer's internal bus and isn't efficient when attached the way a SAN is attached. I am not sure where he is getting his information (other than that his company is trying to shoe-horn solid state drive technology into its existing SAN infrastructure), but it has been my experience that users rarely, if ever, flood the fibre channels; they may overload a couple of the disk drives, but the SAN connections are generally not the source of the bottleneck when it comes to SAN technology. Using standard fibre channel connections and standard host bus adapters, Texas Memory Systems achieves over 400,000 IOPS from a single 4U RAMSAN SSD. To get the equivalent IOPS using regular disk technology you would need more than 6,000 individual disk drives, the racks to hold them and the controllers to control them, not to mention the air conditioning and electrical power needed for that many disks.
Next, the claim that solid state drives will only be niche players. This is a ridiculous statement. Most clients of RAMSAN SSD technology use them just as they would disk arrays. SSD technology will replace disks as we know them in the near future, relegating disks to second tier storage and backup duties, where they in turn will replace tapes. Many experts are talking about a tier 0 level of storage and specifically mention SSD when they do so. When you can place a single 4U RAMSAN SSD into your system and replace literally hundreds or thousands of disks, the idea that SSDs will only be niche devices is foolish. This is especially true when you consider the decreasing costs, the ease of administration and the performance gains you get when SSD technology is properly deployed.
Finally, the myth that with disks you can get the same IOPS as with SSD. Yes, you can; however, you would need X/(IOPS per disk) disks, where X is the desired IOPS, and double that number for RAID10 or RAID01. Even high speed 15K drives can only deliver around 100 to 130 random-read IOPS, due to the mechanical nature of disk drives; as the late Scotty on the Federation Starship Enterprise used to say (about every other episode): "Ya cannot change the laws of physics." Disks, without prohibitive cooling technologies, cannot exceed certain maximum rotational speeds; read heads mounted on mechanical arms can only move so fast; and the magnetic traces can only be packed so close on the disk surface. To get 400,000 IOPS you would need at least (400,000/130)*2 = 6,154 drives in a RAID10 array. At 18 drives per tray that is 342 trays of disk drives, and at 8 trays per rack that is 43 racks needed to hold the drives. Now, even with the largest caches available, each IO still takes anywhere from one millisecond to several milliseconds (up to 5 with minimal loads, higher with large loads or more than single block reads). This latency will always be there in a disk based system, while the latency on SSD based systems such as the RAMSAN is measured in microseconds, a fraction of a millisecond.
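The drive-count arithmetic above can be sketched in a few lines of Python; the IOPS-per-drive, drives-per-tray and trays-per-rack figures are the assumptions used in the text, and the totals round up because you cannot buy a fraction of a drive:

```python
import math

# Assumptions from the text above.
TARGET_IOPS = 400_000     # a single 4U RAMSAN SSD
IOPS_PER_DISK = 130       # best case random-read IOPS for a 15K drive
RAID_FACTOR = 2           # RAID10/RAID01 mirrors every data drive
DRIVES_PER_TRAY = 18
TRAYS_PER_RACK = 8

drives = math.ceil(TARGET_IOPS / IOPS_PER_DISK) * RAID_FACTOR
trays = math.ceil(drives / DRIVES_PER_TRAY)
racks = math.ceil(trays / TRAYS_PER_RACK)

print(f"{drives} drives, {trays} trays, {racks} racks")
# 6154 drives, 342 trays, 43 racks
```

Change IOPS_PER_DISK to 100, the low end of the range quoted above, and the totals climb to 8,000 drives and 56 racks.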
So, what have we determined? We have found that SSD technology is comparable in cost to enterprise level disk systems and will soon beat them on cost. We have also seen that SSD technology, when properly designed and implemented (not shoe-horned into a disk-based SAN), will fulfill the promise of fibre technology and allow use of the bandwidth currently squandered by disk technology. We have also seen that far from being a niche technology, SSD is becoming the tier 0 storage for many companies and will soon supplant disks as the primary storage medium in many applications. Finally, while it is possible to achieve the same level of IOPS using disk technology that SSD technology provides, it would be cost prohibitive to do so, and even if you did achieve the same level of IOPS, each IO would still be subject to the same disk based latencies.
I am not afraid to say it: SSD technology is here, it is ready for prime time and it is only a matter of time before disks are relegated to second tier storage. Disks are dead, they just don’t know it yet.