I am often asked, “Should I buy solid state devices for my system?” and each time I have to answer, “It depends.” Of course the conversation evolves beyond that point into the particulars of the system and how its existing IO storage subsystem is currently being used. The underlying question is still valid, though: do you need SSD technology in your IO subsystem? Let’s take a look.
SSD Advantages
SSD technology has one big advantage over a typical hard disk based storage array: SSD does not depend on physical movement to retrieve data. That independence from physical movement means the latency of each data retrieval operation drops significantly, usually by a factor of 10 (for Flash-based technology) to over 100 (for RAM-DDR-based technology). Of course, cost increases as latency decreases, with Flash running about a quarter of the cost of RAM-DDR technology.
SSD Costs
Flash and DDR-based SSD technology are usually on a par with, or can be cheaper than, IO-equivalent SAN-based technology. Because of the much lower latency of SSD technology you can get many more input-output operations per second (IOPS) from it than from a hard disk drive system. For example, from the “slow” Flash-based technology you can get 100,000 IOPS with an average latency of 0.20 milliseconds worst case. From the fastest DDR-based technology you can achieve 600,000 IOPS with a latency of 0.015 milliseconds.
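To see where the factor-of-10-to-100 claim comes from, take a nominal 4 millisecond random read for a 15K rpm disk (an assumed mid-range figure; the actual 2-5 millisecond range is discussed in the next paragraph):

\[
\frac{4\ \text{ms (disk)}}{0.20\ \text{ms (Flash)}} = 20
\qquad
\frac{4\ \text{ms (disk)}}{0.015\ \text{ms (DDR)}} \approx 267
\]

roughly one and two orders of magnitude, respectively.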
To achieve 100,000 IOPS from hard drive technology you would need around 500 or more 15K rpm disks at between 2 and 5 milliseconds latency, since random IO peaks at around 200 IOPS per disk drive regardless of storage capacity. A 450 gigabyte 15K rpm disk drive may have four or more individual platters with a read-write head for each surface; however, those read-write heads are mounted on a single armature and are not capable of independent positioning. This limits the latency and IOPS to that of a single disk platter with two heads, so a 146 gigabyte 15K rpm drive will have the same IOPS and latency as a 450 gigabyte 15K rpm drive from the same manufacturer (http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_6.pdf).
Given that IOPS and latency are the same regardless of capacity across the 146 to 450 gigabyte range of drive sizes, why not pick the smallest drive and save money? The reason is that to get the best latency you must be careful not to fill each of the platters in the drive (from 2 to 4 of them) beyond about 30%, hence the need for so many drives. So to get high performance from your disk based IO subsystem you have to throw away 60-70% of your storage capacity!
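To make the arithmetic explicit (a back-of-the-envelope sketch using the 146 gigabyte drive and the figures above):

\[
\text{drives required} = \frac{100{,}000\ \text{IOPS}}{200\ \text{IOPS per drive}} = 500
\]
\[
\text{capacity purchased} = 500 \times 146\ \text{GB} \approx 73\ \text{TB},
\qquad
\text{capacity usable at 30\% fill} \approx 21.9\ \text{TB}
\]

In other words, roughly 70% of what you paid for sits empty purely to keep latency down.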
Do I Need That Many IOPS?
Many critics of SSD technology state that most systems will never need 100,000 IOPS, and in many cases they are correct. However, in testing a 300 gigabyte TPC-H (data warehouse) type load on SSD I was able to reach peaks of over 100,000 IOPS using a simple 4 node Oracle 11g Real Application Clusters setup. Since many systems are considerably larger than 300 gigabytes and have more users than the 8 with which I reached 100,000 IOPS, it is not hard to see that, given an IO subsystem capable of 100,000 IOPS, many current databases would easily consume that much throughput. It must also be realized that the TPC-H system I was testing used highly optimized indexing, partitioning, and parallel query technology; eliminate any of these capabilities and the IOPS required increases, sometimes dramatically.
Questions to Ask
So now we reach the heart of the matter: do you need SSD for your system? The answer depends on several questions that only you can answer:
1. Is my performance satisfactory? If yes, why am I asking about SSD?
2. Have I maximized use of the memory and optimization technologies built into my database system? If not, I should do this first.
3. Has my disk IO subsystem been optimized? (Enough disks and HBAs?)
4. Is my system spending an inordinate amount of time waiting on the IO subsystem?
If the answer to question 1 is no, and the answers to questions 2, 3 and 4 are yes, then you are probably a candidate for SSD technology. Don’t get me wrong: given the choice I would skip disk based systems altogether and use 100% SSD in any system I bought. However, you are probably locked into a disk-based setup with your existing system until you can prove it doesn’t deliver the needed performance. Let’s look closer at the four questions.
Question 1 is sometimes hard to answer quantitatively. Usually the answer is more of a gut reaction than anything that can be put on paper; the users of the system can generally tell you whether it is as fast as they need it to be. Another consideration for question 1: performance may be fine now, but what happens if you grow by 25-50%? If your latency averages 3-5 milliseconds now, adding more load may drive it much higher.
Question 2 will require analysis of how you are currently configured. An example from Oracle: waits on db file sequential read can indicate that not enough memory has been allocated to cache the data blocks retrieved via index reads. So even if the indexes are cached, the data blocks are not and must be read into the cache on each operation. A sequential read is an index-based read followed by a data table read and usually should be satisfied from cache if there is sufficient memory. Another Oracle wait, db file scattered read, indicates full table scans are occurring; usually full table scans can be mitigated by use of indexes or partitioning. If you have verified your memory is being used properly (perhaps everything that can be allocated has been) and you have utilized the proper database technologies and performance is still bad, then it is time to consider SSD.
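As a minimal sketch of how to check these two waits yourself (V$SYSTEM_EVENT is a standard Oracle dynamic performance view; TIME_WAITED is reported in centiseconds, hence the conversion below):

```sql
-- Summarize the two IO wait events discussed above.
-- TIME_WAITED is in centiseconds; multiply by 10 to get milliseconds.
SELECT event,
       total_waits,
       time_waited * 10 AS time_waited_ms,
       ROUND(time_waited * 10 / NULLIF(total_waits, 0), 2) AS avg_wait_ms
  FROM v$system_event
 WHERE event IN ('db file sequential read', 'db file scattered read')
 ORDER BY time_waited DESC;
```

Large cumulative times on either event are the starting point for the memory and indexing analysis described above.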
A key source of wait information is of course the Statspack or AWR report for Oracle based systems. An additional benefit of the Statspack and AWR reports is that both can contain a cache advisory sub-section that shows whether adding memory will help your system. By examining waits and looking at the cache advisory section of the report you can quickly determine whether more memory will help your performance. Another source of information about the database cache is the V$BH dynamic performance view, which contains an entry for every block in the cache; with a little SQL against the view you can easily determine whether any free buffers remain or whether you have used all available memory and need more. Of course, use of the automated memory management features in 10g and 11g limits the usefulness of the V$BH view. The Oracle Grid Control and Database Control interfaces (provided you have the proper licenses) also give performance advisories that will tell you when you need more memory. And if you have already maximized the size of your physical memory, most of this is moot.
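For instance, that “little SQL against the view” might look like this (a sketch; buffers with STATUS = 'free' are not currently holding data, so seeing few or none of them suggests the cache is fully used):

```sql
-- Count buffers in the cache by state; few or no 'free' buffers
-- means all available buffers are in use.
SELECT status, COUNT(*) AS buffers
  FROM v$bh
 GROUP BY status
 ORDER BY buffers DESC;
```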
Question 3 may have you scratching your head. Essentially, if your disk IO subsystem has reached its lowest latency and the number of IO channels (as determined by the number and type of host bus adapters) is such that no channel is saturated, then your disk-based system is optimized. Usually this shows up as latency in the 3-5 millisecond range combined with persistently high IO waits and low CPU usage.
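One way to estimate that latency from inside Oracle is to look at per-datafile read times (a sketch; READTIM in V$FILESTAT is cumulative read time in centiseconds):

```sql
-- Average read latency per datafile, in milliseconds.
SELECT d.name,
       f.phyrds,
       ROUND(f.readtim * 10 / NULLIF(f.phyrds, 0), 2) AS avg_read_ms
  FROM v$filestat f
  JOIN v$datafile d ON d.file# = f.file#
 ORDER BY avg_read_ms DESC;
```

If the busiest files already sit at the bottom of the 3-5 millisecond range, the disks have little more to give.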
Question 4 means you look at your system CPU statistics and see that your CPUs are underutilized while IO waits are high, indicating the system is waiting on IO to complete before it can continue processing. A Unix or Linux based system in this condition may show high values for the run queue even when the CPU is idle.
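On 10g and later you can get a rough view of the same thing from inside the database (a sketch; V$OSSTAT values are cumulative centiseconds since instance startup, and IOWAIT_TIME is only populated on platforms, such as Linux, that report it):

```sql
-- OS-level CPU busy, idle, and IO wait time as seen by Oracle.
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('BUSY_TIME', 'IDLE_TIME', 'IOWAIT_TIME');
```

A large IOWAIT_TIME relative to BUSY_TIME tells the same story as the OS statistics.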
SSD Criticisms
Other critics of SSD technology cite problems with reliability and possible loss of data with Flash and DDR technologies. For some forms of Flash and DDR they are correct: if the Flash isn’t properly wear-leveled, or the DDR is not properly backed up, data can be lost. However, as long as the Flash technology utilizes proper wear leveling and the RAM-DDR system uses proper battery backup with permanent storage on either a Flash or hard disk based subsystem, those complaints are groundless.
The final criticism of SSD technology is usually that the price is still too high compared to disks. Looking at an advertisement from a local computer store, I see a terabyte disk drive for $99.00; it is hard for SSD to compete with that low base cost. Of course, I can’t run a database on a single disk drive. Given our 300 gigabyte system, I was hard-pressed to get reasonable performance placing it on 28 15K rpm high performance disk drives; most shops would use over 100 drives to get acceptable performance. So in a single-disk-to-SSD comparison, yes, cost would appear to be an issue; however, you must look at other aspects of the technology. To achieve high performance, most disk-based systems utilize specialized controllers and caching technology and spread IO across as many disk drives as possible, short-stroking each drive so that only 20-30% of its capacity is ever actually used. The disks are rarely used individually; instead they are placed in a RAID array (usually RAID 5, RAID 10, or some more exotic RAID technology). Once the cost of additional cabinets, controllers, and other support technology is added to the base cost of the disks, not to mention any additional firmware costs added by an OEM, the costs soon level out between SSD and standard hard drive SAN systems.
In a review of benchmark results, the usual ratio between the capacity purchased and the capacity actually needed to achieve performance is 40-50 to 1, meaning our 300 gigabyte TPC-H system would require at least 12 terabytes of storage, spread over 200 or more disk drives, to provide adequate performance. By contrast, an SSD based system would only need a ratio of about 2 to 1 (to allow for the indexes and support files).
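Reproducing that arithmetic (using the 40-to-1 end of the range; the per-drive usable figure of roughly 60 gigabytes is implied by the totals rather than stated):

\[
300\ \text{GB} \times 40 = 12\ \text{TB},
\qquad
\frac{12\ \text{TB}}{200\ \text{drives}} = 60\ \text{GB usable per drive}
\]

versus, for SSD,

\[
300\ \text{GB} \times 2 = 600\ \text{GB total}.
\]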
In addition to the base equipment costs, most disk arrays consume a large amount of electricity, which in turn results in larger heat loads for your computer center. In many cases SSD technology consumes only a fraction of the energy and cooling of a regular disk based system, providing substantial electrical and cooling cost savings over its lifetime. SSD by its very nature is green technology.
When Doesn’t SSD Help?
SSD technology will not help CPU bound systems. In fact, SSD may increase the load on overworked CPUs by reducing IO-based waits. Therefore it is better to resolve any CPU loading issues before considering a move to SSD technology.
In Summary
The basic rule for determining whether your system would benefit from SSD technology is this: if your system is primarily waiting on IO, then SSD technology will help mitigate that wait.