So gas has nearly doubled in price in less than a year. It makes me wonder how much we have raised the price of the food we sell overseas. I believe we need to tie the cost of a bushel of wheat, corn, or other foodstuffs directly to the cost of a barrel of oil from the particular country we buy the oil from. Gas goes up 10%, their cost for food goes up 10%.
It would be an interesting study to look at the average cost of a barrel of oil and tie it to the average cost at the pump. Maybe we should look closer to home for some of the problems as well: make the cost of a gallon of gas a fixed ratio of the cost of a barrel of oil.
Let’s do that with all medical supplies as well. It is pretty bad when folks in countries hostile to us can often get medicine cheaper than we can…from a US company!
Last time I checked, most food can be converted to alcohol. Sure, some crops are more efficient (corn), but just about anything we can digest that has sugar in it can also be digested by yeast to produce alcohol, and cars will run on that, as the oil companies have been saying ad nauseam. As far as I know, it is rather more difficult to convert oil into food. It seems we have the bigger stick: not much grows in the desert, especially when there is no food to feed the workers.
Of course, we would have to harden ourselves to the sight of hungry and starving children; that will be the first weapon they use, pictures of their children. We would have to stifle the bleeding hearts out there. Perhaps if we stopped feeding the world at a loss, they would start taking better care of their own people. Most countries are only three square meals away from revolution.
Tie it to the cost of electronics and other high-tech items as well. Put a large tariff on technical talent; most of the oil wells out there wouldn’t be running without American know-how.
It is time for America to get tough, time for us to harden our hearts a bit. As a Christian this hurts for me to say, but I believe we have fulfilled the 40x4 slaps required by the Bible and then some, as well as carrying the load for these countries for the extra mile. Jesus turned the money-lenders out of the temple; it is time for us to turn out the oil sellers. It is high time to use the economic might that is the USA for its citizens’ benefit.
Sunday, April 30, 2006
Tuesday, April 18, 2006
A Pet Peeve
When will disk manufacturers join the rest of the industry? When you look at their specification sheets, they list things like “320 Gigabyte Capacity (unformatted),” and then further down, in the footnotes, it says “gigabyte is defined as 1000000000 bytes.” So what does this really mean?
If you do the math, for the rest of the computer industry a gigabyte is 1024 cubed, or 1,073,741,824, bytes. That means the unformatted capacity of the drive is about 298 gigabytes or less. Assuming you only lose about 10% to formatting, that leaves you with roughly 268 gigabytes. Doesn’t sound nearly as impressive as 320, does it?
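For anyone who wants to check the arithmetic, here is a minimal sketch in Python using the figures quoted above (the roughly 10% formatting loss is this post’s rough assumption, not a vendor number):

MARKETING_GB = 1000000000        # the vendor's "gigabyte"
BINARY_GB = 1024 ** 3            # 1,073,741,824 bytes

raw_bytes = 320 * MARKETING_GB
unformatted_gb = raw_bytes / BINARY_GB    # about 298 "real" gigabytes
formatted_gb = unformatted_gb * 0.9       # about 268 after ~10% formatting loss

print(round(unformatted_gb), round(formatted_gb))    # prints: 298 268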
And how about stated transfer rates? On one manufacturer’s site they claim the disk can transfer data at 200 Mbytes per second on a fibre channel loop and about 320 Mbytes per second over a SCSI connection. Of course, those numbers are really the transfer rates of the interface itself. The manual gives the true details: the maximum sustained transfer rate of the drive is actually 76 Mbyte/sec (with M being 1,000,000), which to the rest of the industry is really about 72.5 Mbyte/sec. So to actually achieve 200 Mbytes/sec (real Mbytes, of course) you would need three of the drives. And since most systems read about a megabyte at a time, that 72.5 Mbyte/sec works out to roughly 73 IO/second.
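The same quick check works for the transfer-rate claims. This is just the arithmetic from the paragraph above; the one-megabyte-per-read figure is my rough assumption about a typical large read, not something from the vendor’s spec sheet:

import math

MIB = 1024 ** 2

sustained = 76 * 1000000 / MIB                # about 72.5 "real" Mbyte/sec per drive
drives_for_loop = math.ceil(200 / sustained)  # 3 drives to fill a 200 Mbyte/sec loop

# At roughly 1 Mbyte per read, Mbyte/sec is roughly IO/sec, i.e. about 73 IO/sec.
print(round(sustained, 1), drives_for_loop)   # prints: 72.5 3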
Is it any wonder I go to site after site with IO issues? No wonder folks are confused. When you consider that in the last 20 years disk capacity has gone from 30 megabytes on a hard drive to 300-500 gigabytes (a factor of about 17,067), while disk transfer rates have only gone up by a factor of 20 or so, it isn’t hard to see why people have difficulty specifying their disk systems in a meaningful way.
For example, I ran a report at a client site that showed their Oracle system performing an average of 480 IO/sec (taking the IO statistics from the v$filestat and v$tempstat views and the elapsed seconds since startup). Realizing this is an average, I double that value to get a peak load of 960 IO/sec (I know, that is probably still too low). From our previous calculations, if we use the 320 gig (right) disk, we will need 960/73, or 14, disks to support this system’s peak IO load. Currently they use 4 drives, and as load increases IO read times go from 2-3 milliseconds to over 20 milliseconds. The amount of data in the system is just under a terabyte, so in order to sustain the needed peak IO rate they have to buy 3.752 terabytes of disk, and that isn’t even allowing for RAID10 or RAID5.
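Here is that sizing as a back-of-the-envelope sketch, using the 73 IO/sec per drive and 268 usable gigabytes worked out earlier; doubling the average to estimate the peak is the same rough assumption made above:

import math

avg_io_per_sec = 480
peak_io_per_sec = avg_io_per_sec * 2        # rough peak: double the average
io_per_drive = 73                           # sustained IO/sec per drive (from above)
usable_gb_per_drive = 268                   # formatted capacity of a "320 GB" drive

drives_needed = math.ceil(peak_io_per_sec / io_per_drive)        # 14 drives
capacity_bought_tb = drives_needed * usable_gb_per_drive / 1000  # about 3.752 TB

print(drives_needed, capacity_bought_tb)    # prints: 14 3.752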
It is kind of like having a huge dump truck with a Volkswagen Beetle engine, isn’t it? Unfortunately, the disk manufacturers are coming up against the laws of physics; someone needs to tell them bigger is not better. We end up buying much more capacity than we need just to get the IO rates we require.
Yes, I know there is caching at both the disk level and usually the array level, but many times that is only a few gigabytes. Shoot, these days the reference tables in a large database alone will fill up the cache area, and then you are back to disk speeds for access times.
With 500 (419 usable) gigabyte drives, many of my client systems would fit on one drive if all we had to consider was volume. However, you and I both know there are two sides to the capacity issue: you need to look at both disk volume and disk IO capacity. Another wrench in the works is the number of disks needed to support concurrent access. Believe me, while you can put a terabyte database on 3 of these huge drives, you won’t support more than a couple of concurrent users before performance suffers.
So on your next disk purchase, consider the true formatted size and the actual IO speed, and compare those to your real IO requirements. Generally, if you meet your IO and concurrent-access requirements, you will more than meet the disk volume needs of your application.
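To sum that rule of thumb up in one place, here is a small, purely illustrative helper; the function name, defaults, and interface are my own invention, not anything from Oracle or a disk vendor, and the per-drive figures are the rough ones derived earlier in this post:

import math

def drives_required(peak_io_per_sec, data_gb,
                    io_per_drive=73, usable_gb_per_drive=268):
    """Return the larger of the IO-driven and volume-driven drive counts."""
    for_io = math.ceil(peak_io_per_sec / io_per_drive)
    for_volume = math.ceil(data_gb / usable_gb_per_drive)
    return max(for_io, for_volume)

# The client system above: just under a terabyte of data, 960 IO/sec peak.
print(drives_required(960, 1000))    # prints: 14 -- IO, not volume, drives the purchase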