Some days things have the feel of a ‘techno groundhog day’. Once again I set up a computer. Once again it has a considerably sized disk system. It used to be that 30 Megabyte (no typo) Winchester disk. Today it’s that 8TB RAID. And the problem remains the same: the tools choke on the size. I forget what exactly went wrong twenty years ago. It was not as easy as it should have been. And that did not change. To cut to the chase with the technical knowledge that might be helpful now and will certainly be a laughing stock in the future (30MB too big: hahaha):
Getting a 3ware 9550… with 16x500GB drives is a good idea. It fits in one nice case, and in a RAID 50 config you end up with 6.3TB usable capacity. For historical reasons it needs to run Fedora Core 4. Which is happy to find the array after the installer has been launched with ‘linux dd’ and a proper floppy drive (!!) holding the 9550 drivers has been inserted. The next mistake one can make (and I sure did) is to let the installer automatically partition the drive it found. Knowing that big disk systems can be trouble to boot an OS from, I had already separated out an 80GB boot partition in the 3ware BIOS. The installer went along, formatted the whole thing and did its install. Which took some 6 hours, I would guess.
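For the record, the driver disk step is just an option typed at the installer’s boot prompt. A minimal sketch of how things end up, assuming the split in the 3ware BIOS leaves you with two units that show up as two block devices (the device names here are placeholders):

    boot: linux dd
    # the installer then asks for a driver disk: insert the floppy with the 9550 drivers

    # with the array split in the 3ware BIOS, two units appear, roughly:
    #   /dev/sda   ~80GB    boot + OS
    #   /dev/sdb   ~6.3TB   data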
The only problem was that the poor thing could not boot from what it had made. The automatic partition manager was utterly confused by the size of the drives it found, but didn’t let that stop it from trying anyway and failing hours later.
Manually partitioning the 80GB boot drive got me over that part. Having an OS to boot: priceless.
The data partition only started working after using parted and a crucial ‘mklabel gpt’. Only then would it accept the size of the partition correctly. Otherwise it silently reduced the size, and the partition would then fail to mount after a reboot.
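For anyone who hits the same wall, this is roughly the parted session that did the trick. Treat it as a sketch: the device name /dev/sdb and the ext3 choice are placeholders, the ‘mklabel gpt’ is the actual point.

    parted /dev/sdb
    (parted) mklabel gpt              # GPT label instead of the default msdos one -- the crucial bit
    (parted) mkpart primary 0% 100%   # one big data partition (older parted may want start/end in MB instead of %)
    (parted) print                    # sanity check: the full ~6.3TB should show up
    (parted) quit
    mkfs.ext3 /dev/sdb1               # filesystem of your choice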
So much for the gory technical details.
The bigger problem is:
Disks have become bigger. Ever since computers have been around. Everybody knows this, is exposed to this, and benefits from it. The big question is: how can you write software that deals with the nuts and bolts of disk systems and not be freaking prepared for that? Of course the 30MB hard drive I dealt with 20 years ago would have been a bit overwhelmed by a partition scheme ready to hold 6 Terabytes. First question: would it really? Sometimes people are scared of wasting 3% today and end up wasting the future of something instead. That side of the equation can be argued about.
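To put a number on the ‘waste’ side of that argument: a partition format with enough headroom for huge disks costs almost nothing. GPT, for example, reserves 34 sectors at the front and 33 at the back of a disk (with the usual 128 partition entries). Yes, GPT didn’t exist back then, but as a rough back-of-the-envelope with 512-byte sectors:

    echo $(( (34 + 33) * 512 ))                        # 34304 bytes of on-disk overhead for a GPT label
    echo "scale=3; 34304 * 100 / (30*1024*1024)" | bc  # ~.109 -- about 0.1% of a 30MB disk, nowhere near 3%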
There cannot be ANY excuse for the way systems fail on bigger hard drives: numbers roll over, systems report -1600% free space. Shit like this is unacceptable. Tremendously stupid. If you code like that, then you should not code. Period.
Disks will be bigger tomorrow. Deal with it. At least create an error message along the lines of “Cannot create a partition bigger than 2TB” etc. Fail gracefully. You might not have the money to buy enough disks to test it, but you CAN put in checks for these limits. Nobody will slip in an extra 10% ‘integer boost’ to help your code out. The limits are what they are today. Shame on the authors of the tools for their lack of imagination. If it only takes physical hard drives a few years to catch up with the limits in their code, as it does, then I am actually surprised that y2k did so little damage …
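The 2TB number isn’t arbitrary either: it is what you get when sector numbers are stored in 32 bits and a sector is 512 bytes. And the guard costs exactly one comparison. A sketch in shell, with /dev/sdb as a placeholder, just to show how cheap failing gracefully would be:

    # where the classic 2TB wall comes from: 2^32 sectors of 512 bytes each
    echo $(( 2**32 * 512 ))                  # 2199023255552 bytes, i.e. 2TiB

    # the kind of check the tools could have shipped with
    limit=$(( 2**32 * 512 ))
    size=$(blockdev --getsize64 /dev/sdb)    # total size of the device in bytes
    if [ "$size" -ge "$limit" ]; then
        echo "Cannot create a partition bigger than 2TB with an msdos label -- use 'mklabel gpt'" >&2
        exit 1
    fi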