the accidental screamer

confessions of a pixel pusher linux technology

Needed to build a new NAS server with safe raid storage. It’s more or less a near-line storage solution, so I tried to go for the best price per Terabyte. Just before it disappears into what will hopefully be years of uninterrupted service, I snatched its keys and took it for a spin on the weekend. So to say. I am still tweaking things, but right now I get just a touch more than 600 MBytes a second sustained writes on xfs.
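For what it’s worth, a number like that can be taken with plain dd, writing a file much bigger than RAM so the page cache cannot flatter the result (the mount point here is made up):

dd if=/dev/zero of=/mnt/raid/bigfile bs=1M count=20000 conv=fsync

The conv=fsync makes dd flush to disk before it reports, so the rate it prints at the end is the sustained one.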

Which is actually quite awesome, considering that there is not a single SCSI disk to be found in the case. We paid a very reasonable price for the net 6 Terabytes we got. In theory this machine could record 3 streams of 1920x1080x23.98 10bit dpx frames. For 3 hours.
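The back-of-the-envelope math behind that claim, assuming 10bit dpx packs its RGB into 32 bits per pixel:

echo "1920 * 1080 * 4" | bc                        # 8294400 bytes, roughly 8MB per frame
echo "1920 * 1080 * 4 * 23.98 / 1000000" | bc -l   # ~199 MB/s per stream

Three streams come to roughly 597 MB/s, just under what the box sustains, and three hours of that is about 6.4TB, more or less what the array holds.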

driving around with the handbrake firmly engaged

linux technology

What do I know about computers? I mean, really. So I built this rather big machine to read along a couple of million weblogs. Needs storage. Sure. I get a 3ware raid controller. Works like a charm, btw. There are more blogs, there is more spam, nothing surprising or new. The machine started to have a load of a solid 95-100%. Well, linux should be able to deal with that, and it can. Today I tuned another server and looked at parameters. One of them is the scheduler that is used to do the actual IO. The default for my kernel was anticipatory. I changed that, so that
cat /sys/block/sda/queue/scheduler

reads now

noop anticipatory [deadline] cfq
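The switch itself is just an echo into the same sysfs file, done as root (sda being my disk, adjust as needed):

echo deadline > /sys/block/sda/queue/scheduler

The change takes effect immediately but does not survive a reboot; for that the kernel can be booted with elevator=deadline.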

And, what a surprise, the IO load starts to decline and the CPU is idle 10-15% of the time again! Of course this only makes sense on a server with lots of IO and database activity. That poor machine had to do stupid things for years.

As I said: what do I know about computers? An academic yet interesting question would be how many CPU cycles are actually wasted on things like this, and how many are needed.

limits: disk size and imagination

history linux technology

Some days things have the feel of a ‘techno groundhog day’. Once again I set up a computer. Once again it has a considerably sized disk system. It used to be that 30Megabyte (no typo) Winchester disk. Today it’s that 8TB raid. And the problem remains the same: the tools choke on the size. I forget what it was twenty years ago. It was not as easy as it should have been. And that has not changed. To cut to the chase of the technical knowledge that might be helpful now and will certainly be laughing stock in the future (30MB too big: hahaha):
Getting a 3ware 9550… with 16x500GB drives is a good idea. Fits in one nice case, and in a Raid 50 config you end up with 6.3TB usable capacity. Historically it needs to run Fedora Core 4. Which is happy to find the array after the installer has been launched with linux dd and a proper floppy drive (!!) has been inserted with the 9550 drivers. The next mistake one can make (and I sure did) is to let the installer automatically partition the drive it found. Knowing that big disk systems can be trouble to start the OS from, I had already separated an 80GB boot partition in the 3ware bios. The installer went along, formatted the whole thing and did its install. Which took some 6 hours, I would guess.
Only problem was that the poor thing could not boot from what it had made. The automatic partition manager was utterly confused by the size of the drives it found, but didn’t let that stop it from trying and failing hours later anyway.
Manually partitioning the 80GB boot drive got me over that part. Having an OS to boot: priceless.
The data partition only started working after using parted and a crucial ‘mklabel gpt’. Only then would it accept the size of the partition correctly. Otherwise it was silently reducing it, and then would fail to mount after a reboot.
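From memory, the parted session boils down to something like this, with /dev/sdb standing in for whatever the array shows up as (sizes made up too):

parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary xfs 0 6300GB
parted /dev/sdb print

The gpt label is the point: the old msdos label silently misbehaves past 2TB.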

So far the gory technical details.

The bigger problem is:

Disks have become bigger, ever since computers have been around. Everybody knows this, is exposed to this, and benefits from it. The big question is: how can you write software that deals with the nuts and bolts of disk systems and not be freaking prepared for that? Of course that 30MB harddrive I dealt with 20 years ago would have been a bit overwhelmed to run a partition scheme that would be ready to hold 6 Terabytes. First question is: would it really? Sometimes people are scared of wasting 3% and waste the future of something instead. This side of the equation can be argued about.

There can not be ANY excuse for the way systems fail on bigger hard drives: numbers roll over, systems report -1600% free space. Shit like this is unacceptable. Tremendously stupid. If you code like that, then you should not code. Period.
Disks will be bigger tomorrow. Deal with it. At least create an error message along the lines of “Can not create partition bigger than 2TB” etc. Fail gracefully. You might not have the money to buy enough disks to test it, but you CAN put in checks for these limits. Nobody will slip in an extra 10% ‘integer boost’ to help your code out. The limits are what they are today. Shame on the authors of the tools for the lack of imagination. If physical harddrives catch up with their code after only a few years, like they do, I am actually surprised that y2k did so little damage …
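The kind of check I mean is genuinely cheap. A sketch, with made-up device names, for the msdos-label case (2^32 sectors of 512 bytes is the ceiling):

MAX_MSDOS_BYTES=2199023255552
DISK_BYTES=$(blockdev --getsize64 /dev/sdb)
if [ "$DISK_BYTES" -gt "$MAX_MSDOS_BYTES" ]; then
    echo "disk is $DISK_BYTES bytes, msdos labels stop at 2TB, use gpt" >&2
    exit 1
fi

A few lines, and no extra disks needed to test them.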

bsd vs linux

linux

Maybe someday I will find the time to read this interesting yet long article about the differences between BSD and linux.

slim and cheap server

confessions of a pixel pusher linux technology

Of course 1.50 US$ a GB is ridiculous.

But the whole concept of a 1U quad drive cheap-o system seems intriguing: raid cards are still expensive. They certainly deliver the best solution in many cases. But 3TB (4 x 750) of cheap ‘scratch space’ for data that can be recreated could certainly exist in a 1U box for a pretty sweet price point. Sacrifice 25% of the storage and you have safe space.
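The arithmetic, assuming a software raid5 across the four drives:

echo $(( 4 * 750 ))   # 3000 GB raw
echo $(( 3 * 750 ))   # 2250 GB usable, one drive's worth goes to parity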

And as long as Moore’s Law keeps deflating disk and system prices, it is still the best strategy to buy as little storage as late as possible. To paraphrase Einstein: just not too late or too little.

sudo

free of any reason linux

the sudo command

wikipedia’s entry for sudo

open

communication history linux M$ media technology

Microsoft would like more people to develop games for their consoles. In their press release it sounds like a Windows XP machine is all you will need to develop for them.

The range of impact goes from ‘flash in the pan’ to ‘Sony is finished’. It all depends on the details of the implementation and capabilities. Nobody has ever opened game consoles to a wider development community. It might or might not take off. Trying it is a bold and innovative move.

Microsoft is a funny company these days: some of their divisions do all the right things, while others are as stupid as the Ottoman empire in 1907.

Trolltech makes a phone now. Trolltech got big with a toolkit for graphical user interfaces called “Qt”. I used it years ago, and it is not bad. Now they make a phone that runs embedded linux, with their user interface on top of it. In other words, it is an open source phone.

From a pure technology standpoint these developments had to happen. The very interesting question is what will come out of them. Content is a very tricky thing to predict. Hollywood survived despite constant failures in this area. As long as the movie industry has existed, it has tried to mechanize and control creativity and content creation so that it can churn out products like a nuts and bolts manufacturer. And it never worked.

On the other side of the argument one could see Microsoft and Trolltech shipping typewriters to a million monkeys.

And, of course, reality will fall somewhere in between. And once the revolution has happened, it will be so clear why it did. Same for the other outcome.

Games could really use some injection of innovation. Roaming the show floor of what was the last E3 of its kind, I was pretty surprised how alike most games looked. I don’t play. But I care about the technology and business side of this industry. There are racing games and first person shooters. Lots of those.
With production costs high, new content development is tricky. That’s why I liked Rockstar’s Table Tennis.

Tetris was written by a Russian programmer when there was still a country called “Soviet Union”.

The situation with phones is similar. They don’t suck, but I never saw a phone that just made sense. Of course all Apple fan boys hope that Steve Jobs will come down Moses-like with a phone on his arm. They hope so since phones are ok, but definitely not as useful as we want them to be. And as they could be. Whether open software can fix this remains to be seen.

return path exim4

linux technology

I moved Method Software to a new server. Licenses could always be generated automatically and sent by email.
It was seven years ago that I set up the original host. Things have changed. So nothing worked. Thanks to all that spam, it is a bit trickier to set up a mail server on the internet in 2006 than it was in 1999.

First thing to get right is reverse DNS. Otherwise you get something like:


SMTP error from remote mail server after initial connection:
host actual.name.removed.com [1.2.3.4]: 5actual.name.removed.com You Must have reverse DNS setup in order to relay mail.

In other words: The ip address you are sending from must resolve to something. With

whois ip-address

you should get a domain. This is something that your hosting provider can set up for you. They have authority over the IP range they gave you your address from. It took only an hour with my hosting provider. Another sign that they are decent.
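The even more direct check is a reverse lookup itself, here with the placeholder address from the error above:

dig -x 1.2.3.4 +short

If that prints nothing, the mail will bounce off servers like the one quoted.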

Even after this fix the return-path was set to something stupid like ‘www-data@hostname-I-gave-the-machine’.
It took a bit of googling and a couple of pointless detours to /etc/hosts and dpkg-reconfigure exim4-config before I found this blog entry that pointed to /etc/email-addresses, which indeed did the trick.
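For the record, on a Debian exim4 box /etc/email-addresses maps local users to the return path you actually want, one per line (the address below is made up):

www-data: licenses@example.com

After that, mail submitted by www-data leaves with a sane envelope sender.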

suse sucks

linux

Moving a development project from OS X to a Suse linux machine. Trying to install XML::LibXML.
Of course the right library is not there. Which in itself is not a big deal. But Novell is still stuck in the last century: they have a page for the lib in question. But they have no download link for it! They really point to the CD. Which simply means one thing: whenever I have the chance to recommend a linux distro, it will not be Suse. Sorry, but there is neither the room nor the time for stupid crap like this page. What a tease! They say they have it, just that there is no way to get to it. Crap.
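For reference, the install that choked; it wants libxml2 plus its headers on the system first (on Suse that should be the libxml2 and libxml2-devel packages, the ones hiding on that CD):

perl -MCPAN -e 'install XML::LibXML'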

switchn’ distros

linux

Somehow I ended up being a ‘redhat boy’. It just happened: in my former job, which lasted almost as long as linux has been on the rise, installing and configuring linux was nothing that I needed to do. I ‘just’ wrote software for it. Being freelance, I now get to pick what I want to do and learn. Which is very nice. For the next two machines that will go to clients I have decided to switch to debian. It’s all different, but ‘the head is round so that the thoughts can change direction’. At least that’s what Picabia said.

Debian appeared on my horizon once I had to move a client’s site to a hosting solution of their choosing, which happened to be Debian. They had to drag me there kicking and screaming. Everything was different. /etc/httpd became /etc/apache and so forth. It’s too early to tell if I really like debian. But things that are different seem to be better. Of course I missed


chkconfig

But a quick


apt-get install sysv-rc-conf

took care of that.
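From memory, the chkconfig-ish usage looks like this:

sysv-rc-conf --list
sysv-rc-conf apache off

The first is roughly chkconfig --list, the second toggles a service’s runlevel links.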

I actually have been bouncing around debian quite a bit, and did horrible things to it (like compiling kernels that ought not to run, messing with raid, initrd and so forth), and so far it has been remarkably robust.
With redhat I would not have gotten that far so quickly, and would have cursed a lot more.

to be continued …