looking up memory chips

linux technology

If a machine sports EDAC, then


find /sys/devices/system/edac/ \( -name mem_type -o -name size_mb -o -name mc_name \) -exec cat {} \;

will quickly display what kind of memory modules are visible to the kernel, and what state they are in.
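
What the output looks like depends entirely on the hardware; on one of my machines it is roughly along these lines (the values here are purely illustrative):

Unbuffered-DDR2
2048
e752x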

getting shells in the same path

Command Line linux technology

Often I work with a couple of shells simultaneously in the same directory. One may hold the editor with a program in it, and the other one runs that program.
When I add the following lines to .bashrc


alias sd='pwd > /tmp/ddd'
alias d='cd "$(cat /tmp/ddd)"; pwd'

I just need to type ‘sd’ (for Set Directory) in a shell that is already in the right directory. When I then switch to the other shell, a simple ‘d’ gets me to where the first one already is. Extra benefit: when I want to continue where I was last time, I just type ‘d’ again. Just a little thing. But the world is made out of little things. Lots and lots of them.
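
In practice it looks like this (the path is just an example):

shell-1$ cd /home/me/projects/foo && sd
shell-2$ d
/home/me/projects/foo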

osx wrtg54 connection reset ssh

linux OSX

When connecting via the wrtg54, ssh connections timed out after a while.
Which was mildly annoying. The problem with mildly annoying things is that they are only mildly annoying.
So one does not go and fix them soon enough. In this case it was terribly easy to cure errors like:


Read from remote host 1.2.3.4: Connection reset by peer
Connection to 1.2.3.4 closed.

All it needed was a file called .ssh/config in the home directory with something like these lines:

ServerAliveInterval 60
ServerAliveCountMax 5000

ServerAliveInterval 60 makes the client send a keepalive probe every 60 seconds, and ServerAliveCountMax is how many of those may go unanswered before the client gives up. Nice that it didn’t require any changes on the other end.

gmail backup

google linux

Over the last few years I have accumulated quite a bit of mail in Gmail. It works, and I find it very inspiring to see its features grow while I keep all my data. But I also grew worried: what would happen if my mail should go away? I have paid Google exactly zero for keeping all my email. There would be nothing I could do.

Turns out that it is possible to make a copy. Google’s own Matt Cutts described it well.

I found that these getmail parameters worked well for me:


[retriever]
type = SimpleIMAPSSLRetriever
server = imap.gmail.com
username = EMAIL@gmail.com
password = PASSWORD
mailboxes = ("[Gmail]/All Mail",)

[destination]
user = getmail
type = Maildir
path = /root/.getmail/

[options]
read_all = false
verbose = 2
received = true
delivered_to = true
message_log = /root/.getmail/gmail.log
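
With that saved as a getmail rc file (the name getmailrc below is just an example), the download is kicked off along these lines:

getmail --getmaildir /root/.getmail --rcfile getmailrc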

It took a while. Days, actually. It seems that you can only get mail out at a limited data rate. And then there is a bandwidth limit; getmail failed after a while with:


getmailOperationError error (IMAP error ([ALERT] Account exceeded bandwidth limits. (Failure)))

Just waiting a couple of hours took care of this. Having not backed up the mail for 5 years, it was quite alright to wait 5 hours.

Another error occurred with 5 mails. getmail would, for instance, end with:


getmailOperationError error (IMAP error (command FETCH ('3049', '(RFC822)') returned NO))

And it would do so repeatedly with the same number. I assumed that something had gone awry with those mails. After pretending, via the oldmail-imap file, that those mails had already been retrieved, getmail soldiered on.

Tragically, at some point my connection went away. I had downloaded around 120,000 mails during that session.
getmail updates the oldmail-imap file only when done (or when cancelled via ctrl-C). So the next time it started it went and downloaded the same mails again.

Even with that glitch things worked out. And I feel pretty good about having a copy of my mail now.

Having a secure copy of your data is never a bad idea.

iptraf

internet linux

just found

iptraf

and it is a really nice and handy tool to see what is going on on the network ports.

Very helpful.
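
For example, to jump straight into the IP traffic monitor on one interface (the interface name is just an example; newer distributions ship the tool as iptraf-ng):

iptraf -i eth0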

ssh prime agent

linux

(Sorry if this should not make any sense to you. This is a note for me to go back to. Even though I bring new machines online regularly, I forget the exact steps for this.)

on X:

ssh-keygen -t dsa

Add the content of .ssh/id_dsa.pub to the end of .ssh/authorized_keys2 on Y. That is the only thing we need to do on Y.

After boot of X, run:

ssh_info_file=~/.ssh-agent-info-`hostname`
ssh-agent >$ssh_info_file
chmod 600 $ssh_info_file
source $ssh_info_file
ssh-add ~/.ssh/id_dsa

Before logging in from X to Y:

source /root/.ssh-agent-info-`hostname`
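
To avoid typing that source line every time, something like this in .bashrc does it as well (a small convenience of my own, not part of the original steps):

agent_info=~/.ssh-agent-info-`hostname`
if [ -r "$agent_info" ]; then
    source "$agent_info" > /dev/null
fi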

machine memory afterburner

linux

sar from the sysstat package is nice. I think it keeps about a week’s worth of history around. I’d like to have more than that. There might even be a command line switch for that. But often it is just faster to write what you need when you can type with reasonable speed. This script will copy all sa files into a directory called /var/log/allsa in the form saYEARMONTHDAY. So today’s sa file I can access forever via


sar -f /var/log/allsa/sa20090822

The script only cares about files that are older than a day. So it will take between 24 and 48 hours for a file to appear in its final destination.


#!/usr/bin/perl

#
# This will keep all daily sa files readable via sar.
# It seems a shame to throw them away.
# A year's worth of sa files is about 113 MB for my machines.
#
# This script is meant to run daily. It probably needs root permissions.
#
# Use as much as you like. No warranties or promises. Your problem if it eats your machine.
# Andreas Wacker, 090822

use strict;

my $sourcedir = "/var/log/sa";
my $targetdir = "/var/log/allsa";

if (! -d "$sourcedir") {
    die "can not find directory $sourcedir for sa files";
}

if (! -d "$targetdir") {
    system("mkdir -p $targetdir");
    if (! -d "$targetdir") {
        die "was unable to create $targetdir. $0 would need it to proceed";
    }
}

opendir(INDIR, $sourcedir) or die "unable to read directory $sourcedir";
my @allfiles = readdir(INDIR);
closedir(INDIR);

foreach my $file (@allfiles) {
    if ($file =~ /^sa[0-9]+$/) {
        my $completefilepath = "$sourcedir/$file";
        my $mtime  = (stat $completefilepath)[9];
        my $dayage = (time() - $mtime) / (3600 * 24);
        # only touch files that are at least a day old, so sar is done with them
        if ($dayage > 1) {
            my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime($mtime);
            my $datestring = sprintf("%d%02d%02d", $year + 1900, $mon + 1, $mday);
            my $targetfilepath = "$targetdir/sa$datestring";
            if (! -f "$targetfilepath") {
                system("cp -p $completefilepath $targetfilepath");
                if (! -f "$targetfilepath") {
                    die "tried to copy from $completefilepath to $targetfilepath and it did not work. This is a very bad sign!";
                }
                if (((stat $completefilepath)[7]) != ((stat $targetfilepath)[7])) {
                    die "file sizes for $completefilepath and what should have been a copy $targetfilepath did not match. Not good!";
                }
            }
        }
    }
}
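
Since the script is meant to run daily, a cron entry along these lines does the trick (the install path and the time of day are just examples):

# /etc/cron.d/allsa
30 4 * * * root /usr/local/sbin/allsa.pl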

learning from history

linux

Shells keep a history. The default seems to be to keep 1000 lines. I have not found a reason not to make this much bigger. And while at it, time stamp it as well:


HISTFILESIZE=2000000
HISTSIZE=100000
HISTTIMEFORMAT='%F %T '
export HISTTIMEFORMAT HISTSIZE HISTFILESIZE

in your .bashrc will keep 100,000 lines of history in memory (and up to 2,000,000 lines in the history file, “omg”) and will also time stamp everything nicely. That and the grep command make for some nice shortcuts on memory lane.
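
For example:

history | grep rsync

shows every rsync invocation still in the history, each with the date and time it was run.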

configure: error: Kerberos libraries not found.

linux

When I wanted to build PHP with IMAP support, it complained:


configure: error: Kerberos libraries not found.

Turns out that this is some fallout from lib64 vs. lib differences. I found this blog explaining exactly what was going on. Very helpful. Especially the


sh -x ./configure ...options go here...

trick can be very helpful in the future.
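
If I remember right, PHP’s configure also takes a --with-libdir switch for exactly this 64-bit situation, so something along these lines may save the detour (flag from memory, double check against your PHP version):

./configure --with-libdir=lib64 ...options go here...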

vsftpd 500 OOPS error

internet linux

If your ftp client reports

500 OOPS: bad bool value in config file for: config_name_here

then that might mean that you have a “YES ” or “NO ” with a trailing space in your config file. Easy enough to fix. I got lucky and found it quickly. Just in case somebody needs to google for this.
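
A quick way to spot such a line (the config path is the usual one, yours may differ):

grep -nE '(YES|NO)[[:space:]]+$' /etc/vsftpd/vsftpd.conf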