Older blog entries for yeupou (starting at number 145)

Using RAM for transient data

When a system has lots of I/O, trouble may arise. If a mechanical hard drive is über-solicited, you may quite easily get many kinds of failures and high CPU load, just because of I/O contention. In such cases, using RAM as a disk, aka a RAM disk, may be a good option, as it allows way more I/O than a mechanical hard drive. Solid State Drives (SSD) partly address this issue, but they still seem to have far higher access times and latency than RAM. A RAM disk, on the other hand, is not persistent (unlike an SSD), quite an annoying drawback: even if you write scripts to save the data, you will lose some in case of a power failure.

A RAM disk is actually especially appropriate for temporary data, like /var/run, /var/lock or /tmp. Linux >= 2.4 supports tmpfs, a kind of RAM disk that (as far as I understand) does not reserve blocks of memory: it does not matter if you have a big tmpfs, since memory unused by the tmpfs stays available to the whole system.
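As an illustration of that behaviour, a tmpfs can be mounted by hand with an explicit ceiling; the size is a maximum, not a reservation, so memory is only consumed as files get written (the mount point and size below are arbitrary, just for the example):

```shell
# mount a tmpfs capped at 512 MB; it uses (almost) no memory while empty
mount -t tmpfs -o size=512m tmpfs /mnt/ram
df -h /mnt/ram
```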

Most of my computers have more than 1 GB of RAM. And, most of the time, they never use the swap space. For instance (the relevant columns are si and so, swap in and swap out):

bender:$ vmstat
 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 4146984 674704 1309432    0    0     6     9    3   34  2  1 97  0

nibbler:$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 862044  23944  84088    0    0    10     0   42   22  0  0 99  0

moe:$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 280552 166884 1297376    0    0     7    58   73   12  8  2 90  1

So they are good candidates for tmpfs wherever possible. Doing so with Debian GNU/Linux is straightforward. Just edit /etc/default/rcS as follows (for /var/run and /var/lock):

RAMRUN=yes
RAMLOCK=yes

and add, in /etc/fstab (for /tmp):

tmpfs             /tmp     tmpfs     defaults    0    0

Next time you boot, df should provide you with something like:

  $ df
Filesystem           1K-blocks      Used Available Use% Mounted on
tmpfs                  1033292         0   1033292   0% /lib/init/rw
varrun                 1033292       648   1032644   1% /var/run
varlock                1033292         0   1033292   0% /var/lock
tmpfs                  1033292         4   1033288   1% /dev/shm
tmpfs                  1033292         0   1033292   0% /tmp

Syndicated 2012-01-30 13:57:36 from # cd /scratch

Booting on a SATA drive with ASUS K8N4-E Deluxe mainboard

I noticed that the ASUS K8N4-E Deluxe mainboard, provided with a Phoenix Technologies ACPI BIOS revision 1008, simply won’t boot from a SATA drive while it does fine from a PATA (IDE) one. I am not sure which specific part is at fault here.

 00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
 00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
 00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
 05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)

It boots with the following Linux kernel options: irqpoll nolapic apm=power_off. I added them to /etc/default/grub before regenerating grub.cfg:

GRUB_CMDLINE_LINUX_DEFAULT="quiet irqpoll nolapic apm=power_off"
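As a reminder, the new options only take effect once grub.cfg is regenerated:

```shell
# rebuild /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d/
update-grub
```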

Note, however, that shutdown does not physically power off the system. I’ll look into that later.


Syndicated 2012-01-27 19:09:20 from # cd /scratch

Avoiding Spams with SPF and greylisting within Exim

A year ago, I posted an article describing a way to slay spam with both Bogofilter and SpamAssassin embedded in Exim. This method has proven effective for my mailboxes: since then, over a timespan of one year, Bogofilter caught ~85% of actual spams and SpamAssassin (called only if a mail was not already flagged unfavorably by Bogofilter) caught ~15%. Do the math: I had almost none to flag by hand.

Why would I change such setup? For fun, obviously :-)

Actually, I made no change, I just implemented SPF (Sender Policy Framework) and greylisting.

I noticed that plenty of spams were sent to my server @thisdomain claiming to be sent by whoever@thisdomain. These dirty spams were easily caught by the Bogofilter/SpamAssassin duo but, still, it annoyed me that @thisdomain was misused. SPF allows you to list, in DNS records, which servers/computers are allowed to send mail from addresses @thisdomain. SPF checks are predefined in Exim out of the box, so I’ll skip its configuration. The relevant DNS record (with bind9), allowing only two boxes (the primary and secondary mail servers) designated by their IPs to send mail @thisdomain, looks like:

thisdomain. IN  TXT  "v=spf1 ip4:78.249.xxx.xxx ip4:86.65.xxx.xxx -all"
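Once the zone is reloaded, the published policy can be checked from any box (assuming dig from the dnsutils package; thisdomain stands for the real domain, as above):

```shell
# query the TXT record carrying the SPF policy
dig +short TXT thisdomain.
```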

Result: since I implemented SPF on my domains, there was no change in the number of spams caught. However, during this period, my primary server’s list of temporary bans dropped from 200/100 IPs to 40/20 IPs. I cannot pinpoint with certainty the cause of this evolution, because the temporary bans list depends on plenty of things. But pretending to send mail @thesedomainsgrilledbySPF surely lost some interest for spambots. Implementing SPF is actually not about helping ourselves directly but indirectly: reducing the effectiveness of spambots helps everybody.

I have used greylisting on my secondary mail server for a while and I noticed over the years that this one almost never had to ban IPs. Not that it never received spam, but it almost never received mail from the very obvious spam sources identified at SMTP time. It seems that most very obvious spam sources never insist enough to get through greylisting. I guess most spambots are coded to skip any mail server that does not immediately accept a proper SMTP transaction: they have no time to waste, considering how small the percentage of sent spams that actually reach someone real is.

This greylisting setup assumes that memcached and libcache-memcached-perl are properly installed.

So I gave greylisting a try on my primary mail server, but with a very short waiting time, because waiting 5 minutes, for example, to receive mail from a not-yet-known source is not acceptable. So I edited the relevant conf.d/main/ file to set GREY_MINUTES = 0.5 and GREY_TTL_DAYS = 25.

Result: no change in the number of caught spams. However, as on the secondary mail server, the number of banned IPs is close to none. It looks like most obvious spam sources won’t wait even 30 seconds – actually, that is quite an astute choice, as they would be banned anyway if they did.


Syndicated 2011-12-09 01:21:43 from # cd /scratch

Automounting NFS shares using if-up.d/if-down.d

I have enjoyed NFS for many years. But with laptops, which by nature are not always connected to the same local network, it’s quite a pain in the ass.

Editing /etc/fstab each time you connect is not really an option. AutoFS seemed a great idea. It took me a while to get the damn thing running and it failed to work after a reboot. After spending plenty of time googling around, fooling around, I eventually reached the conclusion that I was not able to set it up in a reliable fashion. So I dropped the idea.

Then I had some hopes regarding the DHCP client. It provides hooks, with plenty of variables useful to determine which network you are connected to. I gave it a try and it worked well: a script in dhclient-exit-hooks.d to mount the NFS shares after the interface is brought up on the LAN, and its counterpart in dhclient-enter-hooks.d to umount them just before the interface goes down.

Then I realized that one of the two laptops supposed to make use of this script is running Ubuntu – not mine. And Ubuntu uses by default a very, very nasty piece of software called NetworkManager, the kind of well-thought-out user interface that stores configuration in anything but the standard places that worked finely before it even existed. Yeah, it literally makes a litter of /etc/network/interfaces. So, no, obviously, properly handling /etc/dhcp/dhclient-*-hooks.d/ scripts is not an option for NetworkManager; it’s so much better to reinvent the wheel with a poorly designed (And What’s The Deal With These Upper Cases?) /etc/NetworkManager/dispatcher.d.

Plenty of people have already complained about the obvious limits of NetworkManager. Sure, there is room for improvement and it’s better to contribute than just to rant. But considering the kind of replies provided by NetworkManager people to bug reports (cause, really, not handling dhclient-*-hooks.d is a regression), I think I’ll pass. Funny links though: “After discussing with a few folks we found that pre-up will not come back … please provide detailed infos for your use-case as we have to find other means to achieve this.” (hum, they… found that useful working features will not come back but haven’t found any better alternative yet?), “if the resolvconf abilities are not enough you can also stuff in a NM dispatcher.d script (see: /etc/NetworkManager/dispatcher.d/)” (please, have fun writing new scripts to replace the ones that worked just fine). In fact, when developers deal with an issue like this – “Changed in network-manager (Fedora): status: Confirmed → Won’t Fix” – the best is just to find a workaround that absolutely does not rely on their stuff, which is sure to be broken some other day; no doubt that if a new trend comes, they’ll ask you to rewrite your scripts one more time, just to do the same frickin’ thing you were able to do years ago with simple dhclient-*-hooks.d.

So I finally came up with /etc/network/if-up.d and /etc/network/if-down.d scripts. It’s quite standard and, oh!, NetworkManager has a “dispatcher” that run-parts this dir. The obvious drawback is that it cannot be used to properly unmount the NFS shares, because it’s unclear whether NetworkManager runs if-down.d before or after having brought the network interface down and, also, because it’s way more painful to determine whether the loss of the current interface means losing the relevant network where the shares are (if you lose the Wifi, you may clearly still be properly connected to the LAN). And I’m done trying to guess how this piece of software behaves now and will behave in 6 months.

Instead of hardcoding the list of NFS shares in one more script, and considering that initscripts already provides a well-thought-out /etc/network/if-up.d/mountnfs, I figured I would simply rely on /etc/fstab. My /etc/network/if-up.d/01prepmountnfs (which must run before initscripts’ mountnfs) simply goes through /etc/fstab, looks for NFS shares that are in noauto mode (so, not configured to be mounted automatically when the box starts) and finds out whether the server exists on the current LAN. If so, it removes the noauto option and then initscripts’ mountnfs does its magic. On Ubuntu, there’s no /etc/network/if-up.d/mountnfs, but the following is enough to replace it:


echo "#!/bin/sh
mount -a" > /etc/network/if-up.d/mountnfs
chmod a+x /etc/network/if-up.d/mountnfs
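For the curious, here is a rough, untested sketch of what such a 01prepmountnfs can look like; the function wrapper and the fstab path argument are only there for illustration, and reachability is naively tested with a single ping:

```shell
#!/bin/sh
# Sketch of /etc/network/if-up.d/01prepmountnfs: scan an fstab, and for
# each noauto NFS entry whose server answers a ping on the current LAN,
# strip the noauto option so initscripts' mountnfs will then mount it.
prep_nfs_fstab() {
    fstab=$1
    # keep only server:/path lines; fields: share mountpoint fstype options...
    grep -E '^[^#[:space:]]+:' "$fstab" |
    while read -r share mountpoint fstype options rest; do
        case $fstype in nfs|nfs4) ;; *) continue ;; esac
        case $options in *noauto*) ;; *) continue ;; esac
        server=${share%%:*}
        # only enable the entry if the NFS server is reachable right now
        if ping -c 1 -W 1 "$server" >/dev/null 2>&1; then
            sed -i "\\|^$share[[:space:]]|s/,noauto//" "$fstab"
        fi
    done
}

# in the real script: prep_nfs_fstab /etc/fstab
```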

The /etc/network/if-down.d/unprepmountnfs counterpart only reverts /etc/fstab to its previous state. Yes, if you lose the connection to the NFS server, your X session will probably freeze. For the reasons previously stated, for now it will have to do.


Syndicated 2011-09-15 21:23:49 from # cd /scratch

Wished I bought an Android/iOS based phone (4-2cal Bada OS’s widget)

I don’t care much about phones. I don’t give a toss about eye-candy fancy stuff. It’s a tool and only phone-related features matter to me. Also, I tend to break things, obviously because I’m careless (yes, I don’t care much about phones, as already stated). So when picking a phone, I go for the easiest to carry: lightweight and small. If it’s robust, that’s a plus.

But so-called smartphones seem to get bigger and bigger over the years. So when my last phone died, I bought the cheapest and smallest smartphone available to me through my phone provider: a Samsung Wave 723. Not bad hardware-wise. Sure, there is better hardware around, but every piece of hardware is, or soon will be, bested in one way or another: as long as it’s good enough, it’s good. No, the obvious drawback is the OS it came with.

The OS is called Bada OS and seems to be one more Linux-based OS, developed in-house by Samsung. It depends on Libre Software but it’s not obvious that it is itself composed of such software. Well, two weeks after I got this phone, Samsung announced they would no longer provide OS upgrades for the Samsung Wave 723 and several others of the series. Brand new and already obsolete, the Wave 723 is stuck with Bada OS 1.1 while Bada OS 1.2 provides T9 trace, which is not exactly what I consider frivolous.
Clearly, I’ll take this policy from outer space into account next time I buy a phone.

4-2cal / Bada OS

Nonetheless, I’m now stuck with Bada OS and I needed a specific calendar app (highlighting 6-day work weeks). Bada OS provides “apps” and “widgets”; it’s unclear to me how pertinent this distinction is. Whatever. I needed something that sticks to the phone desktop, so that’s what is called here a “widget”, in fact some kind of HTML/JavaScript packaged in a zip with the .wgt extension.
I wrote 4-2cal.wgt and I can now also acknowledge that I have not seen such a poorly documented OS/environment in a very long time. It is quite time-consuming to be forced to second-guess the specifics/quirks of the JavaScript implementation.
So, next time, I’ll definitely think twice before wasting time with a Bada OS-based phone.

Syndicated 2011-07-08 14:47:37 from # cd /scratch

Minimalistic BitTorrent client over NFS/Samba: with Transmission 2.x

I previously released a script to use transmission (BitTorrent client) over NFS/Samba.

This script was written for Transmission 1.x; I updated it for Transmission 2.x. It’s a hack more than anything else, just a wrapper around transmission-remote, the official RPC client.

It works as before. You put $file.torrent in a watchdir, the script runs (as a cronjob), creates $file.trs (containing info about the download) and starts the download. Rename it to $file.trs- to pause the download, remove $file.trs to stop it. When the download is finished, you get a mail (if cron is properly set up).
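Concretely, driving a download is just file operations in the watchdir (paths as in my setup):

```shell
# start a download: drop the .torrent into the watchdir
cp file.torrent /home/torrent/watch/
# pause it: rename the .trs status file the script created
mv /home/torrent/watch/file.trs /home/torrent/watch/file.trs-
# stop it (when not paused): remove the status file
rm /home/torrent/watch/file.trs
```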

Due to progress made by transmission devs, the install process is even simpler.

1) Set up watch and download dirs as before.

2) Install/Upgrade to transmission 2.x (packages cli and daemon).

3) [this makes sense only if you used the previous version] Debian now starts Transmission as the debian-transmission user. Trying to keep using the torrent user causes the init script to fail and, in the long run, it’s best anyway to use the user the Debian maintainers provide. To easily switch to this new user, I removed the new debian-transmission entries from /etc/passwd and /etc/group and then replaced torrent with debian-transmission (except the /home path, obviously) in both these files (and also updated /etc/cron.d/torrent). Finally, I ran chown debian-transmission:debian-transmission /var/lib/transmission-daemon /etc/transmission-daemon/settings.json.

4) Update the transmission-daemon config. Make sure the daemon is down beforehand, otherwise your changes won’t stick. Then edit /etc/transmission-daemon/settings.json. I changed:

"blocklist-enabled": true,
"download-dir": "/home/torrent/download",
"message-level": 0,
"peer-port-random-on-start": true,
"port-forwarding-enabled": true,
"rpc-authentication-required": false,

5) Install the script torrent-watch.pl, test it:

cd /usr/local/bin
wget http://yeupou.free.fr/torrent2/torrent-watch.pl
chmod a+x torrent-watch.pl
su debian-transmission
torrent-watch.pl
cat status

6) Set up cronjob and log rotation:

* * * * * debian-transmission cd ~/watch && /usr/local/bin/torrent-watch.pl

/etc/logrotate.d/torrent:

/home/torrent/watch/log {
weekly
missingok
rotate 2
nocompress
notifempty
}

Then you should be fine :-)


Syndicated 2011-05-05 16:52:36 from # cd /scratch

Switching from NFSv3 to NFSv4

Today, I switched over to NFSv4. I guess they published it for some reason, and people claim it can increase file transfer rates by 20%.

In my case, to get it working properly, I…

Modified /etc/default/nfs-kernel-server on server side to have

NEED_SVCGSSD=no

Modified /etc/default/nfs-common on both clients and server side to have

NEED_STATD=no
NEED_IDMAPD=yes
NEED_GSSD=no

Modified /etc/exports on server side to have something starting by


/server 192.168.1.1/24(ro,fsid=0,no_subtree_check,async)

/server/temp 192.168.1.1/24(rw,nohide,no_subtree_check,async,all_squash,anonuid=65534,anongid=65534)

[...]

It forces you to set a root for the NFS server, aka fsid=0 – in my case /server (which I already had in my NFSv3 setup, so…).
You also need to specify nohide on every export.

Modified /etc/fstab on the client side to set the mount type to nfs4 and to remove the /server part from the paths, no longer necessary as paths are relative to fsid=0, which is /server. It gives entries like:

[...]
gate:/temp /stockage/temp nfs4 nolock 0 0
[...]

I had an export which was a symlink to somewhere in /home. NFSv4 is stricter than NFSv3 and there is no way to export something outside of fsid=0. So I made a bind mount, adding to /etc/fstab on the server side:

[...]
/home/torrent/watch /server/torrent/watch none bind 0 0
[...]

After restarting nfs-kernel-server on the server side and nfs-common on both sides, umounting the NFS partitions and doing a mount -a on the client side, everything seems fine.
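For reference, that restart/remount dance boils down to something like this (Debian init scripts, run as root on the relevant side):

```shell
# server side: restart the NFS server and double-check the exports
/etc/init.d/nfs-kernel-server restart
exportfs -v
# client side: remount every nfs4 entry from the updated fstab
umount -a -t nfs4
mount -a -t nfs4
```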


Syndicated 2011-01-06 18:03:19 from # cd /scratch

Package: amarok 2.4beta1 for Debian testing

Amarok is a nice music player for KDE. Inspired by the iTunes interface, it features a clever random mode, provides Wikipedia/lyrics/photos pages for the currently played song, handles MTP devices and works with last.fm so you can have online stats of what you listen to and find people who listen to the same crap too. That’s definitely nice software. But, unfortunately, it’s quite buggy.

This morning, Amarok would not start, no matter what. Well, it started once I erased .kde/share/config/amarok and .kde/share/apps/amarok. Then, shortly afterwards, it failed to start once more. I’m not quite prepared to remove my Amarok config twice per day. So I decided I would just give a shot to a more recent version, and the first 2.4 beta around seemed a good pick.

Here’s amarok 2.4beta1 (2.3.90) packages for Debian testing amd64.

It was built from amarok’s Debian experimental directory, with the following changes: a new entry added to debian/changelog; usr/share/doc/kde/HTML/* removed from debian/amarok-common.install; usr/lib/strigi* removed from debian/amarok.install; usr/lib/kde4/*.so and usr/lib/*.so* added to debian/amarok.install; an override_dh_shlibdeps: target added to debian/rules; and all patches removed from debian/patches/series. Combined with the latest official amarok source package (renamed amarok_2.3.90.orig.tar.bz2), the packages were produced by running dpkg-buildpackage -rfakeroot inside the amarok-2.3.90 directory (the decompressed source tarball), which also contains the debian directory.
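Schematically, the build went like this (a sketch, with the directory names as described above; the debian/ directory is the edited one from Debian experimental):

```shell
tar xjf amarok_2.3.90.orig.tar.bz2
cd amarok-2.3.90
# drop the edited debian/ directory in here, then build the packages
dpkg-buildpackage -rfakeroot
```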


Syndicated 2010-12-30 15:51:55 from # cd /scratch

Keeping the dpkg installed software database clean

The system on my workstation was installed in December 2008. Actually, I installed the Debian AMD64 version over an i386 version on the same box, which had been installed around 2003.

Debian ships tools that make it easy to keep a clean system. For instance, debfoster allows you to easily get rid of all no-longer-necessary libraries and the like: you just have to select the important pieces and it will remove any software that is not required by one of these. And apt-get nowadays, just like deborphan used to, even warns you when some software is no longer required, and provides the autoremove command line argument that does the job automatically.

(debfoster is, supposedly, deprecated, like apt-get is in favor of aptitude. Well, I like debfoster.)

That being said, if I run dpkg --list | grep ^r | nl | tail -n 1 on this box, after only one year I get 617 lines about removed software I do not care about. Mostly, they were kept in the dpkg database because I (or the system on my behalf) modified their conffiles. The following will clean this up:

for package in `dpkg --list | grep ^r | cut -f 3 -d " "`; do dpkg --purge $package; done && debfoster
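To illustrate what that cut extracts, here is the same filter run on a fake dpkg --list excerpt (sample lines, not real dpkg output): grep keeps packages whose status starts with r (removed, conffiles kept), and cut pulls the package name, field 3 because dpkg pads the two-letter status with two spaces.

```shell
# fake `dpkg --list` style lines: status, two spaces, name, version, arch
printf 'ii  keptpkg  2.0  amd64\nrc  oldpkg  1.0  amd64\n' |
  grep ^r | cut -f 3 -d " "
# → oldpkg
```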


Syndicated 2010-12-26 23:30:40 from # cd /scratch

Using partitions labels

Recent Linux versions (yes, I’m talking about the kernel here – Linux is not an operating system) introduce new IDE drivers. This implies a change in the device naming convention: instead of hda, hdb, etc., you get sda, sdb, etc., just like SCSI drives.

I have three hard disks in my main workstation – plenty of partitions. So in my case, it makes sense to use a unique identifier for each partition so nothing breaks whenever I add/remove a drive or boot an older kernel with the previous IDE drivers.

There are already unique ids for each partition, available through the command blkid. It returns unwieldy and meaningless, but very unique, ids like af8485cf-de97-4daa-b3d9-d23aff685638.

So it is best, for me at least, to label the partitions properly according to their content and physical disposition, which makes for unique ids too in the end.

For ext3 partitions, I just did:

e2label /dev/sda2 sg250debian64
e2label /dev/sda3 sg250home

For the swap, e2label cannot help, so the label is set with mkswap while recreating it:

swapoff /dev/sda1
mkswap -L sg250swap /dev/sda1
swapon -L sg250swap

For ntfs partitions, I did:

apt-get install ntfsprogs
ntfslabel /dev/sdb1 hi150suxor
ntfslabel /dev/sdb2 hi150suxor2
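To double-check a label before editing /etc/fstab, blkid can read it back:

```shell
# print the label of a given partition
blkid -s LABEL -o value /dev/sda2
```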

Then, /etc/fstab must be edited as:


LABEL=sg250swap none swap sw 0 0

LABEL=sg250debian64 / ext3 errors=remount-ro 0 1
LABEL=sg250home /home ext3 defaults 1 2

LABEL=hi150suxor /mnt/suxor ntfs-3g defaults,user,noauto 0 0
LABEL=hi150suxor2 /mnt/suxor2 ntfs-3g defaults,user,noauto 0 0

Finally, the grub (or any other boot loader) config should be updated to reflect that. However, unless I’m mistaken, with grub2 as shipped by Debian, everything is generated using scripts that do not seem to handle labels :(


Syndicated 2010-11-11 23:38:24 from # cd /scratch
