Older blog entries for yeupou (starting at number 150)

Cleaning up ogg/mp3 collection (tags, filenames) with lltag

Over the years, my music collection got annoyingly inconsistent (file names, tags, etc.). I wrote two scripts to clean it up into the form maindir/MusicGenre/Band/Album/songs. The first one identifies albums from the files; the second one does the actual job, as an lltag wrapper. The point of splitting this into two distinct scripts is to separate the part where user input is needed from the part that requires none but takes most of the CPU time.

Assuming there’s an initial directory that contains one subdirectory per music album to be sorted out:

  • cleanup-music-directory-01-identify.pl writes an import file (containing style|band|year|album, only the year being optional) in each subdirectory, according to your input. You’ll notably have to select a music genre.
  • cleanup-music-directory-02-rename.pl reads the import files and then uses lltag to do the actual job, renaming and updating tags. Best is to run it in --debug mode first, which only shows the proposed changes without altering anything yet; if some of your files lack the TITLE tag, it can get messy.

Both scripts must be edited first (path to the collection and the user who is supposed to retain ownership of the files); a hypothetical example run follows.
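
For illustration only (the genre, band and album names below are made up), an import file following the style|band|year|album format described above, and a typical sequence of commands, might look like:

# content of a hypothetical Album subdirectory's import file
Rock|Some Band|1994|Some Album

# first pass, interactive: writes one import file per album subdirectory
./cleanup-music-directory-01-identify.pl

# second pass: dry run first, then the actual renaming/retagging
./cleanup-music-directory-02-rename.pl --debug
./cleanup-music-directory-02-rename.pl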


Syndicated 2012-04-18 09:32:45 from # cd /scratch

Using a laptop as alarm clock

My alarm clock died long ago. Since then, I have used my cellphone to wake me up. Works OK, except that my current cellphone is total crap and, among numerous issues, its alarm software just stays idle some mornings while, on the days it actually works, a simple movement shuts it off. Believe me, I checked everything and made plenty of tests; it’s just bad design and poorly coded software.

Not to mention that I usually wake up with no alarm; so when I use one, it means I must wake up early, probably without enough sleep. I need the real deal, high sound level and no shortcut to kill it, to actually get up.

Whenever I needed an alarm, I ended up running, on my laptop not too far from my bed, something like `sleep XXh XXm && mplayer /path/to/a/song`, checking the sound volume, followed by CTRL-C in the morning.

Two days ago, I was über-tired, I needed to wake up early the next morning, and calculating tomorrow’s waking-up time minus the current time just pissed me off, not to mention checking the volume level, mute setting and such. It pissed me off enough to get me to write a script to fix the problem. Here comes wakey.pl:

  • it takes as argument the time you’d like to wake up, in the form HH:MM or HHh MMm;
  • it can run as a timer (like sleep), useful if you want to take a 20-minute nap, with -t or --timer;
  • it wakes you up by playing a random song picked from ~/.wakey;
  • it uses mplayer to play the song, so it can be in any format your mplayer supports;
  • it progressively raises the sound volume while trying to wake you up (you can set --volume-max, in case 100% on the Master mixer is too loud) and properly resets the mixer settings when finished (a rough sketch of the idea follows this list);
  • it won’t stop playing the music until you type a 3-to-5-character word randomly taken from the default dictionary installed on your system (/usr/share/dict/words).
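
Just to illustrate the progressive volume raise, here is a rough sketch, not wakey.pl’s actual code, assuming amixer and a Master control are available:

# raise Master step by step, pausing between steps
for level in 40 50 60 70 80 90 100; do
    amixer -q set Master "${level}%"
    sleep 5
done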

I wanted it to deal with any powersave setup, to make sure the laptop is forbidden to sleep or hibernate, but I found no portable and clean way to do it (my laptop uses KDE with PowerDevil). I’d be happy to hear about any clue/lead in that regard.

# (this assumes wakey.pl is executable and in $PATH)
# wakes you up next time it's 6 in the morning:
wakey.pl 06:00

# the same
wakey.pl 6h

# wakes you up exactly in 15 minutes
wakey.pl -t  :15

# the same
wakey.pl 15m --timer

# the same, but makes sure the sound volume won't exceed 70%
wakey.pl 15m -t -v 70

To run it, make sure you have the Debian packages libfile-homedir-perl and libterm-readkey-perl installed. You’ll also need mplayer and amixer properly set up.
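
On Debian and derivatives, something like the following should pull everything in (amixer ships in the alsa-utils package):

apt-get install libfile-homedir-perl libterm-readkey-perl mplayer alsa-utils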


Syndicated 2012-02-22 10:32:21 from # cd /scratch

Moving a live system from one hard disk to another

Ever found yourself in the situation where you want to move your GNU/Linux system from an old hard disk to a new one? Well, it can be done quite easily :-)

First, set up the new partitions with parted, then mkswap and mkfs (using proper labels). Yes, I assume you’re familiar with these (RTFM).

Mount the new root partition somewhere, like /mnt/tmp in this article.
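
For instance, assuming the new disk shows up as /dev/sdc (just an example, as later in this article) and picking arbitrary labels, the partitioning and mount could look like:

parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary linux-swap 1MiB 2GiB
parted /dev/sdc mkpart primary ext4 2GiB 100%
mkswap -L newswap /dev/sdc1
mkfs.ext4 -L newroot /dev/sdc2
mkdir -p /mnt/tmp
mount LABEL=newroot /mnt/tmp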

Create, in this new partition, all the directories that it would not make sense to copy from the original system (in my case, home is on another partition and stockage contains only NFS mounts):

cd /mnt/tmp
mkdir dev  home  proc  stockage  sys  tmp mnt

Shut down any daemon/service that is running (cron, etc.), to avoid copying stuff in an inconsistent state.

Then, actually copy the system:

# copy every top-level directory not already created above;
# -a preserves everything, -x stays on the current filesystem
for dir in /*; do if [ ! -e /mnt/tmp/$dir ]; then cp -ax $dir /mnt/tmp/; fi ; done

Edit /mnt/tmp/etc/fstab to use the newly created partitions.
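
With the labels chosen above (again, just an example), the relevant lines of /mnt/tmp/etc/fstab could simply become:

LABEL=newroot   /       ext4    errors=remount-ro   0   1
LABEL=newswap   none    swap    sw                  0   0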

Chroot in the new system to make it bootable with grub:

mount --bind /dev /mnt/tmp/dev
mount --bind /sys /mnt/tmp/sys
mount proc -t proc /mnt/tmp/proc
chroot /mnt/tmp
grub-mkdevicemap
update-grub
# (you can run blkid to check that the UUID of the new root
# shows up in the new system's /boot/grub/grub.cfg)
grub-install /dev/XX  # where XX is the new disk, like /dev/sdc or whatever

Reboot on the new system (stating the obvious: change the boot drive order in the BIOS). If everything is fine, then copy /home from the old disk to the new partition, without logging in as any regular user (CTRL-ALT-F2 to leave the X server and log in as root, for example).

After removing the old device, re-run update-grub so it’ll no longer show up. The end.


Syndicated 2012-02-20 15:35:24 from # cd /scratch

Getting accurate temperature reading for the CPU

On my main workstation, lm-sensors provides apparently contradictory temperature readings for the CPU, depending on the sensor:

radeon-pci-0200
Adapter: PCI adapter
GPU Temperature:  +62.0°C  

k10temp-pci-00c3
Adapter: PCI adapter
CPU Temperature:  +17.0°C  (high = +70.0°C)
                           (crit = +70.0°C, hyst = +68.0°C)

atk0110-acpi-0
Adapter: ACPI interface
[...]
CPU FAN Speed:          1890 RPM  (min =    0 RPM)
[...]
CPU Temperature:         +32.0°C  (high = +90.0°C, crit = +125.0°C)
MB Temperature:          +42.0°C  (high = +45.0°C, crit = +90.0°C)

17°C, as reported by the CPU sensor, seems very low, especially as the temperature of the room the computer sits in is already at least 17°C. Clearly, the motherboard sensor readings (atk0110 / IT8716F chip), the same as what the BIOS reports, are more sensible.

There’s actually a lot of misinformation on the web. For instance, the author of CoreTemp, a proprietary piece of software for MS Windows that provides CPU temperature readings, states on his front page that “all major processor manufacturers have implemented a DTS (Digital Thermal Sensor) in their products. The DTS provides more accurate and higher resolution temperature readings than conventional onboard thermal sensors”. Possibly, probably, right: k10temp may be more accurate than atk0110. However, on his forum, the same author replies to a user asking about inconsistencies in CPU temperature readings, a user clearly interested in real and not relative temperatures (he wrote: ”I’m running water cooling and the temps aren’t high during load but just wondering about the accuracy”, and a high temperature is meaningless on an undefined relative scale), that “I’d say that Core Temp is more accurate, especially at higher temperatures. The ASUS programs sensors are based on the motherboard and depend on an external chip. The sensors Core Temp reads are located in the CPU itself and the values are read directly from the CPU registers.” He clearly shows a misunderstanding of what the superior accuracy of CPU sensors really means.

As documented by AMD and mentioned in the k10temp Linux module doc, “[k10temp] is the processor temperature control value, used by the platform to control cooling systems, [...] is a non-physical temperature on an arbitrary scale measured in degrees, [...] does not represent an actual physical temperature like die or case temperature. Instead, it specifies the processor temperature relative to the point at which the system must supply the maximum cooling for the processor’s specified maximum case temperature and maximum thermal power dissipation”.

I was about to publish this article without paying attention to Intel sensors, but a quick search led me to something even worse: a comment about Core Temp in a doc titled CPU Monitoring with DTS/PECI stating: “These tools provide a convenient way to see the temperature variation reported by the sensor [...] There are several issues with these tools. First the assumed value for Tj may not be correct and thus impact the accuracy of actual temperature reporting. Secondly the DTS is only accurate when in the adjacency of Tj. Not knowing the intention and effective range of DTS, the tools try to compensate with the inaccuracy of low temperature reading, which may not be a correct interpretation.”

However accurate they may be, the relative readings provided by CoreTemp for AMD K10 are almost meaningless to an end user (while great for the system, for fancontrol and such), who likely expects to be able to compare them to other readings (motherboard, hard disk, etc.). In my case, surely k10temp means something (17°C is low) but it makes no sense to compare it to the room (20°C), GPU (62°C), PATA hard disk (39°C), motherboard (42°C) or any other temperature. In short, unless you know exactly what you’re doing, use the motherboard sensors; and if you’re looking for an alternative to CoreTemp, try Open Hardware Monitor.


Syndicated 2012-02-18 15:28:31 from # cd /scratch

RSS feeds: new layout for rawdog

Almost two years ago, I posted an article describing how I use rawdog, a minimalist RSS aggregator, to get, on my webserver, an HTML output of my Akregator aggregated feeds. Since then, I have changed the layout:

  • articles are no longer shown in four columns,
  • article descriptions are provided directly on the page and no longer on mouse-over of the title,
  • there are now several index pages, one per day (as many as necessary to reach the article limit, set to 950), using the dated-output plugin.

I won’t re-describe the whole setup; the relevant files to set up this new rawdog layout are here. On my webserver, everything goes in /home/rawdog, using the user rawdog (group www-data). Obviously, crontab is actually /etc/cron.d/rawdog and should be edited to refer to the proper local users.
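
For reference, a typical /etc/cron.d/rawdog entry could look like the following (hypothetical path and timing; rawdog's -u flag fetches the feeds and -w writes the HTML output):

*/10 * * * *   rawdog   /usr/bin/rawdog -d /home/rawdog/.rawdog -u -w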

By the way, it won’t harm, even if unnecessary by default (it could prove useful if, by any chance, your server is configured to interpret Perl .pl or Python .py files), to restrict access to the rawdog subdirectories that contain scripts, for instance by adding, for nginx, statements like these in the server config:

    location /rss/scripts { deny  all; }
    location /rss/plugins { deny  all; }

Syndicated 2012-02-03 08:27:58 from # cd /scratch

Using RAM for transient data

When a system has lots of I/O, trouble may arise. If a mechanical hard drive is über-solicited, you may quite easily get many kinds of failures and high CPU load, just because of I/O errors. In such a case, using RAM as a disk, aka a RAM disk, may be a good option, as it allows way more I/O than a mechanical hard drive. Solid State Drives (SSD) partly address this issue, but they still seem to have way higher access times and latency than RAM. A RAM disk, on the other hand, is non-persistent (unlike an SSD), quite an annoying drawback: even if you write some scripts to save data, you will lose some in case of power failure.

A RAM disk is actually especially appropriate for temporary data, like /var/run, /var/lock or /tmp. Linux >= 2.4 supports tmpfs, a kind of RAM disk that (as far as I understand) does not reserve blocks of memory (meaning it does not matter if you have a big tmpfs: memory unused by the tmpfs is still available to the whole system).

Most of my computers have more than 1 GB of RAM. And, most of the time, they never use the swap space. For instance (the relevant columns are si and so, for swap in and swap out):

bender:$ vmstat
 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 4146984 674704 1309432    0    0     6     9    3   34  2  1 97  0

nibbler:$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 862044  23944  84088    0    0    10     0   42   22  0  0 99  0

moe:$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 280552 166884 1297376    0    0     7    58   73   12  8  2 90  1

So they are good candidates to use tmpfs whenever possible. Doing so with Debian GNU/Linux is straightforward. Just edit /etc/default/rcS as follows (for /var/run & /var/lock):

RAMRUN=yes
RAMLOCK=yes

and add, in /etc/fstab (for /tmp):

tmpfs             /tmp     tmpfs     defaults    0    0
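
If you want to cap how much RAM /tmp may use (by default a tmpfs can grow up to half of the physical RAM), the size option can be added, for instance:

tmpfs             /tmp     tmpfs     defaults,size=512m    0    0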

The next time you boot, df should provide you with something like:

  $ df
Filesystem           1K-blocks      Used Available Use% Mounted on
tmpfs                  1033292         0   1033292   0% /lib/init/rw
varrun                 1033292       648   1032644   1% /var/run
varlock                1033292         0   1033292   0% /var/lock
tmpfs                  1033292         4   1033288   1% /dev/shm
tmpfs                  1033292         0   1033292   0% /tmp

Syndicated 2012-01-30 13:57:36 from # cd /scratch

Booting on a SATA drive with ASUS K8N4-E Deluxe mainboard

I noticed that the ASUS K8N4-E Deluxe mainboard, provided with Phoenix Technologies ACPI BIOS Revision 1008, simply won’t boot from a SATA drive while it does fine from a PATA (IDE) one. Not sure which specific part is at fault here.

 00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
 00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
 00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
 05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)

It boots with the following Linux kernel options: irqpoll nolapic apm=power_off. I added them to /etc/default/grub before regenerating grub.cfg:

GRUB_CMDLINE_LINUX_DEFAULT="quiet irqpoll nolapic apm=power_off"
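
Then, as usual on Debian, regenerating grub.cfg is just a matter of:

update-grub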

Note, however, that shutdown does not physically power off the system. I’ll look into that later.


Syndicated 2012-01-27 19:09:20 from # cd /scratch

Avoiding Spams with SPF and greylisting within Exim

A year ago, I posted an article describing a way to slay spam with both Bogofilter and SpamAssassin embedded in Exim. This method has proven effective for my mailboxes: since then, over a timespan of one year, Bogofilter caught ~85% of actual spams, and SpamAssassin (called only if a mail was not already flagged unfavorably by Bogofilter) caught ~15%. Do the math: I had almost none to flag by hand.

Why would I change such a setup? For fun, obviously :-)

Actually, I made no change to that part; I just implemented SPF (Sender Policy Framework) and greylisting.

I noticed that plenty of spams were sent to my server @thisdomain claiming to be sent by whoever@thisdomain. These dirty spams were easily caught by the Bogofilter / SpamAssassin duo but, still, it annoyed me that @thisdomain was misused. SPF allows listing, in DNS records, which servers/computers are allowed to send mail from addresses @thisdomain. SPF checks are predefined in Exim out of the box, so I’ll skip that part of the configuration. The relevant DNS record (with bind9), allowing only two boxes (the primary and secondary mail servers) designated by their IPs to send mail @thisdomain, looks like:

thisdomain. IN  TXT  "v=spf1 ip4:78.249.xxx.xxx ip4:86.65.xxx.xxx -all"
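
Once the zone is reloaded, the published record can be double-checked from any host with dig (from the dnsutils package), for instance:

dig +short TXT thisdomain.
# should output: "v=spf1 ip4:78.249.xxx.xxx ip4:86.65.xxx.xxx -all"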

Result: since I implemented SPF on my domains, there has been no change in the number of spams caught. However, during this period, my primary server’s list of temporarily banned IPs dropped from 200/100 IPs to 40/20 IPs. I cannot pinpoint with certainty the cause of this evolution, because the temporary-ban list depends on plenty of things. But pretending to send mail @thesedomainsgrilledbySPF surely lost some of its appeal for spambots. Implementing SPF is actually not about helping ourselves directly but indirectly: reducing the effectiveness of spambots helps everybody.

I have been using greylisting on my secondary mail server for a while, and I noticed over the years that this one almost never had to ban IPs. Not that it never received spam, but it almost never received mail from the very obvious spam sources identified at SMTP time. It seems that most very obvious spam sources never insist enough to get through greylisting. I guess most spambots are coded to skip any mail server that does not immediately accept a proper SMTP transaction, because they have no time to waste, considering how small the percentage of sent spams actually reaching someone real is.

This greylisting setup uses the following files (and assumes memcached and libcache-memcached-perl are properly installed):

So I gave greylisting a try on my primary mail server too, but with a very short waiting time, because 5 minutes, for example, to receive mail from a not-yet-known source is not acceptable. So I edited the relevant conf.d/main/ file to set GREY_MINUTES = 0.5 and GREY_TTL_DAYS = 25.

Result: no change in the number of caught spams. However, like on the secondary mail server, the number of banned IPs is now close to none. It looks like most obvious spam sources won’t wait even 30 seconds; actually, that is quite a sensible choice on their part, as they would be banned anyway if they did.


Syndicated 2011-12-09 01:21:43 from # cd /scratch

Automounting NFS shares using if-up.d/if-down.d

I have enjoyed NFS for many years. But, with laptops, which by essence are not always connected to the same local network, it’s quite a pain in the ass.

Editing /etc/fstab each time you connect is not really an option. AutoFS seemed like a great idea. It took me a while to get the damn thing running, and it failed to work after a reboot. After spending plenty of time googling around and fooling around, I eventually reached the conclusion that I was not able to set it up in a reliable fashion. So I dropped the idea.

Then I put some hope in the DHCP client. It provides hooks, with plenty of variables useful to determine which network you are connected to. I gave it a try. It worked well: a script in dhclient-exit-hooks.d to mount the NFS shares after the interface is brought up on the LAN, and its counterpart in dhclient-enter-hooks.d to umount the NFS shares just before the interface goes down.

Then I realized that one of the two laptops supposed to make use of this script runs Ubuntu (it is not mine). And Ubuntu uses by default a very, very nasty piece of software called NetworkManager, the kind of well-thought-out user interface that stores configuration in anything but the standard places that worked fine before it even existed. Yeah, it literally makes a mess of /etc/network/interfaces. So, no, obviously, properly handling /etc/dhcp/dhclient-*-hooks.d/ scripts is not an option for NetworkManager; it’s so much better to reinvent the wheel with a poorly designed (And What’s The Deal With These Upper Cases?) /etc/NetworkManager/dispatcher.d.

Plenty of people have already complained about the obvious limits of NetworkManager. Sure, there is room for improvement and it’s better to contribute than just to rant. But considering the kind of replies the NetworkManager people give to bug reports (because, really, not handling dhclient-*-hooks.d is a regression), I think I’ll pass. Funny links though: “After discussing with a few folks we found that pre-up will not come back … please provide detailed infos for your use-case as we have to find other means to achieve this.” (hum, they… found that useful working features will not come back but haven’t found any better alternative yet?), “if the resolvconf abilities are not enough you can also stuff in a NM dispatcher.d script (see: /etc/NetworkManager/dispatcher.d/)” (please, have fun writing new scripts to replace the ones that worked just fine). In fact, when developers handle an issue like this, “Changed in network-manager (Fedora): status: Confirmed → Won’t Fix”, the best is just to find a workaround that does not rely on their stuff at all, as it is sure to be broken some other day; no doubt that if a new trend comes, they’ll ask you to rewrite your scripts one more time just to do the same frickin’ thing you were able to do years ago with simple dhclient-*-hooks.d scripts.

So I finally came up with /etc/network/if-up.d and /etc/network/if-down.d scripts. It’s quite standard and, oh!, NetworkManager has a “dispatcher” that run-parts this directory. The obvious drawback is that it cannot be used to properly unmount the NFS shares, because it’s unclear whether NetworkManager runs if-down.d before or after having brought down the network interface and, also, because it’s way more painful to determine whether the loss of the current interface means losing the relevant network where the shares are (if you lose the Wifi, you may very well still be properly connected to the LAN). And I’m done trying to guess how this piece of software behaves now and how it will behave in 6 months.

Instead of hardcoding the list of NFS shares in one more script, and considering that initscripts already provides a well-thought-out /etc/network/if-up.d/mountnfs, I figured I would simply rely on /etc/fstab. My /etc/network/if-up.d/01prepmountnfs (which must run before initscripts’ mountnfs) simply goes through /etc/fstab, looks for NFS shares that are in noauto mode (so, not configured to be mounted automatically when the box starts) and finds out whether the server exists on the current LAN. If so, it removes the noauto option and then initscripts’ mountnfs does its magic (a rough sketch of the idea follows the snippet below). On Ubuntu, there is no /etc/network/if-up.d/mountnfs, but the following is enough to replace it:


echo "#!/bin/sh
mount -a" > /etc/network/if-up.d/mountnfs
chmod a+x /etc/network/if-up.d/mountnfs
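
For the record, here is a rough, hypothetical sketch of what such a 01prepmountnfs could look like; this is not my actual script, and it assumes the shares are listed in /etc/fstab with the noauto option and that the NFS server answers ping:

#!/bin/sh
# enable noauto NFS shares from /etc/fstab whose server answers on the current LAN,
# so that the regular mountnfs (or the "mount -a" above) picks them up afterwards
grep -E '^[^#].*[[:space:]]nfs4?[[:space:]].*noauto' /etc/fstab > /tmp/nfs-noauto.$$
while read -r device mountpoint fstype options dump pass; do
    server=${device%%:*}                      # host part of host:/export
    if ping -c 1 -W 1 "$server" >/dev/null 2>&1; then
        # drop the noauto option for this share (assumes it is written ",noauto")
        sed -i "\|^$device[[:space:]]|s/,noauto//" /etc/fstab
    fi
done < /tmp/nfs-noauto.$$
rm -f /tmp/nfs-noauto.$$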

The /etc/network/if-down.d/unprepmountnfs counterpart only reverts /etc/fstab to its previous state. Yes, if you lose the connection to the NFS server, your X session will probably freeze. For the reasons previously stated, for now it will have to do.


Syndicated 2011-09-15 21:23:49 from # cd /scratch

Wished I had bought an Android/iOS-based phone (4-2cal, a Bada OS widget)

I don’t care much about phones. I don’t give a toss about eye-candy, fancy stuff. A phone is a tool and only phone-related features matter to me. Also, I tend to break things, obviously because I’m careless (yes, I don’t care much about phones, as already stated). So when picking a phone, I go for the easiest to carry: lightweight and small. If it’s robust, that’s a plus.

But so-called smartphones seem to get bigger and bigger over the years. So when my last phone died, I bought the cheapest and smallest smartphone available to me through my phone provider. It’s a Samsung Wave 723. Not bad hardware-wise. Sure, there is better hardware around, but every piece of hardware is or will soon be bested in one way or another: as long as it’s good enough, then it’s good. No, the obvious drawback is the OS it came with.

The OS is called Bada OS and seems to be one more Linux-based OS developed in-house by Samsung. It depends on libre software but it’s not obvious that it is itself composed of such software. Well, two weeks after I got this phone, Samsung announced they would no longer provide OS upgrades for the Samsung Wave 723 and several others of the series. Brand new and already obsolete: the Wave 723 is stuck with Bada OS 1.1 while Bada OS 1.2 provides T9 trace, which is not exactly what I would consider frivolous.
Clearly, I’ll take this policy from outer space into account next time I buy a phone.

4-2cal / Bada OS

Nonetheless, I’m now stuck with Bada OS and I needed a specific calendar app (highlighting 6-day work weeks). Bada OS provides “apps” and “widgets”; it’s unclear to me how pertinent this distinction is. Whatever. I needed something that sticks to the phone desktop, so that’s what is called here a “widget”, in fact some kind of HTML/JavaScript packaged in a zip with the .wgt extension.
I wrote 4-2cal.wgt and I can now also say that I have not seen such a poorly documented OS/environment in a very long time. It is quite time-consuming to be forced to second-guess the specifics/quirks of the JavaScript implementation.
So, next time, I’ll definitely think twice before wasting time with a Bada OS-based phone.

Syndicated 2011-07-08 14:47:37 from # cd /scratch
