Older blog entries for yeupou (starting at number 160)

Accessing video/audio from a computer with a Freebox (using UPnP)

Nowadays, you would think that getting a network device like a TV box to access data from your unix-based system would be straightforward. I mean, we have NFS and Samba; how hard could it be to devise an interface that accesses at least one of the many network shares, given a user/password or based on its IP?

But no, it won’t work that way. That’s supposedly too complex, so instead people promote zeroconf and the like, stuff supposed to work out-of-the-box that may actually not work at all. For instance, to access movies/music from your computer with your Freebox HD (TV box) v5, and I assume it’s the same with many similar boxes from other ISPs, you can forget about using NFS/SMB/HTTP or whatever protocol you already had working or consider easy to set up. No, you’ll have to use UPnP, standing for Universal Plug and Play, words that all too often mean Plug and Pray instead.

From the computer…

So, let’s get our hands dirty. The setup I’m working with is quite simple: a Freebox HD v5 and a single computer with a single user having some videos, some with subtitles (mostly .srt), and audio files. It’s basic, but it took me a while to figure it out.

I tested plenty of UPnP servers. I tried MediaTomb but it did not work – plus the gothic interface seemed weird. I tried XBMC and it worked nicely, but only to show empty folders, and there was no obvious way to keep it up without the CPU-consuming interface. Then I installed MythTV and I did not even understand how it is supposed to work with regard to UPnP.

So I tried minidlna, the lightweight one I had avoided from the start because it’s not known to properly support subtitle files. And, tada!, it actually works almost out-of-the-box. Yeah, almost. That’s the funny thing: even if you claim to aim for zeroconf, when it comes to sharing files, at some point you still need to list what you actually wanna share. Whatever. So I apt-got minidlna. Then I edited /etc/default/minidlna:
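(The snippet itself did not survive syndication; given the chown below, it presumably made the daemon run as that user, along these lines – the variable names are from Debian’s stock default file, the values are my assumption:)

```
# /etc/default/minidlna (sketch)
# run the daemon as the box's only legit user rather than the
# dedicated minidlna account, so his files are readable without fuss
USER="thisguy"
GROUP="thisguy"
```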


As there’s only one legit user on this box, I wanted the daemon to be able to access his files with no specific consideration for privileges/ownership. That then implied doing:

chown thisguy.thisguy /var/lib/minidlna -Rv

Then I modified /etc/minidlna.conf as follows:
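(This configuration did not survive syndication either; a plausible sketch, with illustrative paths and friendly name, would be:)

```
# /etc/minidlna.conf (sketch; paths and friendly_name are illustrative)
# V = video only, A = audio only
media_dir=V,/home/thisguy/videos
media_dir=A,/home/thisguy/music
friendly_name=thisguy-computer
# pick up new files automatically
inotify=yes
```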



… to the box

I restarted the daemon (and made sure it’s included in /etc/rc2.d). And that’s all. (I also modified the firewall setup, but I’m not sure that’s relevant, considering that UPnP advertises itself to other devices when it comes up, not the other way around – so the firewall is an issue only if it blocks connections from the inside to the outside.)

By “that’s all”, I mean: it was enough to get access to movies on the Freebox HD, but not to the subtitles.

I googled around: the minidlna version I had was supposed to properly give access to .srt files along with videos. The Freebox HD itself supports .srt files with the same name as the video when you access videos over a USB device. But apparently plenty of UPnP implementations have no consideration for subtitle files, and the Freebox one is probably one of them. So having separate .srt or .sub files or whatever is a no-go.

Then I gave Matroska files (.mkv) a try, despite the fact that I have always had bad experiences with the format. Most notably, it usually implies videos costing tons of CPU time to decode and render, and video players usually fail to properly keep video in sync with audio – yeah, that’s really nasty. But Matroska allows embedding subtitles in the file without touching the video stream, which is neat. So I did that. Long story short, Matroska files freeze the Freebox HD nine times out of ten: and I’m talking about Matroska files no bigger than the original .avi files that run well on the very same Freebox, Matroska files that run well on the computer with VLC or mplayer. So that’s a no-go too.

So I ended up with the worst solution: altering the original files with mencoder to burn the subtitles in. Yeah, it’s kind of definitive, and if you don’t want to spend hours of CPU time on it, it implies quality loss. But, at least, it works. So here it goes; I wrote the following script to ease the process, assuming that the video files along with their .srt were originally on a USB device called HERMES and then copied to thisguy’s home:



# where the videos (and .srt files) come from and go to -- these two
# values were lost in syndication, adjust them to your own setup
ORIG=/media/HERMES
DEST=/home/thisguy/videos

# go thru the list of videos
find "$ORIG" -name "*.avi" -or -name "*.mpg" -or -name "*.mp4" |
while read video; do
  # find out basename (path without extension) and keep the extension
  basename="${video%.*}"
  format="${video##*.}"
  endname=`basename "$basename"`-WS.$format

  echo "$endname"

  # use french subtitles in priority over english
  subtitle=""
  if [ -r "$basename"_en.srt ]; then subtitle="$basename"_en.srt; fi
  if [ -r "$basename"_fr.srt ]; then subtitle="$basename"_fr.srt; fi
  if [ -r "$basename".srt ]; then subtitle="$basename".srt; fi

  # no valid subtitle found at this point? skip the video
  if [ ! -r "$subtitle" ]; then continue; fi

  # now create the relevant directory if missing
  enddir=`dirname "$basename" | sed "s@${ORIG}@${DEST}@g;"`
  if [ ! -d "$enddir" ]; then mkdir -pv "$enddir"; fi

  # proceed only if the file is missing
  if [ -r "$enddir/$endname" ]; then continue; fi

  # if we reach this, go for it
  mencoder "$video" -subpos 92 -sub "$subtitle" -o "$enddir/$endname" -oac copy -ovc lavc
done


Yeah. Plug and play my ass.

Syndicated 2012-10-14 10:17:17 from # cd /scratch

Locking KDE (plasma) desktop

(Not talking about the whole desktop environment, just the desk, actually.) You have plasmoids and a consistent layout. Nice. Then you have end-users, a bit clueless. And after a few months, their desktop is an absolute mess and they don’t even know why; it’s not even like they wanted to change anything. But you had set “lock plasmoids”. So you’re obviously looking for a way to remove the “unlock plasmoids” option.

You can do so following this advice:

you’ll have to add [$i] in the first (blank) line into the plasma-desktop-appletrc

Great! Except it looks like a dirty hack so I wonder if it will still work in the long run. I’d gladly take any further advice.

Syndicated 2012-10-06 14:57:09 from # cd /scratch

The GNU/Linux desktop wasn’t killed by MDI’s failure with GNOME/Mono/HelixCode/Ximian/…

How Apple Killed the Linux Desktop titles ./ today. And it discusses Miguel De Icaza’s (MDI) latest thoughts.

Flashback: that was the guy promoting the to-be-coded Evolution and Nautilus versus the actually-running Balsa and many other decent GNOME 1.x apps. Eazel, a company created mainly by people from the proprietary software world, was alone in charge of Nautilus, and this file manager was set to be GNOME 2.x’s file manager without even one frickin’ pre-release. Proprietary development model all along: release (too) late, release rarely (never?). Aside from Eazel, GNOME was in the hands of Helix Code, MDI’s own company, later renamed Ximian. Nice icons, nice website, yeah. Aside from that, it’s funny enough to picture the GNU desktop project being in the hands of the same people that created and promoted Mono, considering the FSF’s (I think correct) opinion on Mono. The Wikipedia page doesn’t mention it, but Ximian authored some proprietary software too.

So, now, we should care about MDI’s latest thoughts on GNU/Linux and the desktop? If GNOME is a failure, it all started when he really took charge. If GNOME is a failure, it does not mean that KDE and the others are, and while he may be entitled to concede defeat for GNOME, he’s definitely not entitled to do so in the name of GNU/Linux (or Linux as he calls it, even if a kernel has really little to do with the desktop). This guy invented a thousand ways to fail, to show considerable lack of oversight and very low attachment to the idea of libre software. Now he feels entitled, one more time, to tell us what we should care about, which is not freedom apparently? Please, give us a break.

Syndicated 2012-08-29 14:08:44 from # cd /scratch

Switching from one to another soundcard using PulseAudio in KDE

Ever found yourself in the situation where you have some kind of home cinema connected to your mainboard soundcard and headphones connected through their own USB soundcard? It gets tremendously painful to handle if you regularly want to switch from the home cinema to the headphones without actually unplugging anything.

I wrote a very basic script that, out of two active soundcards, keeps only one active and switches from one to the other each time it is called. Note the script will not handle more than two cards. If you have more, you’ll have to hardcode them in the hash $cards_to_ignore.
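The script itself was linked from the original post; as a rough illustration of the idea (a sketch in plain shell with pactl, not the actual script), the toggle boils down to this:

```shell
#!/bin/sh
# toggle_sink: switch the PulseAudio default output between exactly two
# sinks and move already-playing streams over to the new one
toggle_sink() {
  # name of the current default sink
  current=$(pactl info | awk -F': ' '/^Default Sink/ {print $2}')
  # first sink whose name differs from the current default
  next=$(pactl list short sinks | awk -v cur="$current" '$2 != cur {print $2; exit}')
  [ -n "$next" ] || return 1
  pactl set-default-sink "$next"
  # move streams that are already playing onto the new sink
  pactl list short sink-inputs | while read -r id _; do
    pactl move-sink-input "$id" "$next"
  done
}
```

Bound to a keyboard shortcut, calling it repeatedly flips the default back and forth between the two cards.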

Obviously, you can easily add a keyboard shortcut for it by adding a new entry with kmenuedit.

Syndicated 2012-07-24 13:34:20 from # cd /scratch

Modifying preinst and postinst scripts before installing a package with dpkg

Ever found yourself in the situation where you’d like to ignore or edit a postinst or preinst script of a Debian package?

As Debian froze Wheezy, I decided it would be a good time to upgrade my home server, to help catch bugs and because it’s Sandy Bridge-based and its sensors are not well supported by Squeeze’s kernel. Unfortunately, I had weird stuff going on with EGLIBC: I had the 2.13 version installed from scratch, unknown to the dpkg database, while dpkg only knew about the cleanly installed 2.11. So the upgrade failed with:

A copy of the C library was found in an unexpected directory:
It is not safe to upgrade the C library in this situation;
please remove that copy of the C library or get it out of
'/lib/x86_64-linux-gnu' and try again.

dpkg: error processing libc6_2.13-33_amd64.deb (--install):
 subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:

Nasty. EGLIBC/GLIBC is a major piece of the system; you cannot simply “remove” it or “get it out” and expect the system to keep working. Moreover, in this specific case, these files were not truly an issue: they were about to be replaced during the upgrade process. But dpkg does not provide any means to ignore maintainer scripts (and probably never will). So one easy workaround is to unpack, edit, rebuild and install the package as follows:

aptitude download libc6
dpkg-deb --extract libc6_2.13-33_amd64.deb libc
dpkg-deb --control libc6_2.13-33_amd64.deb libc/DEBIAN

Then we can edit libc/DEBIAN/preinst (I commented out the exit 1 after the safety warning)

dpkg-deb --build libc
dpkg -i libc.deb

Yes, it’s fast :-)

Syndicated 2012-07-21 21:55:03 from # cd /scratch

Booting over the network to install the system

Do you still have CD/DVD players installed on your boxes? Well, I mostly don’t; why would I anyway?

Actually, apart from system installation, or access to the rescue mode of the system installer, there’s nothing you cannot do without one, and nothing that isn’t better done without one (few things are slower and noisier in today’s computers). And even that isn’t really true anymore: most mainboards now include an ethernet card capable of network booting, even if it hides behind confusing names like NVIDIA Boot Agent, for instance.

Usually, it supports the Preboot Execution Environment (PXE), which combines DHCP and TFTP. That’s nice, because it’s easy to run DHCP and TFTP servers with GNU/Linux. So here comes my PXE setup, using ISC DHCPD and TFTPD-HPA, both shipped by Debian.

As described in the README, on the server (you have a home server, right? *plonk*), put this PXE directory somewhere clever, like /srv/pxe for instance (yes, that’s what I did; but you can put it in /opt/my/too/long/path/i/cannot/remember if you really really want).

Run the gnulinux/update.sh script to get kernels and initrds. By default, it fetches debian and ubuntu stuff. If it went well, you should have several *-linux and *-initrd.gz files in gnulinux/ plus a generated config file named default inside pxelinux.cfg/
You may add a symlink to this script inside /etc/cron.monthly so you keep stuff up-to-date.
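For reference, the generated pxelinux.cfg/default typically looks something like this (labels and paths here are illustrative, not the exact output of update.sh):

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL debian-installer
  MENU LABEL Debian installer (amd64)
  KERNEL gnulinux/debian-linux
  APPEND initrd=gnulinux/debian-initrd.gz

LABEL local
  MENU LABEL Boot from local disk
  LOCALBOOT 0
```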

Then, you must install a “Trivial FTP Daemon” on your local server, which will serve, in the context of PXE, these files you just got:

apt-get install tftpd-hpa
update-rc.d tftpd-hpa defaults

Edit /etc/default/tftpd-hpa, especially TFTP_DIRECTORY setting (you know, /opt/my/what/the/…).

Finally, you must update your DHCP daemon so it advertises that we’re running PXE (the filename and next-server options). With ISC dhcpd, in /etc/dhcp/dhcpd.conf, for my subnet, I now have:

subnet 192.168.0.0 netmask 255.255.255.0 {  # example addresses, use your own subnet

  # PXE / boot on lan
  filename "pxelinux.0";
  # IP of the box running tftpd-hpa
  next-server 192.168.0.1;
}

Obviously, you won’t forget to do:

invoke-rc.d isc-dhcp-server restart
invoke-rc.d tftpd-hpa start

That’s all. Now on your client, go into the BIOS, look for “boot on LAN” or whatever crap it may be called (it varies greatly), and activate it. Then boot. It’ll do some DHCP magic to find the path to the PXE files, and the menu should show up on your screen at some point.

We can actually do plenty of things with this simple stuff. We could, for instance, use it to boot diskless terminals on a specifically designed distro.

Syndicated 2012-07-14 23:24:09 from # cd /scratch

Converting PDFs to multiple HTML pages with pdftk and pdftohtml

As already stated on this blog, Bada OS is total crap. Scripting is a mess, T9 is missing from original versions, and updating is not an available option depending on your phone (even if the phone is less than a year old). It keeps being absolutely worthless when it comes to reading PDFs. No matter what, even if you feed it a specifically cropped PDF with no margins, you’ll always end up with something not really readable: too big, too small, whatever. A pain in the ass.

I soon realized that, with such an appalling combination of software and hardware, it’s best to convert ebooks/PDFs to HTML. And as the provided HTML reader can’t remember which page you last read (not surprising) and, ahem, is unable to load a 3 MB page (low memory, it says: even though a 30 MB PDF can be loaded by the PDF reader with no issue on the exact same phone, go figure!), it needs split HTML.

PDF is usually an output format, not a source format. While there’s plenty to convert to PDF, the fact is there is no complete suite to convert from it. pdftk is powerful but not easy to handle IMHO, and the latest pdftohtml release is almost 10 years old. So I ended up writing a small wrapper (pdf2htmls.pl) around both these tools to convert one PDF to multiple HTML files with basic indexes. It takes --input=file.pdf and (optional) --output=directory arguments. Aside from Perl, it requires the Debian packages pdftk and poppler-utils.
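The wrapper itself is linked above; the core of the approach can be sketched in shell, splitting with pdftk’s burst operation and converting page by page with pdftohtml (a sketch assuming pdftk and poppler-utils are installed; pdf2htmls.pl also builds the indexes, which this skips):

```shell
#!/bin/sh
# pdf2pages: split a PDF into one file per page with pdftk, then
# convert each page to a standalone HTML file with pdftohtml
# (a sketch of the approach, not the actual pdf2htmls.pl)
pdf2pages() {
  input="$1"
  # default output directory sits next to the input file
  outdir="${2:-${input%.pdf}-html}"
  mkdir -p "$outdir"
  # one PDF per page: page_0001.pdf, page_0002.pdf, ...
  pdftk "$input" burst output "$outdir/page_%04d.pdf"
  for page in "$outdir"/page_*.pdf; do
    # -noframes: a single self-contained HTML file per page
    pdftohtml -noframes "$page" "${page%.pdf}.html"
    rm -f "$page"
  done
}
```

Called as `pdf2pages book.pdf`, it leaves page_0001.html and so on in book-html/.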

The indexes are über-crude. They could be improved with chapters/titles, I’ll maybe add that later.

Syndicated 2012-06-21 12:57:03 from # cd /scratch

Having homemade aliases, functions and such available to every interactive shell

Years ago, I remember RedHat already provided /etc/bashrc.d/ for custom scripts to be sourced site-wide whenever bash was started. Debian still only provides /etc/profile.d for such scripts. So, when I started using Debian, I added stuff to this latter directory and made sure that /etc/bash.bashrc itself ran /etc/profile so it would be sourced in any case.

There is actually a problem with that.

As defined (RTFM! `man bash`), /etc/profile is to be sourced for interactive login shells (`bash --login`) while /etc/bashrc or /etc/bash.bashrc is to be sourced for interactive non-login shells (`bash`). Having /etc/profile run by /etc/bash.bashrc defeats the overall purpose of distinguishing the two. LFS asks for /etc/profile’s content to be a run-once thing, for logins, not something that should be started for every xterm.

But if you don’t, anything in /etc/profile.d will be ignored by most shells you start in an X session, where you actually log in once and then start numerous xterms. Ok, to hold your aliases and local functions, you can edit /etc/bash.bashrc and use skels for ~/.bashrc, but that’s way less convenient than just copying a script into a directory.

To get something consistent, I added the /etc/bashrc.d directory. I think such a directory should exist by default on Debian, even if I would agree with anyone pointing out that this should not be BASH-specific.
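For illustration, a snippet dropped into /etc/bashrc.d could be as simple as (an invented example, not one of my actual files):

```shell
# /etc/bashrc.d/10-aliases.sh -- an illustrative site-wide snippet,
# sourced by every interactive shell once the directory is wired in
alias ll='ls -l --color=auto'
alias la='ls -A'
# a small helper function available everywhere
mkcd() { mkdir -p "$1" && cd "$1"; }
```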

Here’s an example of my /etc/bashrc.d and my /etc/profile.d. My local Debian package’s postinst script automatically adds the following required line to /etc/bash.bashrc:

[ -z "$ETC_BASHRC_SOURCED" ] && { for i in /etc/bashrc.d/*.sh ; do if [ -r "$i" ]; then . "$i"; fi; done; ETC_BASHRC_SOURCED=1; }

Note that the same postinst script symlinks /etc/profile.d/bash_completion.sh to /etc/bashrc.d/bash_completion.sh. The very existence of this file in /etc/profile.d shows, IMHO, the extent of the broken default design. How come someone would actually want bash completion for login shells but not for interactive non-login shells? I would actually expect the contrary: as bash completion can be CPU-time consuming, if it is to be skipped in only one case, it’s definitely on login shells! Why is it so? Probably because only /etc/profile.d exists.

(I’ve also read some people saying that /etc/bash.bashrc should be edited by hand. On every computer of a local network, just to add a few local aliases? Ouch!)

Syndicated 2012-05-29 09:45:35 from # cd /scratch

For a change, today I won’t describe how I did something but how I did not.

I had in mind to use tumblr with a daily automated post of a picture. I figured it would be nice if a daily cronjob on my local server updated a git directory and then posted the first image in the queue.

First, I found out that tumblr refuses to handle mail sent directly by mutt and the local server’s SMTP. So I then tried having mutt send the mail using gmail’s authenticated SMTP. It did not work either. But it works fine with any other recipient. And it works if sent directly from the gmail web interface. We’ve made it incredibly easy to post from your desktop or mobile phone. Just send an email to the custom email address for the blog you’d like to publish to, they claimed. Go figure; that’s probably what they implied by the confusing sentence Send posts directly to your mobile posting email address. You cannot email another email address and then forward the email from there. I understand spam is an issue they have to care about, but how come even gmail authentication isn’t proof enough of goodwill?

Anyway, I ended up with a non-working script for the simple task of sending a mail.

Syndicated 2012-05-07 07:58:01 from # cd /scratch

Upgrading Dell Latitude C640 CPU

Just because it’s quite cheap via ebay, I decided to upgrade my Dell Latitude C640’s CPU from a 2.0 GHz P4-M (sl6fk) to a 2.4 GHz one (sl6vc). It could, in theory, take a 2.6 GHz one, but these (like the sl6wz) are way more expensive.

The Dell Latitude C640 service manual describes in lengthy detail how to actually change the CPU. There isn’t much point in describing it here. Remove the hard drive, the keyboard, then the CPU thermal cooling assembly, and you can easily access the CPU socket.

After that change, the BIOS complained about a “Processor Microcode Update Failure – The revision of processor in the system is not supported.”, a non-blocking item. A quick check with dmidecode showed me the current BIOS was version A08, released 03/04/2003, actually a few months before the first releases of the new processor. So I decided to upgrade the BIOS too, following this advice. I downloaded a Windows BIOS update from Dell’s website. On a computer with wine available (not the case of my laptop), I ran wine ./R71684.exe and stopped it after it had extracted all the files it contained, then I ran unshield x data1.cab to get the contents of this cabinet. I found a file BiosHeader/C640_A10.HDR that I copied to my laptop. On the laptop, with the package libsmbios-bin installed and the module dell_rbu loaded, I ran dellBiosUpdate -f ./C640_A10.HDR -u, which returned:

Supported RBU type for this system: (MONOLITHIC)
Using RBU v2 driver. Initializing Driver. 
Setting RBU type in v2 driver to: MONOLITHIC
Prep driver for data load.
Writing RBU data (4096bytes/dot): .................................................................................................................................
Notify driver data is finished.
Activate CMOS bit to notify BIOS that update is ready on next boot.
Update staged sucessfully. BIOS update will occur on next reboot.

Then I rebooted the laptop, and it restarted mentioning it was now running BIOS version A10. Cpufreq works fine; everything is in order.

Syndicated 2012-04-26 22:05:16 from # cd /scratch
