Older blog entries for amits (starting at number 78)

9 Jan 2013 (updated 22 May 2013 at 11:16 UTC) »

Workarounds for common F18 bugs

I’ve been using the Fedora 18 pre-release for a couple of months now, and am generally happy with how it works.  I filed quite a few bugs; some got resolved, some didn’t.  Here’s a list of things that no longer work as they used to, with workarounds that may help others:

  • Bug 878619 – Laptop always suspends on lid close, regardless of g-s-t policy: I used to set the action on laptop lid close to lock the screen by default, instead of putting the machine in the suspend state, and used the function keys or menu item to suspend instead.  However, with GNOME 3.6 in F18, the ‘suspend’ menu item has gone away, replaced by ‘Power Off’.  The developers have also removed the dconf setting that tweaked the lid-close action (via gnome-tweak-tool or dconf-editor).  As described in GNOME Bug 687277, the behaviour can still be overridden by adding a systemd inhibitor:
    systemd-inhibit --what=handle-lid-switch \
                    --who=me \
                    --why=because \
                    --mode=block /bin/sh
  • Bug 887218 – 0.5.0-1 regression: 147e:2016 Upek fingerprint reader no longer works: fprintd may not remember previously registered fingerprints; re-registering them is a workaround.
  • Bug 878412 – Cannot assign shortcuts to switch to workspaces 5+: I use keyboard shortcuts (Ctrl+F<n>) to switch workspaces.  Till F16, I could assign shortcuts to as many workspaces as were in use.  Curiously, with F18, shortcuts can only be assigned to workspaces 1 through 4.  This was a major productivity blocker for me, and an ugly workaround is to create a shell script that switches workspaces via window manager commands: install ‘wmctrl’, and create custom shortcuts that invoke ‘wmctrl -s <workspace number − 1>’.  wmctrl counts workspaces from 0, so to switch to workspace 5, invoke ‘wmctrl -s 4’.  (See the sketch after this list.)
  • Bug 878736 – Desktop not shown after unlocking screensaver: This one is due to some focus-stealing apps and gnome-shell’s new screensaver not working well together.  I use workrave, an app that helps me keep my eyesight and wrists in relatively good shape.  Other people have complained that even SDL windows (games, qemu VMs, etc.) interact badly with the new screensaver.  As my workaround, I’ve set workrave to not capture focus for now.
  • Bug 878981 – “Alt + Mouse click in a window + mouse move” doesn’t move windows anymore: The modifier key has been changed to the ‘Super’ key, so Super + mouse click + mouse move works the way the Alt key did earlier.  I’m still missing the window-resize modifier that KDE offers (modifier key + right-click + mouse move).
  • Bug 878428 – __git_ps1 not found: I’ve discussed this earlier.
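
For the workspace-switching workaround above, here’s a minimal sketch of the helper script; the script name and shortcut bindings are my own, and it assumes wmctrl is installed:

#!/bin/bash
# switch-workspace.sh <n> -- switch to workspace n (1-based);
# wmctrl counts workspaces from 0, so subtract 1.
wmctrl -s $(($1 - 1))

Bind each custom shortcut (e.g. Ctrl+F5) to something like ‘switch-workspace.sh 5’.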

Other than these, a couple of bugs affect running F18 in virtual machines:

  • Bug 864567 – display garbled in KVM VMs on opening windows: Using any guest display driver other than cirrus works fine.
  • Bug 810040 – F17/F18 xen/kvm/vmware/hyperv guest with no USB: gnome-shell fails to start if fprintd is present: I mentioned this earlier as well: remove fprintd in the VM, or add ‘-usb’ to the qemu command line (see the example below).
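
As an example of the second workaround, the qemu invocation would look something like this (a sketch; the binary name and image path are placeholders):

qemu-kvm -m 1024 -usb /path/to/f18-guest.img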

Syndicated 2013-01-09 13:24:36 (Updated 2013-05-22 10:24:38) from Think. Debate. Innovate. - Amit Shah's blog

7 Dec 2012 (updated 22 May 2013 at 11:16 UTC) »

Mystery Shopper Needed

Most of the spam I receive gets caught by spam filters, and pushed into the separate spam folder.  I check the folder once in a while for false positives.

A recent message in my spam folder, with the subject ‘Mystery shopper needed’, caught my attention:

Mystery Shopper needed
+++++++++++++++++++++++++++

We have post of Mystery Shopper in your area. All you need is to act like a customer, you be will surveying different outlets like Walmart, Western Union, etc and provide us with detailed information about their service.

You will get $200.00 per one task and you can handle as many tasks as you want. Each assignment will take one hour and it wont affect your present occupation because it is flexible.

Before any task we will give you with the resources needed. You will be sent a check or money order, which you will cash and use for the task. Included to the  check would be your assignment payment, then we will provide you details through email. You just need to follow instruction given to you as a Secret Shopper.

If you are interested, please fill in the details below and send it back to us to john_paul2_john@aol.com for approval.

First Name:
Last Name:
Full Address:
City, State and Zip code:
Cell and Home Phone Numbers:
Email:

Hope to hear from you soon.

Head of Operations,
John Paul.

I can’t resist going shopping — and being paid for it!  I’m posting this here in case anyone else missed this email due to “bad” spam filters.  We don’t have Walmart here yet, but we certainly do have Western Union.

PS: If you’re interested in treasure hunts: can you spot who’s actually sending these messages?

Return-Path: <john@rapanuiviaggi.redacted>
Delivered-To: <redacted>
Received: (qmail invoked by alias); 28 Nov 2012 04:07:23 -0000
Received: from dns.hsps.ntpc.edu.tw (EHLO dns.hsps.ntpc.edu.tw) [163.20.58.14]
        by mx0.gmx.net (mx002) with SMTP; 28 Nov 2012 05:07:23 +0100
Received: from dns.hsps.ntpc.edu.tw (localhost [127.0.0.1])
        by dns.hsps.ntpc.edu.tw (Postfix) with ESMTP id C5BD97DF740D;
           Wed, 28 Nov 2012 10:34:02 +0800 (CST)
Received: from dns.hsps.ntpc.edu.tw (localhost [127.0.0.1])
        by dns.hsps.ntpc.edu.tw (Postfix) with ESMTP id 7DA667DF7379;
           Wed, 28 Nov 2012 10:34:02 +0800 (CST)
From: "John Paul." <john@rapanuiviaggi.redacted>
Reply-To: john_paul2_john@aol.com
Subject: Mystery Shopper needed.
Date: Wed, 28 Nov 2012 10:34:02 +0800
Message-Id: <20121128023012.M26524@rapanuiviaggi.redacted>
X-Mailer: OpenWebMail 2.52 20060502
X-OriginatingIP: 41.150.63.142 (web2)
MIME-Version: 1.0
Content-Type: text/plain;
           charset=iso-8859-1
To: undisclosed-recipients: ;
X-NetStation-Status: PASS
X-NetStation-SPAM: 0.00/5.00-8.00

Syndicated 2012-12-07 06:37:41 (Updated 2013-05-22 10:28:15) from Think. Debate. Innovate. - Amit Shah's blog

20 Nov 2012 (updated 22 May 2013 at 11:16 UTC) »

__git_ps1 not found after upgrade to Fedora 18

If you have enabled git information in the shell prompt (branch name, working tree status, etc.) [1], an upgrade to F18 breaks this functionality.  What’s worse, since __git_ps1 (a shell function) isn’t found, a yum plugin goes looking for a matching package name to install, making running any command on the shell *very* slow.

A workaround, till the bug is fixed, is to do:

ln -s /usr/share/git-core/contrib/completion/git-prompt.sh  /etc/profile.d/
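Alternatively, if you’d rather not touch /etc/profile.d/, sourcing the file from your own shell startup should work too (a per-user sketch, assuming the file ships at the same path):

# in ~/.bashrc
source /usr/share/git-core/contrib/completion/git-prompt.sh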

Bug 878428, if you want to track progress.

[1] To add such git information in the shell display (for bash), add this to your .bashrc file:

export GIT_PS1_SHOWDIRTYSTATE=true
export GIT_PS1_SHOWUNTRACKEDFILES=true
export PS1='\[\033[00;36m\]\u@\h\[\033[00m\]:\[\033[01;34m\] \w\[\033[00m\]$(__git_ps1 " (%s)")\$ '

Syndicated 2012-11-20 12:22:00 (Updated 2013-05-22 10:26:58) from Think. Debate. Innovate. - Amit Shah's blog

18 Nov 2012 (updated 22 May 2013 at 11:16 UTC) »

Avi Kivity Stepping Down from the KVM Project

Avi Kivity giving his keynote speech

Avi Kivity announced he is stepping down as (co-)maintainer of the KVM Project at the recently-concluded KVM Forum 2012 in Barcelona, Spain.  Avi wrote the initial implementation of the KVM code back at Qumranet, and has been maintaining the KVM-related kernel and qemu code for about 7 years now.

In his keynote speech, he mentioned he’s founding a startup with a friend, and hopes to create new technology as exciting as KVM.  He also mentioned they’re in stealth mode right now, so questions about the new venture didn’t get any answers.

He returned to the stage on the second day of the Forum to talk about the new memory API work he’s been doing in qemu, and, in his typical dry humour, mentioned he was supposed to vanish in a puff of smoke after his keynote, but the special-effects machinery didn’t work, so he was back on stage.  Avi later rued the lack of laughter at this joke; it made him very sad.  To offer him some consolation, it was pointed out that not everyone knew of his departure, as many had missed his keynote.  He quipped, “that’s even worse than not getting laughs”.

His leadership, as well as his humour, will be missed.  Personally, he’s helped me grow during the last few years we’ve worked together.  But I’m sure whatever he’s working on will be something to look forward to, and we’re not really bidding him adieu from the tech world.

Syndicated 2012-11-18 06:32:05 (Updated 2013-05-22 10:31:17) from Think. Debate. Innovate. - Amit Shah's blog

28 Oct 2012 (updated 22 May 2013 at 11:16 UTC) »

Setting Up Your Free Private Feed Reader

I’ve tried several RSS feed readers, offline as well as online: aKregator, Liferea and rss2email being the ones I used for a long time. One drawback with these offline tools is that they may miss feed items when I’m offline for prolonged periods (travel, vacations, etc.). Also, they’re tied to one device; I can’t switch laptops and have the feeds stay in sync. I tried Google Reader as well, as a solution in the “cloud”; that worked for a while, but not anymore.

So I started searching for an online feed reader, preferably with hosting services, since I didn’t want to keep up with updates to the software. I found several free readers, and Tiny Tiny RSS seemed like a really good option.  The developer hosts an online version of the reader, which I used for quite a while.  (That online service is soon going to be discontinued.)  I was quite content with that option, but when OpenShift was launched, I thought I’d try hosting tt-rss myself: it began as an experiment in using OpenShift. Then, when I moved this blog to OpenShift, I realised it didn’t really take much effort to host the blog, and that I could switch my primary instance of tt-rss from the developer-hosted one to my own. It turned out to be really easy, and here I’ll share my recipe.

I first grabbed the ttrss sources from the git repo:

cd ~/src/
git clone git://github.com/gothfox/Tiny-Tiny-RSS.git

I then created an OpenShift php app.

cd ~/openshift
rhc app create -a ttr -t php-5.3

Then I added a mysql db, and the phpmyadmin tool to manage the db in case something goes wrong sometime.

rhc-ctl-app -e add-mysql-5.1 -a ttr
rhc-ctl-app -e add-phpmyadmin-3.4 -a ttr

After this initial setup, I copied all the files from the ttrss src dir to the php/ directory of the OpenShift repo:

cp -r ~/src/Tiny-Tiny-RSS/* ~/openshift/ttr/php/

Next, add all the files to the git repo:

cd ~/openshift/ttr/
git add php
git commit -m 'Add tt-rss sources'

Now to set up the environment on the server for tt-rss to work in: e.g., creating directories where tt-rss will store its feed icons, temporary files, etc. This is needed because the OpenShift git directory is transient: it’s deleted and re-created on each ‘git push’. So to store persistent data between git pushes, we need to use the OpenShift data directory. Create an app build-time action hook to set up the proper directory structure each time the app is built (i.e. after a git push). Learn more about the different build hooks here.

Edit the .openshift/action_hooks/build file, so it looks like this:

#!/bin/bash
# This is a simple build script; place your post-deploy but pre-start commands
# in this script.  This script gets executed directly, so it could be python,
# php, ruby, etc.

TMP_DIR=$OPENSHIFT_DATA_DIR/tmp
LOCK_DIR=$OPENSHIFT_DATA_DIR/lock
CACHE_DIR=$OPENSHIFT_DATA_DIR/cache
ICONS_DIR=$OPENSHIFT_DATA_DIR/icons

# Create the persistent directories in the data dir on first run;
# unlike the repo dir, these survive a git push.
for dir in $TMP_DIR $LOCK_DIR $CACHE_DIR $CACHE_DIR/export \
           $CACHE_DIR/images $ICONS_DIR; do
    if [ ! -d $dir ]; then
        mkdir $dir
    fi
done

# Re-create the symlink from the transient repo dir to the persistent
# icons dir (see the ICONS_URL setting below).
ln -sf $ICONS_DIR $OPENSHIFT_REPO_DIR/php/ico

Make this file executable, and commit the result:

chmod +x .openshift/action_hooks/build
git add .openshift/action_hooks/build
git commit -m 'build hook: create and link to persistent RW directories'

Next was to create the tt-rss config file from the provided template:

cd ~/openshift/ttr/php/
cp config.php-dist config.php

Then I edited the config file.

First, the DB info. I created a new db user via the phpmyadmin interface, but you can use the default admin user as well.

        define('DB_TYPE', "mysql");
        define('DB_HOST', $_ENV['OPENSHIFT_DB_HOST']);
        define('DB_USER', "<user>");
        define('DB_NAME', "ttr");
        define('DB_PASS', "<your pass>");
        //define('DB_PORT', '5432'); // when needed, PG-only

Next comes the files and directories section:

        define('LOCK_DIRECTORY', $_ENV['OPENSHIFT_DATA_DIR'] . "/lock");
        // Directory for lockfiles, must be writable to the user you run
        // daemon process or cronjobs under.

        define('CACHE_DIR', $_ENV['OPENSHIFT_DATA_DIR'] . '/cache');
        // Local cache directory for RSS feed content.

        define('TMP_DIRECTORY', $_ENV['OPENSHIFT_DATA_DIR'] . "/tmp");
        // Directory for temporary files

        define('ICONS_DIR',  $_ENV['OPENSHIFT_DATA_DIR'] . '/icons');
        define('ICONS_URL', "ico");

The last icons bit is a modification from the default of ‘feed-icons’. If you’re setting up a new instance, there’s no need to deviate from the default; but when I had deployed my tt-rss instance, the default icons directory was ‘icons’, which unfortunately clashes with Apache’s idea of what $URL/icons is. So I used ‘ico’. Remember to modify the build hook above to create the appropriate symlink if ICONS_URL is changed.

These config settings are the ones specific to OpenShift. Modify the others to suit your needs.

Next, add a cron job to update the feeds at an hourly interval:

cd ~/openshift/ttr
mkdir .openshift/cron/hourly

I created a new file, called update-feeds.sh, in the new .openshift/cron/hourly directory, and added the following to it:

#!/bin/bash

$OPENSHIFT_REPO_DIR/php/update.php -feeds >/dev/null 2>&1
date >> $OPENSHIFT_LOG_DIR/update-feeds.log

For troubleshooting cron jobs, you can append custom output to any file in the log directory, like the date being output above. For other ways to update feeds, refer to the tt-rss documentation.
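To read that log from your workstation, the rhc client tools of the time included a file-tailing command; something like this should work (a sketch from memory, assuming the app name ‘ttr’):

rhc-tail-files -a ttr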

Add this file to git:

cd ~/openshift/ttr
git add .openshift/cron/hourly/update-feeds.sh
git commit -m 'add hourly cron job to update feeds'

Lastly, push the result to the OpenShift servers:

git push

That’s it! Enjoy your completely free (free as in freedom, as well as free as in beer) and personal feed reader in the clouds.

Syndicated 2012-10-28 16:36:14 (Updated 2013-05-22 10:29:35) from Think. Debate. Innovate. - Amit Shah's blog

14 Sep 2012 (updated 22 May 2013 at 11:16 UTC) »

Virtualization at the Linux Plumbers Conference 2012

The 2012 edition of the Linux Plumbers Conference concluded recently.  I was there, running the virtualization microconference.  The format of LPC sessions is to have discussions around current as well as future projects.  The key words are ‘discussion’ (not talks — slides are optional!) and ‘current’ and ‘future’ projects — not discussing work that’s already done; rather discussing unsolved problems or new ideas.  LPC is a great platform for getting people involved in various subsystems across the entire OS stack in one place, so any sticky problems tend to get resolved by discussing issues face-to-face.

The virt microconf had A LOT of submissions: 17 topics to be discussed in a standard time slot of 2.5 hours for one microconf track.  I asked for a ‘double track’, making it 5 hours of time for 17 topics.  Still difficult, but by reducing a few topics to ‘lightning talks’, we could get a somewhat decent 20 minutes per topic.  I contemplated rejecting some topics, thus increasing the time each discussion would get, versus keeping all the topics and asking people to wrap up in 20 minutes.  I went for the latter — getting more stuff discussed (and hence, more problems / issues ‘out there’) is a better use of the time, IMO.  That would also ensure that people stay on-topic and focussed.

There was also a general change in the way microconfs were scheduled this time: the microconfs were not given a complete 2.5-hour slot.  Rather, they were given 3 slots of 45 minutes each.  This let the schedule pages show the topics of the microconfs being discussed at that time, so attendees could pick and choose the discussions they wanted to attend, rather than seeing a generic ‘Virtualization Microconf’ slot.  I think this was a good idea.  Individual microconf owners could request modifications to this scheme, of course, and some microconfs chose to run the entire session in one slot, or reserved one whole day in a room, etc.  For the virt microconf, I went with six separate slots, scheduled to avoid conflicts with other virt-related topics in other sessions, giving a total of 4.5 hours for 17 topics.

I segregated the CFP submissions so I could schedule related discussions in one slot, to avoid jumping between subjects and to also help concentrate on specifics in an area.  Two submissions, one on security and one on storage, were by themselves, so I clubbed them into one ‘security and storage‘ session.  The others were nicely aligned, so we could have ‘x86‘, ‘MM‘, ‘ARM‘, ‘Networking‘ and ‘lightning talks’ topics in separate slots.  Since there were 4 network-related talks, I asked for a double slot (two 45-min slots back-to-back), and clubbed the lightning talks in the same session, which was scheduled to be the last session for the virt microconf.

Given this, I would say the microconf went quite well — the notes and slides are up at the LPC 2012 virt microconf wiki, and we got good discussions going for most of the topics, given the time constraints.  Of course, a major benefit of going to conferences is meeting people outside of the sessions, in the hallways and at social events, and the discussions continued there as well.  I did factor this extra time into the ‘reject vs. take all of them’ decision mentioned earlier.  From what I heard, the beer at the social events failed to stop technical discussions, so it all worked out for the best.

Each microconf owner (or a representative) had to do a short summary at the end of the LPC, for the benefit of the people not present for some sessions.  I did the virt summary in roughly these words:

We had quite a productive virtualization microconference.  We received a lot of submissions, and accepted them all, which meant we had to limit the time for each discussion in the slots, but we could divide the slots by general topic, effectively increasing the discussion time for the larger topic.

We had a healthy representation from the KVM as well as Xen sides.  For example, in the MM topic, we discussed NUMA awareness for KVM as well as Xen.  Dario Faggioli presented the Xen side, and Andrea Arcangeli spoke on the Linux/KVM side, about AutoNUMA.  It has been contentious on the mailing lists, and from the Kernel Summit discussions, it looked like some agreement would be reached soon.  Xen uses an approach similar to AutoNUMA, and they would end up pushing their patches soon as well.  Daniel Kiper spoke about integrating the various balloon drivers in the kernel to remove code duplication.

Both AMD and Intel publicly announced new hardware features for interrupt virtualization for the first time here, and it was interesting to see them compare notes and find out what the other is doing and how: for example, do they support IOMMU?  x2apic?  Etc.

New ARM architecture support work was presented by Marc Zyngier for the KVM effort, and Stefano Stabellini for the Xen effort.  Much of the work seems to be done, and patches are in shape to be applied in the next merge window.  There are a few open issues, and they were discussed as well.

We had quite a few talks in the networking session.  Alex Williamson spoke about VFIO, which got mentioned a lot throughout the conference in multiple sessions.  This is a new way of doing device assignment, and progress looks positive, with the kernel side already merged in 3.6, and qemu patches queued up for 1.3.  Alex Graf then talked about ‘semi-assignment’, a way to do device assignment (or pci passthrough) while also getting proper migration support.  The effort involves writing device emulation for each supported device, and the approach wasn’t too popular.  The IBM and Intel guys have been doing virtio-net scalability testing, and John Fastabend spoke about some optimisations, which were generally well-received.  We should expect patches and more benchmarks soon.  Vivek Kashyap spoke about network overlays, and how creating a tunnel for VM networks can help with VM migration across networks.

We also had a session on security, by Paul Moore, who gave an overview of the various methods to secure VMs, specifically the new seccomp work.

Lastly, we had Bharata Rao talk about introducing a glusterfs backend for qemu’s block layer, which gives more flexibility in handling disk storage for VMs.

The organisers are collecting feedback, so if you were there, be sure to let them know of your experience, and what we could do better in the coming years.

I’d like to thank the Linux Foundation and the Linux Plumbers Conf organisers for giving me the opportunity to be there and run the virt microconf.

Syndicated 2012-09-14 05:07:30 (Updated 2013-05-22 10:32:25) from Think. Debate. Innovate. - Amit Shah's blog

27 Jun 2012 (updated 22 May 2013 at 11:16 UTC) »

Changing GNOME Default Action for Low Battery

The GNOME default of ‘hibernate’ (suspend-to-disk) on very low battery power isn’t optimal for many laptops: hibernate is known to be broken on several hardware setups, frequently results in file system corruption, and just causes pain.  That, combined with the GNOME power manager’s weird behaviour of putting the system in hibernate even when the battery isn’t low, annoyed me enough to go hunting for a way to change the default.

The GUI doesn’t expose a ‘sleep’ setting; it just offers hibernate and shutdown.  So here’s a tip to put the system in the sleep state (suspend to RAM) instead, which is a much better-behaved default for me.

Install dconf-editor, and go to

 org.gnome.settings-daemon.plugins.power

and modify the

critical-battery-action

key to ‘suspend’.
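
Alternatively, the same key can be set from a terminal with gsettings:

gsettings set org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend'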

For the curious, the weird behaviour of the GNOME power manager I mentioned above is noted in these bug reports:

Bug 673220 – ‘Critical capacity’ warning on laptop with multiple batteries broken
Bug 673221 – Shutdown action on battery low doesn’t save session
Bug 673222 – More prominent warning, at least 5-10% before battery goes critically low
Bug 673223 – System enters shutdown/hibernate even when power connected but battery low

Syndicated 2012-06-27 05:47:53 (Updated 2013-05-22 10:33:28) from Think. Debate. Innovate. - Amit Shah's blog

Workaround for error after upgrading VM from F16 to F17

Updating a Fedora 16 guest to Fedora 17 via preupgrade gave me the ‘Oh no, something has gone wrong!’ screen at GDM login.  It’s quite frustrating to see that screen, because you can’t switch to a virtual terminal for troubleshooting, or even reboot or shut down.

To send the key sequence Ctrl+Alt+F2 to the guest to switch to a virtual terminal, use the qemu monitor by pressing

 Ctrl+Alt+2

and use sendkey to send the key sequence:

(qemu) sendkey ctrl-alt-f2

Then go back to the guest window by pressing

Ctrl+Alt+1

After logging in as root, I poked around in the gdm log files in /var/log/gdm/ and saw the fprint daemon was causing some errors.  Removing the fprintd package fixed this, but it’s just a workaround, not a solution:

yum remove fprintd

Bug filed.

Syndicated 2012-06-04 11:47:10 (Updated 2012-06-04 11:48:12) from Think. Debate. Innovate. - Amit Shah's blog

11 May 2012 (updated 12 May 2012 at 14:10 UTC) »

Using adb To Copy Files To / From Your Android Device

Some devices, like the Galaxy Nexus and the HP Touchpad* (via the custom Android ROMs), don’t expose themselves as USB storage devices.  They instead use MTP or PTP to transfer media files (so only photos and audio/video files on the device are shown).

This happens because there is no separate sdcard on these devices, and ‘unplugging’ an sdcard from a running device to expose it to the connected computer could cause running apps on the device itself to malfunction.  Android developer Dan Morrill explains this here.  He also mentions how the Nexus S doesn’t have this problem.

There are several apps that can open shares to the device using one of several protocols (DAV, SMB, etc.).  However, one quick way I’ve found to copy files to and from the device connected via USB to a computer is by using the adb tool.  It’s available as part of the ‘android-tools’ package on Fedora.

To copy a file from the computer to an android device connected via usb, use this:

adb push /path/to/local/file /mnt/sdcard/path/to/file

This will copy the local file to the device in the specified location.  Directories can be created on the device via the shell:

adb shell

and using the usual shell commands to navigate around and create directories.
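
Copying in the other direction works the same way, with adb pull:

adb pull /mnt/sdcard/path/to/file /path/to/local/file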

* On the Touchpad, WebOS can expose the storage as a USB Storage Media.  The current nightly builds of CM9 can’t.

Syndicated 2012-05-11 11:10:16 (Updated 2012-05-12 14:07:59) from Think. Debate. Innovate. - Amit Shah's blog

Blog moved to wordpress on openshift

I moved this blog a while back from Blogger to WordPress. I was looking to move away from Blogger/Blogspot to something self-hosted. I had come up with the following list of requirements to make the move seamless (for me as well as for regular visitors):

  • Ability to use custom domains: Since I used blogger’s custom domains feature to redirect the blogger/blogspot links to my domain, I wanted to retain that functionality
  • Make the move seamless to site visitors
  • Preserve links and link structure.  All earlier links, rss feeds, etc., should continue to work as they did with the earlier setup (helps in maintaining search engine rankings)
  • No dependence on 3rd-party servers/software for leaving comments: Some blogging platforms are simple and minimal; they however end up using other services for comments on blog posts. I didn’t want that — all the content should be on one server, without the users needing any sort of registration elsewhere.
  • Easy to manage the software: Shouldn’t be too time-consuming to keep the blog up

Red Hat‘s OpenShift PaaS platform had just announced support for domain aliases for applications, so I started looking at what would be involved in moving the blog on their platform.

Read on for my experiences and details on deploying this WordPress blog on OpenShift.

I had already played with OpenShift a bit, and loved the workflow of deploying apps using git. Deploying a wordpress install on OpenShift would mean I wouldn’t have to manage my own servers, operating systems, software updates, etc. It’s all on the stable and secure RHEL platform, with PHP managed by the RHEL team. So all I would need to worry about is the wordpress installation itself.  As long as I routinely check for security updates to wordpress, and push those updates to the site, I should be doing OK.

So I created a new php-5.3 app using ‘rhc-create-app’. mysql is needed for the database, so I also added an instance to the app with the command

 rhc-ctl-app -e add-mysql-5.1 -a <appname>

To manage the mysql instance, a phpmyadmin cartridge is desirable too:

rhc-ctl-app -e add-phpmyadmin-3.4 -a <appname>

To make sure my custom domain works, let’s add aliases as well:

rhc-ctl-app -c add-alias -a <appname> --alias log.amitshah.net
rhc-ctl-app -c add-alias -a <appname> --alias www.amitshah.net

I had used both log. and www. for the blog, so I kept both aliases so that both domains continue working. Of course, I changed the DNS CNAME entries for www. and log. to point to <appname>-<domainname>.rhcloud.com via my name provider’s site; see the sketch below.
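
For illustration, the zone entries would look something like this (a hypothetical BIND-style sketch, reusing the placeholders above):

www  IN  CNAME  <appname>-<domainname>.rhcloud.com.
log  IN  CNAME  <appname>-<domainname>.rhcloud.com.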

Next, using the admin credentials on the mysql db, I created a new db and a new user, and gave the user all permissions on that db.  All this is quite simple using the phpmyadmin interface.

That’s it, all set with the app on OpenShift.

I then went and downloaded the latest wordpress release (3.2.1 then) zip file and extracted the files in a local directory.

Now here’s where I started using the power of git and OpenShift: I created a git repo in the wordpress directory, added all the files to it, and made an initial commit. This is the base from where I use wordpress.  New wordpress releases get copied into this directory, and a new commit maps to each upstream release. Any modifications I make to my wordpress installation (e.g. theme changes) are tracked in another branch in the same repo, with that branch being rebased on top of the latest release (the master branch), as sketched below.
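
A rough sketch of that workflow (the branch name and commit messages are my own):

cd ~/src/wordpress
git init
git add -A
git commit -m 'WordPress 3.2.1'   # master tracks pristine upstream releases
git checkout -b local-changes     # theme changes etc. live on this branch

# On a new upstream release: unpack it over the tree on master, commit,
# then rebase the local changes on top.
git checkout master
git add -A
git commit -m 'WordPress <new version>'
git checkout local-changes
git rebase master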

With this setup, I can just copy the contents of this directory into my app’s php directory and push the changes to OpenShift. The ‘php’ directory is where all the app code resides. I added all the files to the app’s git repo and committed the result. I then created the wp-config.php file as a copy of the wp-config-sample.php file, modified it to suit my installation, committed the change, and also added the file to the wordpress directory created in the first step above. A git push later, the app was live on the cloud and I could get started with wordpress’s wizard-based installation.

Now here’s one oddity of hosting apps on OpenShift: the app directory isn’t writable, or isn’t the place where the app itself can make changes and assume they’d be preserved (I think this is a good thing). Since the app is deployed via git, any content written to the server app directory can be lost on the next git push. For wordpress, this means the ‘uploads’ directory has to be given a place where images, etc., can be uploaded without problems.

The OpenShift people have helpfully given us some environment variables and hooks in the app deployment process, which can be used to do this right.

The default wordpress uploads directory is ‘wp-content/uploads’.  We can continue using this directory, with the following snippet placed in ‘.openshift/action_hooks/build’:

if [ ! -d $OPENSHIFT_DATA_DIR/uploads ]; then
    mkdir $OPENSHIFT_DATA_DIR/uploads
fi

ln -sf $OPENSHIFT_DATA_DIR/uploads $OPENSHIFT_REPO_DIR/php/wp-content/

This ensures the ‘wp-content/uploads’ location is available for wordpress to put stuff into, and it also ensures the content goes into a place where OpenShift will not destroy the data on the next git push.

OK, having done all this, I was now ready to import my older blog posts. I installed the blogger-to-wordpress and livejournal-to-wordpress plugins (well, since I’m doing this, I thought I might as well import my older lj entries), git push’ed them, and did the import from the web interface.

Comments from livejournal entries and some blogger posts didn’t get fetched. I don’t know why that happened. I tried the import a couple more times, but those posts didn’t show up. I just decided to not bother about that; if there was any frequently-visited post, I could always go back and import it by hand. Since I didn’t expect to do any more imports, I removed those plugins and pushed the result again.

There is a blogger-to-wordpress redirect plugin, but that plugin does a lot more than just redirecting: it imports images uploaded to blogger or picasaweb on the blogger posts, generates blogger template to redirect blogger posts to wordpress, maps blogger posts to wordpress posts, etc.  Now most of this functionality is one-time; importing pictures, generating blogger template for redirection, etc., doesn’t need to be present all the time (can’t be too careful with php apps and security). I used the plugin to import all the blogger/picasaweb pictures it could fetch, and removed it as well.

I then enabled wordpress’s custom URL structure, which allows blogger-like post URLs, with the year and month as well as the post title in the URL. Enabling this needs .htaccess modifications, which wordpress can’t make directly in our setup (because it can’t write to the app directory).  So I created a new .htaccess file in the php/ dir in the OpenShift app directory and included the snippet wordpress helpfully tells you it would have added if the directory were writable (my version is in the snippet below).

I also took some hints from the blogger-to-wordpress plugin and created a minimal plugin that maps blogger URLs to wordpress URLs, and installed this plugin.

Next up was ensuring the older feeds kept working, and that the contents of the wp-config file and directory listings weren’t displayed. I also searched for some wordpress hardening tips, and compiled a fun-looking .htaccess file, snippet included below:

# Disable directory listing
Options All -Indexes

<files .htaccess>
    order allow,deny
    deny from all
</files>

<files wp-config.php>
    order allow,deny
    deny from all
</files>

RewriteEngine On
RewriteBase /

# Most of following comes from
# http://bloggertowp.org/migrate-from-blogger-to-wordpress-best-tutorial/

# Redirect feeds from labels
RewriteRule feeds/posts/default/-/(.*) category/$1/feed/ [L,R=301]

# Redirect older blogger RSS feeds
RewriteRule rss.xml feed/ [L,R=301]
RewriteCond %{QUERY_STRING} ^alt=rss$
RewriteRule feeds/posts/default feed/? [L,R=301]

# Redirect older blogger ATOM feeds
RewriteRule atom.xml feed/atom/ [L,R=301]
RewriteRule feeds/posts/default feed/atom/ [L,R=301]

# Redirect older blogger comments feeds
RewriteRule feeds/comments/default comments/feed/ [L,R=301]

# Redirect archives
RewriteRule ^([0-9]{4})_([0-9]{1,2})_([0-9]{1,2})_archive\.html$ $1/$2 [L,R=301]

# Redirect labels
RewriteRule ^search/label/(.*)$ category/$1/ [L,R=301]

# This is WP default: makes pretty URLs possible.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

I also installed the WP-Piwik and Smart-404 plugins. WP-Piwik adds the Piwik javascript code to give me a summary of visits to the site, and the search keywords people use to land on it. More on Piwik and its setup in a follow-up blog post. Smart-404 shows, on the 404 page, a list of pages with titles similar to the requested one. I had noticed a few 404 page hits via Piwik.

I’ve enabled the Akismet plugin that comes with the wordpress distribution, and it has flagged over 600 comments as spam so far, with just 2 false positives. That’s impressive, but I intend to look further into this:

  1. Is there a way to reduce spam comments?
  2. Why do wordpress sites get spammed so much?

What I’ve seen so far is that people search for specific terms on the ‘net, land on some post, and post the spam comment. So these are actual humans, not bots. Since they’re investing enough effort into finding blogs and adding comments, spam-prevention techniques like CAPTCHAs aren’t going to work all the time. Akismet is working fine so far, so I’ll continue using it, but I’m going to think about / search for ways to mitigate spam.

Overall, the move was really painless, done within a weekend, and most of the time was spent learning about WordPress and moving the existing posts to the new blog. There were hardly any OpenShift issues; it stayed nicely out of the way, and I really like that about the platform.

I still haven’t figured out a way to map Blogger labels to WordPress Categories/Tags; these are new concepts (to me), and I’ll probably get something done here with some more htaccess trickery.

Syndicated 2011-12-30 12:19:51 from Think. Debate. Innovate.
