Older blog entries for amits (starting at number 48)

On Mind Maps

I wrote an article on mind maps for the March 2011 issue of BenefIT magazine.  The people at BenefIT are nice enough to license the content under a CC license, so I can host the PDF and point you to it:

Mind-maps.pdf

This article talks about how mind maps are beneficial for the thought process and how you can use them to make decisions.

This is my second article published in the BenefIT magazine; I wrote an earlier one on taking frequent breaks from the computer.  Writing for non-tech, business-oriented people is different, and not very straightforward :-)

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-03-12 13:43:00 (Updated 2011-03-12 13:43:10) from Amit Shah

Maximum LCD Brightness Lower Than Before?

If you're trying out a kernel newer than 2.6.38-rc6 and find that your LCD brightness doesn't go up to its maximum, here's some help:  boot into an older kernel, set the brightness to maximum, then reboot into the newer kernel; you'll now get the maximum brightness you're used to.
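
If you'd rather script that step than hold down the brightness keys, something like this (a minimal sketch, assuming your driver exposes the standard sysfs backlight interface; device names vary) sets every backlight to its maximum when run as root from the older kernel:

import glob

# Walk all backlight devices (acpi_video0, intel_backlight, ...) and
# write each one's maximum brightness value into its brightness knob.
for dev in glob.glob("/sys/class/backlight/*"):
    maxval = open(dev + "/max_brightness").read().strip()
    open(dev + "/brightness", "w").write(maxval)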

The git commit by Indan Zupancic explains why this happens:


drm/i915: Do not handle backlight combination mode specially

The current code does not follow Intel documentation: It misses some things and does other, undocumented things. This causes wrong backlight values in certain conditions. Instead of adding tricky code handling badly documented and rare corner cases, don't handle combination mode specially at all. This way PCI_LBPC is never touched and weird things shouldn't happen.

If combination mode is enabled, then the only downside is that changing the brightness has a greater granularity (the LBPC value), but LBPC is at most 254 and the maximum is in the thousands, so this is no real functional loss.

A potential problem with not handling combined mode is that a brightness of max * PCI_LBPC is not bright enough. However, this is very unlikely because from the documentation LBPC seems to act as a scaling factor and doesn't look like it's supposed to be changed after boot. The value at boot should always result in a bright enough screen.

IMPORTANT: However, although usually the above is true, it may not be when people ran an older (2.6.37) kernel which messed up the LBPC register, and they are unlucky enough to have a BIOS that saves and restores the LBPC value. Then a good kernel may seem to not work: Max brightness isn't bright enough. If this happens people should boot back into the old kernel, set brightness to the maximum, and then reboot. After that everything should be fine.

For more information see the below links. This fixes bugs:

  http://bugzilla.kernel.org/show_bug.cgi?id=23472 
  http://bugzilla.kernel.org/show_bug.cgi?id=25072

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-03-01 15:38:00 (Updated 2011-03-01 15:45:11) from Amit Shah

Stay Healthy By Taking Breaks

Most of us lead sedentary lifestyles these days -- most of our time is spent in front of computers. This is slowly causing a lot of problems that previous generations didn't experience: back aches, knee problems, wrist pains, myopia, among others. And just going to a gym or putting in one hour of physical activity a day isn't enough; it doesn't balance out the inactivity over the rest of the day.

I recently wrote an article in the BenefIT magazine that talks about two tools: Workrave and RSIBreak. Thanks to the publishers, the article is available in PDF format under a CC license.

I've tried both, but I've been using Workrave for quite a while now and am quite happy with it. To briefly introduce them: both programs prompt the user to take a break at regular, configurable intervals. Workrave also suggests some stretching exercises that can be performed during the longer breaks. The shorter (and more frequent) breaks can be used to take your eyes off the monitor and relax them. Read the article for more details.

I reviewed Workrave version 0.9.1 in the article, though the current version as of now is 0.9.3, which differs in a few ways from what's described there. The prime difference is the addition of a 'Natural Rest Break' that gets triggered when the screen saver activates. This is nice: if the user walks away from the computer for a prolonged period, the rest break has in effect been taken, and the next one is scheduled for the configured interval after the screen saver is unlocked.

Both programs are available in the Fedora repository: Workrave is based on the GTK toolkit (and integrates nicely with the GNOME desktop), whereas RSIBreak is based on the Qt toolkit (and integrates nicely with the KDE desktop). Give them a try for a cheap but effective way of staying healthy!

Syndicated 2011-01-21 20:22:00 (Updated 2011-01-21 20:22:19) from Amit Shah

Idea: Faster Metadata Downloads With Yum and Git

The presto plugin for yum has worked great for me so far.  It's been very useful -- not because of any download caps, but for the time saved in getting the bits downloaded.  The time saved is significant when the bandwidth isn't too good (it never is).

However, I've observed that in some cases the presto metadata is larger than the actual package -- e.g., for a font.  If a font package, say 21KB in size, has a deltarpm of 3KB, that's a saving of 18KB, a very impressive 85%.  However, the presto metadata itself can be more than 400KB, nullifying the advantage of the drpm.  In this corner case we're effectively downloading 418KB instead of 21KB -- nearly 20 times the actual package size.

So here's an idea: why not let git handle the metadata for us?  The metadata is a text (or sqlite) file that lists package names, their dependencies, version numbers and so on.  Since git handles text very well, fetching metadata updates from a git server should be a breeze.  At install (or upgrade) time, the metadata git repository for a particular Fedora version can be cloned, and on each update all yum needs to do is a 'git pull' to get the latest metadata.  Downloads: a few KB each day instead of a few MB.
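
A rough sketch of what the yum side could look like (the repository URL and cache path here are hypothetical, just to illustrate the flow):

import os
import subprocess

# Hypothetical per-release metadata repository and its local clone.
repo_url = "git://mirrors.fedoraproject.org/metadata/f14.git"
repo_dir = "/var/cache/yum/metadata-f14"

if not os.path.isdir(repo_dir):
    # First use: clone the metadata repository for this release.
    subprocess.check_call(["git", "clone", repo_url, repo_dir])
else:
    # Every later update is an incremental 'git pull' -- a few KB.
    subprocess.check_call(["git", "pull"], cwd=repo_dir)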

The advantages are numerous:

  • Saves server bandwidth
  • Uses very few server resources when using the git protocol
  • Scales really well
  • Compresses really well
  • Makes yum faster for users
    • I think this is the biggest win -- not having to wait ages for a 'yum search' to finish every day has to get anyone interested.  It makes old-time Debian users like me very happy.

There are some challenges to be considered as well:

  • Should the yum metadata be served by just one canonical git server, while the packages get served by mirrors?  Not each mirror may have the git protocol enabled nor can the Fedora project ask each mirror to configure git on the server.
    • Doing this can result in slow mirrors not able to service package download requests for the latest metadata
    • This can be mitigated by serving the repository with git over HTTP
  • The metadata can keep growing
    • This can be mitigated by having a separate git repository for the metadata belonging to each release.  Multiple git repos can be set up easily for extra repositories (e.g., for external repos or for multiple version repos while doing an upgrade).
  • The mirror list has to be updated to also include git repositories that can be used with 'git remote'.

I've filed an RFE for this feature.  For someone looking for a weekend hack on yum in Python, this is a good opportunity to jump right in!  If you intend to take this up, get in touch with the developers, make sure no one else is already working on it (or collaborate with others), and update the details on the Fedora Feature Page.

Syndicated 2010-12-30 20:58:00 (Updated 2010-12-30 20:58:48) from Amit Shah

Book review: The Grand Design

I just finished reading Stephen Hawking and Leonard Mlodinow's 'The Grand Design' (wikipedia link; Amazon link here). It's a great book for getting up to speed on where physics stands today in our understanding of the universe.

Physicists come up with theories to explain why the world behaves the way it does. Those which show promise continue to be tested with new observations. Some of the theories stand the test of a few real-life situations, some don't. Some make sense in particular settings, some don't. Some are easily understandable by the layperson, some are not. All this doesn't mean that the theories which don't make sense or which don't stand up to real-world tests or observations are wrong. They just make sense in a particular setting and we use them to accurately model our world in that setting. We use other theories to explain other facets of our world. Or even the same ones, when put under a magnifying glass. If you think this doesn't make sense, the book will make it understandable. If you think it sounds crazy, it is, and the book will tell you why. If you think physicists are going mad, well, I don't think they are, unless you mean they're going mad in the search of the one true answer to life, the universe and everything that's beyond "42". (Yes, the authors are cool enough to include the Hitchhiker's reference (Amazon link) as well.)

The writing is very clear. The first two chapters can be read and understood by people who haven't taken advanced courses in science, and they lay the foundation for the details in the next six chapters.

Things start getting interesting, and progressively more complicated, from chapter 3 onwards -- naturally, since that's where quantum theory begins to be introduced.

The authors use great everyday analogies to explain complex phenomena. They also make good use of humour to keep readers engaged and the tone light. There are no equations in the book, so it doesn't alienate people who studied science back in their school and college years but have lost touch with it since. (Stephen Hawking mentions an editor telling him that for each equation he used in 'A Brief History of Time' (Amazon link), he'd lose half the readership. I think that's a brilliant way to keep the text accessible and understandable.)

I read about physics after a really long time. I don't even remember reading or studying quantum theory, though I suppose I must have. At many points while reading, I felt that if I'd had such a resource by my side while studying for my engineering classes, it would have done a much better job of arousing and sustaining my interest in classical and theoretical sciences. I came up with a few questions while going through the text; some were answered later on, others weren't broached by the authors for the sake of simplicity. I'm sure I can get answers to some of my questions by poking around in more detailed literature on these topics. I'm glad I've retained my inquisitive nature when it comes to the sciences, and that I can still raise questions that aren't answered in simple terms.

To conclude, this is a great book for people without a science background who want to learn how our universe was formed and how it came into being -- read the first two or three chapters and gloss over the rest. It's also a great book for people who have studied physics but lost touch with it, to recollect some theory and understand physicists' current thinking on how the universe formed and why things are the way they are.

I haven't read 'A Brief History of Time' by Stephen Hawking, nor the updated 'A Briefer History of Time' (Amazon link) by Stephen Hawking and Leonard Mlodinow, the authors of 'The Grand Design'. I guess that book would be the right starting point before reading this one, but I didn't find myself getting lost much; perhaps it helps others. I intend to read 'A Brief History of Time', which I've owned for quite a while now, in the near future.

It's difficult being a genius, figuring out how the universe works, trying to piece together its past and determine its future. It's doubly difficult to write about it in a way that laypersons can understand. Kudos to Stephen Hawking, Leonard Mlodinow and the team behind 'The Grand Design' for doing just that.

PS:  I'm running an experiment again, this time with links to Amazon product pages.  I'm marking the Amazon links separately so you know you'll be going to a company's site.  Let me know how this works -- does the '(Amazon link)' text hurt the flow?  Do you want links to Amazon product pages at all?  Should I make the Amazon link the default?

Syndicated 2010-12-30 19:43:00 (Updated 2010-12-30 19:43:45) from Amit Shah

Fedora Miniconf and foss.in/2010

A very delayed post on the Fedora Miniconf and foss.in/2010.

foss.in/2010 was held on the 15th, 16th and 17th of this month in Bengaluru. I could confirm my attendance only very late, so I missed the CfP and a chance at speaking in the main conference, but I did manage to get a speaking slot in the Fedora miniconf. Thanks to Rahul for accommodating me at short notice.

One of the main things I was looking forward to was meeting my team-mate Juan Quintela. Though we met recently at the KVM Forum 2010, I was going to use this opportunity to catch him and discuss some of the things I'm working on that overlap with his domain, virtual machine live migration, and get things going.

The other thing was to get to know more people: Fedora users and developers from India whom I've spoken with on the IRC channel but never met, and other developers and users of free software from around the world. Add to that a few people I've worked with but not met, and people whose software I use daily and whom I want to thank for working on what they do.  It was also nice meeting the old known faces from the IBM LTC in Bengaluru -- Balbir Singh, Kamalesh Babulal, Vaidy, Aneesh K. V., et al.

It's always a certainty that there will be users of the virtualization (particularly KVM) stack around, and it's nice to get a feel for how many people are using KVM, in what ways, how well it works for them, and so on. That's always a motivation.

The Fedora miniconf was on the 16th. The schedules of miniconf talks aren't published by the foss.in people, so it was left to us to do our own advertising and crowd-pulling. Rahul had listed the speakers and the talks on the Fedora foss.in/2010 wiki page. I took out a few print-outs of the talks and assigned a time slot to each, based on the lengths suggested by the speakers and the slot allotted to the Fedora Project for the miniconf. The print-outs were meant to be pasted around the venue to attract attention to Hall C, the remotest section, which was to host the miniconf. However, we just ended up keeping them as handouts at the Fedora stall that we set up. The Fedora stall was quite a crowd-puller. And since it was set up on the second day, we didn't have to compete with the other stalls, which had had their share of attendance on the first day.

The other members of the Fedora crowd, Rahul, Saleem, Arun, Shreyank, Aditya, Suchakra, Siddhesh, Neependra, ... have written about the Fedora stall and their experiences earlier (and linked to from the Fedora foss.in/2010 page).

The Fedora miniconf was a great success, going by the attendance and the participation we had. My talk was the first, and I could see we had a full house. I think it went quite well. It may have been a little disappointing for people who expected demos, but I wanted to aim the talk at people who had a general sense of using and deploying Fedora virt and Fedora on the cloud, and at people who would go and do stuff themselves rather than being handed everything on a silver platter. This resonates with the foss.in philosophy of recent years of being a contributor-oriented conference rather than a user-oriented one, so I didn't mind doing that. Gauging by the response I got after the talk, I believe I was right. (I even got an email from the CEO of a company saying it was a great talk.)

The other talks in the Fedora miniconf were engaging; I learnt quite a bit about what the others are up to. Arun's talk on packaging emacs extensions was entertaining. He connects with the audience; I liked that about him.

Aditya's talk on Fedora Summer Coding was a good call for students to participate in the free software world via Fedora's internship programme. He narrated his own experience as a Fedora Project intern, which touched the right chords with the intended audience. I think doing more such talks will get him over the jitters of presenting to a big crowd.

Suchakra's doing good work on accessing an embedded Linux box via a console inside a browser tab -- it's a very interesting project.

Neependra's talk was a good walk-through of using tracing commands to see what really happens in the kernel when a userspace program runs. He walked through the 'mkdir' command and showed the call trace. This was a good demo. He spoke about the various situations in which tracing tools could be used, not just for debugging, and that should have set people's thoughts in motion as to how they could get more information on how the system behaves instead of just using a system.

Shreyank's talk on creating a web tool for managing student projects and the Fedora Summer of Code was interesting as well. It was nice to see the way an actual student project was designed and developed and how it's going to make future students' and mentors' lives easier. This talk should have served as a good introduction to the flow and process students have to go through in applying, starting, reviewing and completing their project.

Apart from the Fedora miniconf, I attended a few sessions in the main conf. James Morris's keynote on the history of the security subsystem in the Linux kernel was very informative. Rahul's keynote on the 'Failures of Fedora' was packed with anecdotes and analyses of the decisions taken by the Fedora project and their impact on users and developers. Fedora (earlier Red Hat Linux) is one of the oldest distributions around, and any insight into its functioning, and data on what works and what doesn't, is a great resource for building engaged communities of users and contributors.

Lennart's two talks on systemd and the state of surround sound on Linux were not very new to me. However, there were a few bits in there that provided some food for thought.


Juan's talk on live migration was packed with experiences of getting qemu to a state where migration works fairly well. He also spoke about all the work that's left to do. It was thoroughly technical, and I think the people who were misled by its 'sysadmin' label, or by the title (expecting to migrate from an old physical machine to a new physical machine without downtime), quickly left the hall. Those who stayed were either people who work on QEMU/KVM (esp. the folks from the IBM LTC) or people too polite to walk out.

Dimitris Glezos's talk on building large-scale web applications was very informative for me. I've never done web programming (except for HTML, CSS and a bit of PHP ages ago), and this was a good introduction to the various web development frameworks out there: their pros and cons, how to deploy them, how to structure them, and so on. It was evident he'd taken a lot of effort over the slides and the talk; it was totally worth it.

Danese Cooper's keynote on the Wikimedia Foundation was an equally informative talk. She spoke on a wide range of topics, including the team that makes up Wikimedia, their servers and datacentres, their load balancing strategy, their backup systems, their editing process, their localisation efforts, their search for a new mirror site in the APAC region, etc. I was interested in one aspect, machine-readable wikipedia content, to which they had a satisfactory answer: they're migrating to semantic web content and would look at a machine-readable API once they're done adding semantics to their content.

The rest of the time was spent at the Fedora booth and talking to Juan and other friends.

The foss.in team announced this would be the last foss.in, so thanks to them for hanging around so long. To fill the void, we're going to have to step up and organise a platform for like-minded people from the free/open source software community around here. I've been part of organising some events earlier in different capacities, and I'm looking forward to being part of an effort that provides such a platform. There's a FUDCon being planned for next year in Pune, I'll be involved in it, and will take things along from there.

Syndicated 2010-12-30 05:21:00 (Updated 2010-12-30 05:21:43) from Amit Shah

Auto-login to web proxies using NetworkManager

My ISP uses a web proxy that one has to log into before accessing the Internet. This logging in is a manual, repetitive process, and easily automatable. So I embarked on a project of a few hours: talk to the proxy, supply login credentials, and configure NetworkManager to auto-login by running the script each time a connection comes up.

It's not just ISPs -- hotel and airport wifi networks all use such web-based proxies that one has to log into before the 'net becomes accessible. So the steps I followed can easily be adapted by others to add support for auto-logging into such web proxies.

I'll get to the details in a bit, but first, here's the code (licensed under the GPL, v2). It's written in Python, a language that's relatively new to me. I've written a couple of small programs before, but those were just enough to remind me of the syntax; I had to frequently look up the Python docs for a lot of the details, like interacting with HTTP servers, cookie management, config file management and so on. My C-style writing of the Python script might be evident: someone with more experience in Python could probably shorten or optimise it.

My ISP, Tikona Digital Networks, uses a somewhat roundabout way of bringing up the login page: for any URL accessed before the proxy login, it first returns an HTTP page with a redirect URL and a 'Please wait while login page is loaded' message. The page redirected to shows another 'Please wait' message, sets a cookie, and POSTs to the real login page after a 5-second timeout. The real login page asks for the username and password. After providing that info, one clicks the Login button, which translates to a JavaScript-based POST request, and if the username/password match the ones in their database, we're authenticated to the web proxy. The proxy doesn't interfere with any further 'net access.

Now that I've given a rough overview of the approach, I'll detail the steps I took to get this script ready:

Step 1: Follow the redirect URL

Open a browser, type in some URL -- say 'www.google.com'. This always resulted in a page that asked me to wait while it went to the login page.

OK, so time for a short python script to check what's happening:


import urllib

# Fetch some URL; before the proxy login, the request never reaches
# google.com -- the proxy intercepts it and serves its own page instead.
f = urllib.urlopen("http://www.google.com")
s = f.read()
f.close()

print s

This snippet accesses the google.com website and dumps on the screen the result of the http request.

Here's the dump that I get before the login.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Please wait while the login page is loaded...</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE"/>
<META HTTP-EQUIV="EXPIRES" CONTENT="-1"/>
<META HTTP-EQUIV="Refresh" CONTENT="2;URL=https://login.tikona.in/userportal/?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943">
</head>
<body>
<p align="center">Please wait...<p>
Please wait while the login page is loaded...
<!---
<msc>
<login_url><![CDATA[https://login.tikona.in/userportal/NSCLOGIN.do?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&mac=00%3a16%3a01%3a8e%3a06%3a92&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943]]></login_url>
<logout_url><![CDATA[https://login.tikona.in/userportal/NSCLOGOUT.do?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&mac=00%3a16%3a01%3a8e%3a06%3a92&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943]]></logout_url>
<status_url><![CDATA[https://login.tikona.in/userportal/NSCSTATUS.do?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&mac=00%3a16%3a01%3a8e%3a06%3a92&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943]]></status_url>
<update_url><![CDATA[https://login.tikona.in/userportal/NSCUPDATE.do?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&mac=00%3a16%3a01%3a8e%3a06%3a92&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943]]></update_url>
<content_url><![CDATA[https://login.tikona.in/userportal/NSCCONTENT.do?requesturi=http%3a%2f%2fgoogle%2ecom%2f&ip=113%2e193%2e150%2e95&mac=00%3a16%3a01%3a8e%3a06%3a92&nas=tikonapune&requestip=google%2ecom&sc=5a54aa1fd2de7a9c2b92a865de55b943]]></content_url>
</msc>
-->

</body>
</html>

This shows there's a redirect that'll happen after the timeout (the META HTTP-EQUIV="Refresh" line). The redirect is to the link shown.

Step 2: Get the redirect link

So now our task is to get the link from the http-equiv header and open it later. With a regular expression, we can strip the surrounding text and obtain just the link:

from re import search

refresh_url_pattern = "HTTP-EQUIV=\"Refresh\" CONTENT=\"2;URL=(.*)\">"
refresh_url = search(refresh_url_pattern, s)

The URL to access is then available in refresh_url.group(1); group(1) holds the substring matched by the parenthesised part of the pattern.

Now open the page obtained in the refresh URL:

f = urllib.urlopen(refresh_url.group(1))
s = f.read()

s now contains:

<html>
<head>
<title>Powered by Inventum</title>
<SCRIPT>
function moveToLogin() {
setTimeout("loadForm()",500);
}
function loadForm(){
document.forms[0].action="login.do?requesturi=http%3A%2F%2Fgoogle.com%2F&act=null";
document.forms[0].method="post";
document.forms[0].submit();
}
</SCRIPT> 
</head>
<body onload="moveToLogin();">
<FORM>
Loading the login page...
</FORM>
</body>
</html>

Step 3: Get the base URL, open login page

So this page does an HTTP POST request. The URL of the new page being loaded is relative to the current one, so we have to extract the base URL from the redirect URL obtained earlier.

base_url_pattern = "(http.*/)(\?.*)$"
base_url = search(base_url_pattern, refresh_url.group(0))

The base URL is then available via base_url.group(1). This regular expression isolates the text before the first '?', as found in the refresh URL above.

So now we have to load the login.do page, which is at 'https://login.tikona.in' and is to be passed the parameters '?requesturi=http%3A%2F%2Fgoogle.com%2F&act=null'. This calls for another regular expression to isolate the 'login.do...' part from the 'action' part of the POST request above.

load_form_pattern = ".*action=\"(.*)\";"
load_form_id = search(load_form_pattern, s)
load_form_url = base_url.group(1) + load_form_id.group(1)

load_form_url is now the URL we need to access to get to the login page:

f = urllib.urlopen(load_form_url)
s = f.read()

This should get our login page.

But it doesn't. After spending some time checking and double-checking what was happening, I couldn't see anything going wrong. There was just one more thing to try: cookies. I disabled cookies in firefox and tried accessing the page. Voila, no login page.

Step 4: Enable cookie handling

So we now have to enable cookies in our Python script to be able to get at the login form. The urllib2 and cookielib libraries do that for us, so a slight rewrite of the code gets us to this:

import urllib, urllib2, cookielib, ConfigParser, os
from re import search

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

f = opener.open("http://google.com")
s = f.read()

All other open calls (urllib.urlopen) are now replaced by opener.open. This way, cookies are handled for the session, and the login page appears after accessing load_form_url:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>Tikona Digital Networks</title>
<link rel="stylesheet" type="text/css" href="/userportal/pages/css/style.css" />
<script language="JavaScript" src="/userportal/pages/js/cookie.js"></script>
<script language="JavaScript" src="/userportal/pages/js/common.js"></script>

</head>

<body>
<form name="form1">
<div id="wrap">
<div class="background_login">
<div class="logo_header">
<div class="logoimg"><img src="/userportal/pages/images/logo.jpg" alt="Tikona Digital Networks" /></div>
<div class="sitelink"><a href="http://www.tikona.in" target="_blank">www.tikona.in</a></div>

</div>
<div class="clear"></div>
<div class="login_box">
<div id="right_curved_block">
<div class="blue_head">
<div class="blue_head_right">
<div class="blue_head_left">&nbsp;</div>
<div class="hdng">Login</div>
</div>
</div>

<div class="clear"></div>
<div class="block_content">
<div class="form">
<table height="100%" border="0" cellpadding="0" cellspacing="0">

<tr>


<td width="126"><label>Service Type</label></td>
<td width="200" align="left" valign="middle">
<select name="type"><option value="1">Check Account Details</option>

<option value="2" selected="selected">Internet Access</option></select>
</td>

</tr>
<tr>
<td width="126"><label>User Name</label></td>
<td width="200" align="left" valign="middle"><input type="text" name="username" value="" class="logintext">

</td>
</tr>
<tr>
<td width="126"><label>Password</label></td>
<td width="200" align="left" valign="middle"><input type="password" name="password" value="" class="loginpassword"></td>
</tr>
<tr>
<td width="126"><label>Remember me</label></td>

<td width="200" align="left" valign="middle"><div style=" width:30%; float:left;"><input name="remeberme" id="rememberme" type="checkbox" class="checkbox"/></div>
<div style=" width:70%; float:right;"><a href="javascript:savesettings()"><img src="/userportal/pages/images/login.gif" alt="" width="117" height="30" hspace="0" vspace="0" border="0" align="right" /></a></div></td>
</tr>
</table>
</div>
</div>
<div class="clear"> </div>
<div class="white_bottom">
</div>
</div>
</div>

<div class="tips_box">
<div class="v_box">
<div id="tips_block">
<div class="white_head_v">
<div class="blue_head_right">
<div class="white_head_left_v">&nbsp;</div>
<div class="wbs_version">&nbsp;</div>
</div>
</div>
<div class="clear"></div>
<div class="block_content">
<div class="scrol">
<h1>Importance of Billing Account Number</h1>
<br />

<font size="2">
<ul>
<li>Billing Account Number (BAN) is a 9 digit unique identification number of your Tikona Wi-Bro service bill
account. It is mentioned below your name and address in the bill.</li>
<li>Bill payments done through cheque or demand draft should mandatorily have BAN mentioned on them. <br />
<span style="color:#558ed5">Example:</span> Cheque or demand draft should be drawn in the name of &lsquo;
Tikona Digital Networks Pvt. Ltd. a/c xxx xxx xxx&rsquo;. Here &lsquo;xxx xxx xxx&rsquo; denotes your BAN.
</li>
<li>If the BAN is not mentioned or incorrectly mentioned on the cheque or demand draft, the bill amount does 
not get credited against your Tikona Wi-Bro service account.</li>
<li>In case you have paid bill through cheque or demand draft without mentioning BAN on it and the amount is 
not credited to your Tikona billing account, then please contact TikonaCare at 1800 20 94276. Kindly furnish 
your cheque number, service ID, BAN and bank statement for payment verification.</li>
</ul>    
</font><br />
</div>
</div>
<div class="clear"> </div>
<div class="white_bottom">&nbsp;</div>
</div>
</div>

</div>
<div style="padding:110px 0 0 0; float:left; width:100%;">
<div class="helpline">
Tikona Care: 1800 20 94276  | <a href="mailto:customercare@tikona.in">customercare@tikona.in</a></div>
</div>

<div class="footer_line">&nbsp;</div>
<div class="footer_blueline"></div>

<div class="footer">
Copyright &copy; 2009. Tikona Digital Networks. All right Reserved.
</div>
</div>
</div>
<input type="hidden" name="act" value="null">
</form>
</body>
</html>

Step 5: Login

OK, this page doesn't say what exactly happens after the username/password is entered. There's no POST action. Instead, clicking the login.gif image calls the savesettings() function, which lives in the cookie.js file:

function savesettings()
{

if (document.forms[0].rememberme.checked)
{ 
createCookie('nasusername',document.forms[0].username.value,2);
createCookie('type',document.forms[0].type.value,2);
createCookie('nasrememberme',1,2);

}
else{
eraseCookie('nasusername');
eraseCookie('type');
eraseCookie('nasrememberme');
}
document.forms[0].action = "newlogin.do?phone=0";
document.forms[0].method = "post";
document.forms[0].submit();
return true;      
}

OK, so the page 'newlogin.do' is to be opened in response to clicking the login button, with the username and password info passed along, of course.

We already have the base URL from the login page we just used. Now we combine it with the 'newlogin.do' page instead of the 'login.do' page we accessed earlier:

login_form_id = "newlogin.do?phone=0"
type = "2"

login_form_url = base_url.group(1) + login_form_id

login_data = urllib.urlencode({'username': username, 'password': password,
                               'type': type})

f = opener.open(login_form_url, login_data)

... and success!  This is enough to get the login done.  I added config file handling to the final code so that the username/password are stored in a config file.  The final code also ensures we're on a Tikona network before proceeding with the login steps (by checking whether the redirect URL is obtained in Step 1).  See the latest code here.
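
For the curious, the config file handling amounts to just a few lines with ConfigParser; here's a minimal sketch (the file path and section/option names are illustrative, not necessarily what the actual script uses):

import os
import ConfigParser

# Read credentials from a simple ini-style file instead of hardcoding them;
# they then go into the urlencode() call shown above.
config = ConfigParser.ConfigParser()
config.read(os.path.expanduser("~/.tikona-auto-login.conf"))
username = config.get("login", "username")
password = config.get("login", "password")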

Step 6: Auto-login on successful connection

Just one last step remains: a NetworkManager dispatcher script that will invoke this login program each time a network becomes ready:

#!/bin/sh

if [ "$2" = "up" ]; then
/home/amit/bin/tikona-auto-login || :
fi

Put this in /etc/NetworkManager/dispatcher.d with the appropriate permissions (744) and we're good to go!


Next steps:
The project surely isn't complete: a lot of support has to be added to NetworkManager itself to present a good UI for enabling/disabling these dispatcher scripts, and to prompt for a username/password instead of storing them in a config file. This and several other TODO items are listed in the README file. If you plan on adding new networks that can be auto-logged into, it's easy to follow these steps -- or feel free to email me for guidance.

Syndicated 2010-09-28 13:55:00 (Updated 2010-09-28 13:55:59) from Amit Shah


Communication between Guests and Hosts

Guest and host communication should be a simple affair -- the venerable TCP/IP socket should be the first answer for any remote communication.  However, it's not so simple once some virtualisation-specific constraints are added to the mix:

  • the guest and host are different machines, managed differently
  • the guest administrator and the host administrator may be different people
  • the guest administrator might inadvertently block IP-based communication channels to the host via firewall rules, rendering the TCP/IP-based communication channels unusable

The last point needs some elaboration: system administrators want to be really conservative about what they "open" to the outside world.  In this sense, the guest and host administrators are actively hostile to each other.  And rightly so: neither should trust the other, given that a lot of the data stored in operating systems now lives within clouds, and any leak could prove disastrous to the administrators and their employers.

So what's really needed is a special communication channel between guests and hosts that isn't susceptible to being blocked by either side -- a special-purpose, low-bandwidth channel that doesn't try to re-implement TCP/IP.  Some other requirements are mentioned on this page.

After several iterations, we settled on one particular implementation: virtio-serial.  The virtio-serial infrastructure rides on top of virtio, a generic para-virtual bus that enables exposing custom devices to guests.  virtio devices are abstracted enough so that guest drivers need not know what kind of bus they're actually riding on: they are PCI devices on x86 and native devices on s390 under the hood.  What this means is the same guest driver can be used to communicate with a virtio-serial device under x86 as well as s390.  Behind the scenes, the virtio layer, depending on the guest architecture type, works with the host virtio-pci device or virtio-s390 device.

The host device is coded in qemu.  One host virtio-serial device can host multiple channels, or ports, on the same device.  The number of ports riding on top of one virtio-serial device is currently arbitrarily limited to 31, but a device could very well support 2^31 ports.  The device is available since the upstream qemu 0.13 release, and in Fedora from release 13 onwards.

The guest driver is written for Linux and Windows guests.  The API exposed includes open, read, write, poll and close calls.  For the Linux guest, ports can be opened in blocking as well as non-blocking modes.  The driver is upstream since Linux kernel version 2.6.35.  Kernel 2.6.37 will also have asynchronous I/O support -- i.e., SIGIO is delivered to interested userspace apps whenever the host-side connection is established or closed, or when a port gets hot-unplugged.

Using the ports is simple: when using qemu from the command line directly, add:

-chardev socket,path=/tmp/port0,server,nowait,id=port0-char \
-device virtio-serial \
-device virtserialport,id=port1,name=org.fedoraproject.port.0,chardev=port0-char

This creates one device with one port and exposes the name 'org.fedoraproject.port.0' to the guest.  Guest apps can then open /dev/virtio-ports/org.fedoraproject.port.0 and start communicating with the host.  Host apps can open the /tmp/port0 unix domain socket to communicate with the guest.  Of course, qemu chardev backends other than unix domain sockets can be used.  There's also an in-qemu API.
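
To make that concrete, here's a minimal sketch of both ends in Python (error handling omitted; the first half runs on the host, the second inside the guest):

# Host side: connect to the unix domain socket qemu created for the chardev.
import socket

host_end = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
host_end.connect("/tmp/port0")
host_end.sendall("ping\n")
print host_end.recv(4096)           # the guest's reply

# Guest side: the port shows up as a char device named after the
# virtserialport 'name' property; open it unbuffered.
guest_end = open("/dev/virtio-ports/org.fedoraproject.port.0", "r+", 0)
print guest_end.readline()          # "ping" from the host
guest_end.write("pong\n")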

More invocation options and examples are given in the invocation and how to test sections. 

There is sample C code for the guest as well as sample Python code from the test suites.  The original test suite, written to verify the functionality of the user-kernel interface, will be moved to autotest in the near future, enabling faster addition of more tests -- tests that not only check for correctness, but also for regressions and bugs.

virtio-serial is already in use by the Matahari, Spice, libguestfs and Anaconda projects.  I'll briefly mention how Anaconda uses it: starting with Fedora 14, guest installs of Fedora automatically send Anaconda logs to the host if a virtio-serial port named 'org.fedoraproject.anaconda.log.0' is found.  virt-install has been modified to create such a port.  This makes debugging early anaconda output easier, with the logs available on the host (and no worrying about guest file system corruption during install, or network drivers not being available before a crash).

Further use: There are many more uses of virtio-serial, which should be pretty easy to code:
  • shutting down or suspending VMs when a host is shut down
  • clipboard copy/paste between hosts and guests (this is in progress by the Spice team)
  • lock a desktop session in the guest when a vnc/spice connection is closed
  • fetch cpu/memory/power usage rates at regular intervals for monitoring

Syndicated 2010-09-14 10:49:00 (Updated 2011-02-16 06:48:14) from Amit Shah

Upgrading from Fedora 11 to Fedora 13

Having already installed (what would become) F13 on my work and personal laptops the traditional way -- with a fresh install, since I wanted to change the partition layout -- I tried an upgrade on my desktop.

My desktop was running Fedora 11 and I moved it to Fedora 13. I wanted to test how the upgrade functionality works: does it run into any errors (esp. going from 11 to 13, skipping 12 entirely), is the experience smooth, and so on.

I started out by downloading the RC compose from http://alt.fedoraproject.org/. Since all my installs are for the x86-64 architecture, I downloaded the DVD.iso. I then loopback-mounted the DVD on my laptop:


# mount -o loop /home/amit/Downloads/Fedora-13-x86_64-DVD.iso /mnt/F13

I then exported the contents of the mount via NFS; edit /etc/exports and put the following line:

/mnt/F13 172.31.10.*

This ensures the mount is only available to users on my local network.

Then, ensure the nfs services are running:

# service nfs start
# service nfslock start

On my desktop which was to be upgraded, I mounted the NFS export:

# mount -t nfs 172.31.1.12:/mnt/F13 /mnt

And copied the kernel and initrd images to boot into:

# cp /mnt/isolinux/vmlinuz /boot
# cp /mnt/isolinux/initrd.img /boot

Then update the grub config with this new kernel that we'll boot into for the upgrade. Edit /boot/grub/grub.conf and add:

title Fedora 13 install
    root (hd0,0)
    kernel /vmlinuz
    initrd /initrd.img

Once that's done, reboot and select the entry we just put in the grub.conf file. The install process starts and asks where the install files are located. Select NFS and provide the details: server 172.31.1.12 and directory /mnt/F13.

The first surprise was the updated graphics for the Anaconda installer; they changed in the time since I installed F13 (beta) on my laptops. The new artwork certainly looks very good and smooth. More white, less blue is a departure from the usual Fedora artwork, but it does look nice.



I then selected 'upgrade'; it found my old F11 install, and everything after that 'just worked'. I was skeptical while it was running: I had some rpmfusion.org repositories enabled and packages installed from them. Would those packages be upgraded as well, or left as they were (which could create dependency problems), or completely removed? I had to wait for the install to finish, which took a while. The post-install process took more than half an hour, and when it was done, I selected 'Reboot'. Half-expecting something to be broken, I logged in, and voila, I was presented with the shiny new GNOME 2.30 desktop. The temporary install kernel I had put in as the default boot kernel had also been removed. A small thing in itself, but great for usability.

Everything looked and felt right, no sign of breakage, no error messages, no warnings, just some good seamless upgrade.

I can't say I really expected this. Coming from a die-hard Debian fan, that's saying something: distribution upgrades were, until now, the forte of Debian alone. The Fedora developers have done a really good job of making this process extremely easy to use and extremely reliable. Kudos to them!

While the Fedora 13 release has been pushed back a week for an install-over-NFS bug, it takes a certain combination of misfortunes to trigger, and luckily, I didn't hit it. However, when trying the F13 beta install on my laptop, I did hit a couple of Anaconda bugs: one is now resolved for F14 (a crash when upgrading without a bootloader configuration), and the other (no UI refresh when switching between virtual consoles until a package finishes installing -- really felt while installing over a slow network link) is a known problem with the design of Anaconda; hopefully the devs get to it.

Overall, a really nice experience. I can now comfortably say Fedora has really rocketed ahead (all puns intended) since the old days when even installing packages used to be a nightmare. This is good progress indeed, and I'm glad to note that the future of the Linux desktop is in very good hands.

Cheers to the entire team!

Syndicated 2010-05-13 07:19:00 (Updated 2010-05-13 07:19:24) from Amit Shah

Summercamp artwork

Thanks to Nicu's excellent step-by-step howto on creating artwork with Inkscape and the Open Clip Art Library, I managed to create this today:


This is for a couple of friends who are planning to organise a summer camp for kids in the locality.

Thanks Nicu!

Syndicated 2010-04-15 14:26:00 (Updated 2010-04-15 14:26:23) from Amit Shah
