Older blog entries for etbe (starting at number 1036)

Phone Calls and Other Distractions

Harald Welte has written about the distraction of phone calls and how it impacts engineering work [1]. He asks why people feel that they are entitled to interrupt him given the cost to his work.

Some years ago while working as a programmer I was discussing such things with a colleague who worked for the consulting part of the same company. He was really surprised when I told him that a phone call at the wrong time would cost me at least 30 minutes of work and possibly an hour or more. His work was also quite technical and demanding, but there is a considerable difference between software development (where you need to hold a lot of program state in your head to consider where the problem might be) and consulting (where you deal with a smaller number of configuration file options and sometimes get debugging information to someone like me). So attitudes towards receiving calls also tend to be quite different.

Computer work requires more concentration, thought, and knowledge of system state than many (most?) careers. If someone finds that an unexpected phone call costs them no more than a few minutes of work then it’s quite reasonable for them to phone other people whenever they feel like it – by default people assume that everyone else is just like them.

In terms of managing interruptions to my work, I generally encourage people to email me and that works reasonably well, so I don’t have too many problems with distracting phone calls. I used Jabber for a while a few years ago, but when my Jabber server became corrupt I didn’t reinstall it because of the distraction it caused. I believe that was due to using Jabber in the wrong way: I should have just started a Jabber client when I wasn’t doing anything important and then killed it when I started doing some serious coding. Having a Jabber message interrupt me when I’m watching a TED talk or reading blogs is no big deal. In fact I could tell everyone who has my phone number that if they see me on Jabber then they can phone me, knowing that it won’t distract me from anything serious. I wonder if I could configure a Jabber client to only receive messages when a program such as mplayer is running.
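The core of that idea is simple enough to sketch. This is a hypothetical illustration, not a tested setup: the set of “safe” program names is my assumption, and on Linux the list of running processes could come from pgrep or /proc.

```python
#!/usr/bin/env python3
# Hypothetical sketch: advertise Jabber availability only while a program
# such as mplayer is running. The "safe" process names are assumptions.
SAFE_PROCESSES = {"mplayer", "vlc"}  # running these means I'm interruptible

def presence(running_processes):
    """Presence string a Jabber client could be told to advertise."""
    if SAFE_PROCESSES & set(running_processes):
        return "available"
    return "do not disturb"

print(presence(["mplayer", "bash"]))  # available
print(presence(["gcc", "vim"]))      # do not disturb
```

A wrapper could poll this every 30 seconds or so and start or kill the Jabber client accordingly.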

I have configured my laptop and workstation to never alert me to new mail. If I’m not concentrating then I’ll be checking my email frequently, and if I am concentrating I don’t want a distraction. I have configured my phone to give one brief vibration when it gets mail and not make any sound, so I will only notice it if I’m not concentrating on anything. It’s a standard Android feature to associate ring tones with phone numbers; it’s a pity that the K9 MUA doesn’t allow associating email addresses with notifications. There are some people whose email could usefully trigger an audible alert. There is a K9 feature request from 2009 to allow notifications only when the IMAP flag “Flagged” is set, which would allow the mail server to determine which senders are important, but there’s no sign that it will be implemented soon.
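Until K9 implements that, the flagging could be done server-side. Here is an untested sketch using Python’s imaplib to set the \Flagged flag on unflagged mail from important senders; the host, credentials, and addresses are placeholders, not a real setup.

```python
#!/usr/bin/env python3
# Sketch of the server-side half of that feature request: set the IMAP
# \Flagged flag on mail from important senders, so a client that only
# notifies on \Flagged would alert. All names below are placeholders.
import imaplib

IMPORTANT = {"oncall@example.com", "family@example.com"}  # assumed senders

def should_flag(from_header, important=IMPORTANT):
    """True if the From: header contains an important address."""
    return any(addr in from_header.lower() for addr in important)

def flag_important(host, user, password):
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select("INBOX")
    typ, data = conn.search(None, "UNFLAGGED")
    for num in data[0].split():
        typ, msg = conn.fetch(num.decode(), "(BODY[HEADER.FIELDS (FROM)])")
        header = msg[0][1].decode("utf-8", "replace")
        if should_flag(header):
            conn.store(num.decode(), "+FLAGS", "\\Flagged")
    conn.logout()
```

Run from cron this would approximate the requested behavior without any client changes.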

I’ve started playing with Google+ recently due to Ingress team interaction being managed through it. Google+ seems quite poor in this regard, defaulting to playing a ring tone for lots of different events. Turning that off is easy enough, but getting notifications only about things that are important to me seems impossible. I would like an audible alert when someone makes a Google+ post with an Ingress code (because they expire quickly and because they only seem to be posted at times when I’m not busy) but no audible alerts about anything else. I’m sure that most people who use Google+ would like different notifications for various types of event. But the Android client only has options for whether there should be vibration and/or sound and for which events get notifications – there are no options for different notifications for different events or for treating some community posts differently from others.

It seems that the default settings for most programs suit people who never need to spend much time concentrating on a task. It also seems that most programs don’t offer configuration options that suit the needs of people who do concentrate a lot but who also sometimes receive important phone calls and email. It’s ironic that so many applications are designed in the least optimal way for the type of people who develop applications. The Google+ developers have an excuse as doing what I desire would be quite complex. But there are other programs which should deal with such things in a user friendly manner.

Related posts:

  1. Globalisation and Phone Calls I just watched an interesting TED talk by Pankaj Ghemawat...
  2. A Mobile Phone for Sysadmin Use My telco Three have just offered me a deal on...
  3. Health and Status Monitoring via Smart Phone Health Monitoring Eric Topol gave an interesting TED talk about...

Syndicated 2013-01-18 12:15:22 from etbe - Russell Coker

Cooling Phones

According to the Bureau of Meteorology it is 39C today. But mad dogs and Ingress players go out in the midday sun, so I took advantage of some spare time to capture a couple of portals.

After that my phone battery was apparently at 46C and my phone refused to charge.

It seems that in addition to the range of hardened phone cases we need some cooling cases for phones. A case containing a substance with a melting point of 39C wouldn’t melt from body heat but would set an upper limit on the phone temperature. A Peltier device probably wouldn’t work as it would take too much power (and the batteries supplying the power would produce more heat).
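A back-of-envelope check suggests the phase-change idea is at least plausible. The numbers are my assumptions: paraffin-like waxes melt with a latent heat of roughly 200 J/g, and a phone under load might dissipate around 2 W.

```python
# Rough estimate of how long a phase-change cooling case could absorb
# a phone's heat output. Both constants are assumptions, not measurements.
LATENT_HEAT_J_PER_G = 200   # assumed, typical order for paraffin waxes
PHONE_POWER_W = 2           # assumed sustained dissipation under load

def minutes_of_cooling(pcm_grams):
    """Minutes the melting wax could absorb the phone's heat output."""
    return pcm_grams * LATENT_HEAT_J_PER_G / PHONE_POWER_W / 60

print(round(minutes_of_cooling(30)))  # 50
```

So a 30g wax insert might buy something like 50 minutes of midday Ingress before the battery overheats, which seems worth having.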

I think that the phones with an aluminium back are the best design. Aluminium is light, reflective (unlike the black plastic which is so common), and conducts heat better than most things. A phone shell made of copper probably isn’t viable due to copper being dense and soft.

Another problem is the need for third party cases to protect against damage. If the phone companies designed phones to be solid, rubbery at the edges (to bounce not break) and so that the screen didn’t touch the surface when the phone is face down then we could avoid phone cases which also act as thermal insulation.

I am a bit disappointed in Samsung. I could understand Nokia making phones that don’t survive the heat well, but I don’t think that Korea is that much cooler than Australia. A phone that works well on the hottest day of summer in Seoul should do better than my Galaxy S3.

Related posts:

  1. Dual SIM Phones vs Amaysim vs Contract for Mobile Phones Currently Dick Smith is offering two dual-SIM mobile phones for...
  2. Old Mobile Phones as Toys In the past I have had parents ask for advice...
  3. Efficiency of Cooling Servers One thing I had wondered was why home air-conditioning systems...

Syndicated 2013-01-17 05:58:28 from etbe - Russell Coker

Promoting Enthusiasm

Rusty wrote an insightful post titled “What Can I Do To Help?” about reactions to new ideas [1]. He suggests that people make an effort to have a positive approach when someone talks about a new idea; it’s quite common for people to point out reasons why a new idea might not work, which is discouraging for the person who had it. I think that is a really good point. I probably haven’t done too well in that regard in the past and will try to do better in future.

Code Written by Assholes

Rusty previously wrote a post titled “If you didn’t run code written by assholes, your machine wouldn’t boot” which implies that we should just let assholes be assholes [2]. That doesn’t sit well with his “What Can I Do To Help?” post. Note that I’m not accusing Rusty of hypocrisy here: giving advice to help people who want to get along well with others is not in contradiction with refraining from giving unsolicited advice and encouragement to difficult people who have expressed no interest in improving their behavior. A comment on the latter post by “Doctor Whom” says “If I had seen this kind of talk when I was a teenager, I would have thought twice about picking up coding”. Presumably, given the number of people who read Rusty’s blog, there are some teenagers who were discouraged from a career in computers (or a hobby in FOSS) by Rusty’s post.

I’ve already written a response to the “If you didn’t run code written by assholes” post; among other things I suggested that people who are minor assholes should be assisted to be less difficult and that major assholes should be excluded [3]. In that post I was working on the assumption that for every significant task that needs to be completed (such as making a popular OS bootable) someone will do it; if the person working on it disappears then someone else will take over – there is a community of programmers who will work on whatever needs to be done.

The Importance of Individuals

But in terms of new ideas it really comes down to individuals. Most projects which are significant and important now probably started out as one person or a small group who had an idea that seemed unlikely to succeed at the time. So while any big and successful project can have people replaced (which is among other things a requirement of long-term success) there are situations in which individuals with ideas matter.

Another important factor is that even ideas which turn out to be impractical are still useful. Someone who has an impractical idea about a technical issue and investigates it fully will learn a lot and may end up working on the less radical ways of solving similar problems – this is good for the individual and the community.

Another Way of Promoting Enthusiasm

In terms of promoting enthusiasm it seems that one thing that can be done by high profile people is to avoid writing posts like “If you didn’t run code written by assholes, your machine wouldn’t boot”. When people in positions of power and influence appear to have no interest in promoting good behavior it really discourages people who are vulnerable to the assholes – which among other things means most members of minority groups. Obviously Rusty couldn’t stamp out all asshole behavior, but if he announced a plan to try to make things better in that regard then it would help. It’s difficult to be enthusiastic when faced with discrimination from a minority and disinterest from the majority.

Of course with the way the Internet works I’m sure someone will say “what about the assholes who have great ideas, shouldn’t we nurture their enthusiasm by letting them keep doing asshole things?”. I think that for the major assholes this won’t be a problem, for example anyone who’s racist will be well aware that many people disagree strongly with them and thus won’t be particularly discouraged when they meet more people who disagree. For the minor assholes (people who don’t want to be assholes) it will be somewhat discouraging to be corrected, but that could be a learning experience for them that’s worth more than support in implementing their latest technical idea.

Update: Why Rusty is Important

In response to a comment by private mail I’ve added this section after publication.

Firstly I think that the opinions of all members of the community matter as they all affect the social environment which determines what types of behavior are encouraged and discouraged. But Rusty is more important than most people.

Rusty has a Wikipedia page [4]; that alone is an objective criterion indicating his importance.

But in terms of influencing people in the FOSS community the most important things are that he’s a high profile Linux kernel programmer (which alone gives significant status and influence) and that he’s the founder of the first Linux conference in Australia (which is now known as Linux.conf.au AKA LCA). When issues such as the anti-harassment policy for LCA are being discussed any opinion that Rusty offered would be taken very seriously. But so far he doesn’t seem to be involved in any of the public discussions.

Related posts:

  1. Are Assholes Essential to a Free Software Project? What do Assholes do? Rusty just wrote a post titled...
  2. Terms of Abuse for Minority Groups Due to the comments on my blog post about Divisive...

Syndicated 2013-01-14 13:40:57 from etbe - Russell Coker

Android Multitasking

My new Samsung Galaxy S3 supports “Multi Window Mode”; here is a video which shows how to use it, with the Multi Window Mode section starting at about 2:30 [1].

A common complaint about Android is the lack of multitasking, which is partly true and only slightly alleviated by Multi Window Mode.

Running Multiple Programs

Traditionally Android has multitasked with an ability similar to a Unix shell session: you can have applications running in the background and switch between them, but you can’t see multiple applications running at the same time (apart from seeing notification messages at the top of the screen). The new Multi Window Mode allows two or more applications to share a screen. But it only applies to the small number of applications which support it; on my phone that means applications shipped by Samsung, plus Google Chrome. Also I can’t have multiple Chrome windows open at the same time, which means I can’t do the things that I do on every PC that runs Chromium (the non-Google build of Chrome).

I have not yet found a situation where Multi Window Mode has been useful to me; the applications I use for most tasks don’t support it.

So while Android, being based on Linux, multitasks really well in the technical computer-science sense, it doesn’t do so well in the user-centric sense. In practice Android multitasking is mostly about task switching and doing things like checking email in the background. Having multiple programs running at once is particularly difficult due to the Android model of applications sometimes terminating when they aren’t visible. A common task is to view a message in a MUA and switch between that and another window (e.g. a web browser). K9, my preferred MUA for Android, seems to have no option to switch back and still be viewing the same message as before the task switch – so at least three actions are needed to get back to the same message after I resume K9. One major feature to make multitasking on Android more usable would be a way of rapidly switching between two applications and being certain that each one would be in the same state when the user switches back to it.

Copy and Paste

Another problem with Android multitasking is the difficulty of copying and pasting text. While copy/paste is not strictly required for multitasking, it is a logical requirement when you have multiple applications running on behalf of a single user. On PCs everyone knows that you select text by holding down the SHIFT key while using the cursor control keys, or by holding down the mouse button and swiping the mouse cursor over the text, and you then use CTRL-C or the Edit menu to copy it. On Android it’s a long press to select text, which gives you markers for the start and end; you drag those markers around and then select that you want to copy the text. This is at best a lot more inconvenient than using a high resolution input device like a mouse to select text. At worst it doesn’t seem particularly reliable; K9 Mail for example won’t let me copy text from a message for unknown reasons – on a desktop OS such problems are so rare that I can’t think of an example of one happening.

IO and Multitasking

Multitasking for a user (as opposed to the multitasking needed to host dozens or hundreds of concurrent users) on Unix servers was very limited in the days of VT100 terminals and similar devices. Programs such as GNU Screen [2] allowed a text display to perform windowing functions that are similar to a modern GUI. But generally it seems that the ability for a user to run multiple programs at once is largely limited by their ability to see the output and to rapidly switch between programs or sessions.

As a keyboard with ~100 keys and a text display with 80*25 characters is a major limitation it’s obviously going to be a comparable (and often greater) limitation to have an on-screen “keyboard” that takes half the screen space, a single program taking all the screen, and a drop-down status bar that might be useful for multitasking.

With Android 4.0 and above you can activate a task switcher by holding down the home button for two seconds [3]. There are also a variety of third party task switching programs on the Google Play store which all seem to start by holding down the home button. One problem with these options is that they require an extended press of the home button where the ideal is something that is as quick as ALT-TAB or a single mouse movement on a desktop system. One possible input action would be to switch between the most recent tasks with a palm swipe on the screen – an operation that is quick and easy. Currently a palm swipe is used for a screen capture, but as there are four possible directions for swiping the screen one of them could be used for screen capture and another two for task switching. But this wouldn’t do a lot of good without the ability to switch tasks without losing the context – that either requires Android application changes or having the OS not tell an application that it was occluded.

The iPaQ had interesting capabilities for input, it had a main button on the front that could be pushed in four directions (usually for cursor control) as well as being pushed in, four extra buttons on the front, and a button on the side [4]. I don’t recall the methods that Familiar Linux on the iPaQ [5] used for task switching, but it was less of an issue as the iPaQ had less RAM, no Internet access and no telephony functions. I think that adding a bunch of extra buttons to an Android phone would make it a lot more useful. The iPaQ method of cursor control is one that could be considered (it could alleviate the copy/paste problems among other things). As an aside in two years of using Android phones I’ve done less serious writing on Android than I did in my first two years of using Familiar on the iPaQ largely due to the lack of keys and a stylus on my Android phones.

The screen size on Android phones is also a limit for multitasking. The earlier/cheaper phones that have small screens with resolutions such as 320*480 have very limited ability to display two programs at once. The 720*1280 display in the Galaxy S3 has a lot more potential in this regard and the Galaxy Note 2 and the HTC J Butterfly AKA Droid DNA have even more potential. In the future it seems that screen size limitations on phone multitasking will be a solved problem for everyone who can afford one of the high end phones – which incidentally are much cheaper than the iPaQ was 10 years ago.

Conclusion

Checking email in the background etc is very useful on Android systems. But in terms of the user running two programs at once it seems very limited, and that situation seems likely to remain until the vendors adopt multi-window support. This could be difficult given that Google applied a lot of pressure to CyanogenMod to stop them from doing it [6].

Even when a system with a large display (particularly a tablet) runs a version of Android that supports multi-window mode that still won’t entirely solve the problem. No matter how big your display is there are occasions when you need to use it all for a single application while still having the ability to rapidly switch to another application. User interface tweaks to allow rapid task switching without losing application context are necessary.

Finally for the past two years that I’ve been using Android devices I have been disappointed in the ways that they compare poorly to the iPaQ running Familiar I used 10 years ago. I once wrote a feature article for a magazine on an iPaQ while so far I haven’t even written a blog post on an Android device. I think that some of the earlier Android devices might have been better in some ways, the trackball on the HTC Nexus One might have made it more suitable for writing long articles than more recent Android devices.

Related posts:

  1. Review of the EeePC 701 I have just bought a EeePC 701 [1], I chose...
  2. Standardising Android Don Marti wrote an amusing post about the lack of...
  3. Love of Technology at First Sight After seeing the Retina display I’ve been thinking about the...

Syndicated 2013-01-13 09:38:44 from etbe - Russell Coker

The Death of the Netbook

The Age has an interesting article about how Apple supposedly killed the Netbook [1]. It’s one of many articles with a similar spin on the news that the last two companies making Netbooks are going to cease production. The main point of these articles is that Apple decided that Netbooks were crap and killed the market for them by producing tablets and light laptops that squeeze them out of the market.

Is the Macbook Air a Netbook?

According to the Wikipedia page the Macbook Air [2] weighs 1080g for the 11″ version and 1340g for the 13″ version. According to Wikipedia the EeePC 701 (the first EeePC) weighs 922g and the last EeePC weighs 1460g [3]. The last EeePC produced is heavier than ANY Macbook Air while the first (and lightest) EeePC is only 158g lighter than the 11″ Macbook Air.

The 11″ Macbook Air is 300*192*17mm (979cm^3) in size while the EeePC 701 is 225*165*35mm (1299cm^3) and the biggest EeePC was 266*191*38mm (1931cm^3). So the 11″ Macbook Air is 13% wider than the widest EeePC but takes less volume than any EeePC. The 13″ Macbook Air is 325*227*17mm (1254cm^3) which is still less volume than any EeePC. The Wikipedia page about Netbooks defines them as being small, lightweight, legacy-free (in terms of hardware not software) and cheap [4]. The Macbook Air clearly meets all the criteria apart from price.
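The volume figures above can be checked with a few lines of arithmetic (dimensions in mm, volumes in cm^3):

```python
# Verify the volume comparisons: width * depth * height, mm^3 -> cm^3.
def volume_cm3(width, depth, height):
    return width * depth * height / 1000

print(round(volume_cm3(300, 192, 17)))  # 11" Macbook Air: 979
print(round(volume_cm3(225, 165, 35)))  # EeePC 701: 1299
print(round(volume_cm3(266, 191, 38)))  # largest EeePC: 1931
print(round(volume_cm3(325, 227, 17)))  # 13" Macbook Air: 1254
```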

The Apple US web site offers the version of the 11″ Macbook Air with 64G of storage for $999 with free shipping, for comparison the EeePC 701 was on sale in stores for $500 in 2008. The CPI adjusted price for the EeePC 701 would be at least $550 in today’s money. The Macbook is a bit less than twice as expensive as the EeePC was, but that’s more of an issue of Apple being expensive – a few years ago companies like HP were also selling Netbooks that were more expensive than the EeePC.

Unless having an awful keyboard is a criterion for being a Netbook, I think that the Macbook Air meets the criteria.

As an aside, a relative recently asked me for advice on a device that is like a Macbook Air but cheaper. Does anyone know of a good option?

Is Netbook Production Ceasing?

Officeworks currently sells an ASUS “Notebook” that has an 11.6″ display and weighs 1.3kg for $398; it’s got a metal body that looks a bit like a Macbook Air (which is the latest fashion and is good for heat dissipation). That’s not advertised as a Netbook or an “Eee” product but it’s cheap, lighter than the heaviest EeePC, and not much bigger than an EeePC.

It seems that the general prices of laptops other than Apple products (which have always had higher prices) have been dropping a lot recently. There are lots of good options if you want a laptop that costs $500 or less. Even Thinkpads (one of the most expensive and best designed ranges of laptops) are well below $1000.

Do the Articles about Netbooks Make Sense?

The claims being made are that Apple skipped Netbooks because they couldn’t make a good profit. This disregards the fact that the iPhone and iPad (which are very profitable) are in the high end of the price range that was occupied by Netbooks. While Apple does make a good deal of money from the iPhone App Market, it would be possible to make a Netbook with a lower production price than an iPhone because making things smaller requires more engineering work and often more expensive parts. This also disregards the fact that there is a range of devices which work as an iPad case with keyboard; an iPad with such a keyboard meets most criteria for being a Netbook, so Apple is one iPad keyboard device away from selling Netbooks.

It’s interesting to note that I haven’t yet seen an article about the profits from Netbooks which didn’t make an issue of the MS-Windows license fees. The first Netbooks only ran Linux but later models switched to Windows; that had to make a big impact on profits. An article about Netbooks which just assumes that everyone has to pay an MS license fee is missing too much of the Netbook history to be useful. I wonder if anyone could make products that are as profitable as the iPhone and Macbook Air if they had to pay MS license fees and design their hardware to work with MS software (as opposed to Apple, who can change their software to allow a cheaper hardware design).

The articles also claim that Netbooks give a bad user experience. When I bought my EeePC 701 it was the fastest system I owned for loading OpenOffice; SSD random read speeds were really good (writes sucked but that didn’t matter so much). The keyboard on an EeePC 701 is not nearly as good as a full size laptop’s, but it is also a lot better than using a tablet; I’ve used both a 10″ Android tablet and an EeePC as an ssh client and there is no comparison. When I’m going somewhere that requires random sysadmin work (or other serious typing) and I can’t carry much weight then I still take my EeePC 701 and I don’t consider taking a tablet. The low resolution of the screen is a major issue, but it’s about the same as a Macbook Air’s so that’s not an advantage for Apple. I knew some people who used an EeePC 701 for the majority of their work; I couldn’t do that but obviously some people have different requirements.

I now use my phone for many tasks that I used to do on my EeePC (even light sysadmin work) so my EeePC sometimes goes unused for months. But it’s still an important part of my collection of computers. It works well for what it does and I don’t feel any need to buy a replacement. When it wears out I’ll probably buy something similar to an 11″ Macbook Air to replace it unless there’s a good option of a tablet with a detachable keyboard.

My plans for computer ownership for the near future are based on a reasonably large Android phone (currently a Samsung Galaxy S3 but maybe a Galaxy Note 2 or similar next year), a small laptop or large tablet with hardware keyboard (currently an EeePC 701), a large laptop (currently a Thinkpad T61), and a workstation (currently a NEC system with an Intel E4600 CPU and a Dell U2711 27″ monitor). A reasonably small and light system with a hardware keyboard and solid state storage is an important part of my computer needs. If tablet computers with hardware keyboards replace traditional Netbooks that’s not really killing Netbooks but introducing a new version of the same thing.

But a good way of getting web hits on an article is to claim that a once popular product is dead.

Related posts:

  1. How to Choose a NetBook I’ve previously written some suggestions for people choosing a portable...
  2. The Always Innovating Smartbook/Netbook Always Innovating have an interesting netbook that can be detached...
  3. My Ideal Netbook I have direct knowledge (through observation or first-hand reports) of...

Syndicated 2013-01-08 12:51:49 from etbe - Russell Coker

Links January 2013

AreWomenHuman has an interesting article about ViolentAcrez and the wide support for trolling (including by media corporations) [1].

Chrys Stevenson wrote an important article for the ABC about the fundamentalist Christians who are trying to take over the Australian education system [2].

Tavi Gevinson gave an interesting TED talk titled “A teen just trying to figure it out” about her work starting Rookie magazine and her ideas about feminism [3].

Burt Rutan gave an interesting and inspiring TED talk about the future of space exploration [4]. One of his interesting points is that “fun really is defendable” in regard to tourism paying for the development of other space industries.

Stephen Petranek gave an interesting TED talk about how to prepare for some disasters that could kill a significant portion of the world’s population [5]. Some of these are risks of human extinction; we really need to spend some money on mitigating them.

John Wilbanks gave an interesting TED talk about the way that current informed consent laws prevent large-scale medical research [6]. He says “I live in a web world where when you share things beautiful stuff happens, not bad stuff”.

Joey Hess was interviewed for The Setup and the interview sparked a very interesting Hacker News discussion about workflow for software development [7]. Like most developers I prefer large screens with high resolution. I have an EeePC 701 which works reasonably well as an ultra-portable system, but I largely don’t use it now that I have an Android phone (extremely portable and totally awful input usually beats moderately portable and mostly awful input for me). But Joey’s methods are interesting and it seems that for some people different systems give the best result.

Jeff Masters gave an insightful TED talk about the weather disasters that may seriously impact the US in the next 30 years [8]. Governments really need to start preparing for such things; some of them are really cheap to mitigate if work is started early.

Bryan Stevenson gave an inspiring TED talk about the lack of justice in the US justice system [9].

Wouter Verhelst wrote an insightful article about some of the criticisms of Linux from Windows users [10]. He references a slightly satirical post he previously wrote about why Windows isn’t ready for desktop use.

Paul Carr wrote an interesting article comparing “disruptive” business practices of dot-com companies to the more extreme aspects of Ayn Rand’s doctrine [11]. In reading some of the links from that article I discovered that Ayn Rand was even more of a sociopath than I had previously realised.

Lindy West gave an amazing Back Fence PDX talk about dealing with nasty blog comments from the PUA/MRA communities [12]. After investigating them she just feels sorry for the trolls whose lives suck.

Hang from the Vlogbrothers explains gender, sex, sexual orientation, etc [13].

Rick Falkvinge wrote an interesting article about recent political news from Brazil: a proposed law that was very positive for liberty on the Internet was sabotaged by the media and telcos [14]. We should try to avoid paying any money to the media industry so that it can go away sooner.

Amy Cuddy gave an interesting TED talk about body language, power, and the imposter syndrome [15].

Caleb Chung gave an interesting TED talk about toy design which focussed on Pleo, a robotic dinosaur with an SD card and USB socket to allow easy reprogramming by the user [16].

Related posts:

  1. Links January 2012 Cops in Tennessee routinely steal cash from citizens [1]. They...
  2. Links January 2011 Halla Tomasdottir gave an interesting TED talk about her financial...
  3. Links March 2012 Washington’s Blog has an informative summary of recent articles about...

Syndicated 2013-01-04 11:47:41 from etbe - Russell Coker

Modern Swap Use

A while ago I wrote a blog post debunking the idea that swap should be twice as large as RAM [1]. The issue of swap size has come up for discussion in a mailing list so it seems that it’s worth another blog post.

Swap Size

In my previous post I suggested making swap space equal to RAM size for systems with less than 1G of RAM and half RAM size for systems with 2-4G of RAM, with a maximum of 2G for any system. Generally it’s better to have the kernel OOM handler kill a memory hungry process than to have the entire system go so slow that a hardware reset is needed. I wrote memlockd to lock the important system programs and the shared objects and data files they use into RAM to make it easier to recover from some bad paging situations [2], but even that won’t always make it possible to login to a thrashing system in a reasonable amount of time.
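The rule of thumb above can be sketched as a small Python function. Note that the post doesn’t say what to do for systems with between 1G and 2G of RAM, so halving RAM in that range is my assumption:

```python
def suggested_swap_gb(ram_gb):
    """Rough swap sizing per the rule above: swap equal to RAM below 1G,
    half of RAM for larger systems, capped at 2G for any system.
    (The 1-2G range isn't specified in the post; halving is an assumption.)"""
    if ram_gb < 1:
        return ram_gb
    return min(ram_gb / 2, 2.0)

print(suggested_swap_gb(0.5))  # 0.5 -> swap equal to RAM
print(suggested_swap_gb(4))    # 2.0 -> half of RAM
print(suggested_swap_gb(16))   # 2.0 -> capped at 2G
```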

But it’s not always good to reduce swap size. Sometimes if you don’t have enough swap then performance sucks in situations where it would be OK if there was more swap. One factor in this regard is that pages mapped read-only from files are discarded when there is great memory pressure and no swap space to page out the read-write pages. This can mean that you get thrashing of memory for executables and shared objects while there are lots of unused data pages in RAM that can’t be paged out. One noteworthy corner case in this regard is Chromium, which seems to take a lot of RAM if you have many tabs open for a long time.

Another factor is that there is a reasonable amount of memory allocated which will almost never get used (EG all the X based workstations which have 6 gettys running on virtual consoles which will almost never be used). So a system which has a large amount of RAM relative to its use (EG a single-user workstation with 8G of RAM) can still benefit from having a small amount of swap to allow larger caches. One workstation I run has 8G of RAM for a single user and typically has about 200M of the 1G swap space in use. It doesn’t have enough swap to make much difference if really memory hungry programs are run, but having 4G of memory for cache instead of 3.8G might make a difference to system performance. So even systems which can run without using any swap can still be expected to give better performance if there is some swap space for programs that are very inactive.
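One way to see how much swap a system like that is actually using is to read /proc/meminfo on Linux. A minimal sketch of a parser, using the documented SwapTotal and SwapFree fields:

```python
def swap_used_kb(meminfo_text):
    """Return swap in use (kB) given /proc/meminfo-style text."""
    fields = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        if rest:
            # values look like "SwapTotal:  1048576 kB"
            fields[name.strip()] = int(rest.split()[0])
    return fields["SwapTotal"] - fields["SwapFree"]

# On a real Linux system: swap_used_kb(open("/proc/meminfo").read())
sample = "SwapTotal:  1048576 kB\nSwapFree:   843776 kB\n"
print(swap_used_kb(sample))  # 204800, about 200M of the 1G swap in use
```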

I have some systems with as much as 4G of swap, but they are for corner cases where swap is allocated but there isn’t a lot of paging going on. For example I have a workstation with 3G of RAM and 4G of swap for the benefit of Chromium.

Hardware Developments

Disk IO performance hasn’t increased much when compared to increases in RAM size over the last ~15 years. A low-end 1988 era hard drive could sustain 512KB/s contiguous transfer rates and had random access times of about 28ms. A modern SATA disk will have contiguous transfer rates getting close to 200MB/s and random access times less than 4ms. So we are looking at about 400* the contiguous performance and about 10* the random access since 1988. In 1988 2MB was a lot of RAM for a PC, now no-one would buy a new PC with less than 4G – so we are looking at something like 2000* the RAM size in the same period.

When comparing with 1993 (when I first ran a Linux server 24*7) the system had 4M of RAM, maybe 15ms random access, and maybe 1MB/s contiguous IO – approximately double everything I had in 1988. But I’ll use the numbers from 1988 because I’m more certain of them, even though I never ran an OS capable of proper memory management on the 1988 PC.

In the most unrealistically ideal situation paging IO is now a factor of 5* more expensive relative to RAM size than it was in 1988 (RAM size increased by a factor of 2000, divided by a contiguous IO performance increase of 400). But in the more realistic situation of random IO it is about 200* more expensive relatively than it was. In the early 90s a Linux system with swap use equal or greater than RAM could perform well (my recollection is that a system with 4M of RAM and 4M of active swap performed well and the same system with 8M of active swap was usable). But nowadays there’s no chance of a system that has 4G of RAM and 4G of swap in active use being usable in any way unless you have some corner case of an application allocating RAM and not using it much.
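The relative-cost figures above follow directly from the rounded factors given in the text:

```python
# Rounded factors from the text: RAM grew ~2000*, contiguous IO ~400*,
# and random access ~10* between 1988 and now.
ram_factor = 2000
contig_factor = 400
random_factor = 10

# Relative cost of paging IO compared to 1988:
print(ram_factor / contig_factor)   # 5.0   (ideal case: contiguous IO)
print(ram_factor / random_factor)   # 200.0 (realistic case: random IO)
```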

Note that 4G of RAM doesn’t make a big system by today’s standards, so it might be reasonable to compare 2M in 1988 to 8G or 16G today. But I think that assuming 4G of RAM for the purpose of comparison makes my point.

Using Lots of Swap

It is probably possible to have some background Chromium tabs using 4G of paged out RAM with good system performance. I’ve had a system use 2G of swap for Chromium and perform well overall. There are surely many other ways that a user can launch processes that use a lot of memory and not use them actively. But this will probably require that the user know how the system works and alter their usage patterns. If you have a lot of swap in use and one application starts to run slowly you really don’t want to flip to another application which would only make things worse.

If you use tmpfs for /tmp then you need enough swap to store everything that is written to /tmp. There is no real upper limit to how much data that could be apart from the fact that applications and users generally don’t want to write big data files that disappear on reboot. I generally only use a tmpfs for /tmp on servers. Programs that run on servers tend not to store a large amount of data in /tmp and that data is usually very temporary, unlike workstation users who store large video files in /tmp and leave them there until the next reboot.
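For reference, a tmpfs /tmp is typically given a size limit so that runaway writes can’t consume all of RAM plus swap. A hypothetical fstab line (the size= value is an example, not a recommendation):

```
tmpfs  /tmp  tmpfs  size=512m,nodev,nosuid  0  0
```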

Are there any other common corner cases that allow using a lot of swap space without serious performance problems?

My Swap Space Allocation

When I set up Linux workstations for other people I generally use 1G of swap. For a system that often has multiple users (via the user-switch feature) I use 2G of swap if it only has 3G of RAM (I run many systems which are limited to 3G of RAM by their chipset).

For systems that I personally use I generally allocate 4G of swap; this caters to excessive RAM use by Chromium and also to the fact that I test out random memory hungry programs and can’t easily predict how much swap space will be needed – a typical end-user desktop system is a lot more predictable than a workstation used for software development.

For servers I generally allocate 512M of swap space. That’s enough to page out things that aren’t used and make space for cache memory. If that turns out to be inadequate then it’s probably best to just let the watchdog daemon reboot the system.

Related posts:

  1. Swap Space There is a wide-spread myth that swap space should be...
  2. Killing Servers with Virtualisation and Swap The Problem: A problem with virtual machines is the fact...
  3. Xen and Swap The way Xen works is that the RAM used by...

Syndicated 2012-12-31 00:07:47 from etbe - Russell Coker

Samsung Galaxy Camera – a Quick Review

I recently had a chance to briefly play with the new Samsung Galaxy Camera [1]. The Galaxy Camera is an Android device with a 4.8″ display (the same size as the Samsung Galaxy S3) that has a fairly capable camera (IE nothing like a typical phone camera). It runs Android 4.1 (Jelly Bean) and the camera has 21* zoom with a 16 megapixel sensor.

Camera Features

It seems that professional photographers are often annoyed when they see someone with a DSLR set in auto mode. It’s widely regarded that auto mode is a waste of a good camera, although the better lenses used with DSLRs will usually give a better result than any compact camera even when it’s in auto mode. The problem is that photography is quite complex; in an earlier post about digital cameras I summarised some of the technical issues related to cameras and even without any great detail it became a complex issue [2]. The Galaxy Camera has a reasonably friendly GUI for changing camera settings which even includes help on some of the terms. I expect that most people who use it will end up using most of the features, which could make it a good training camera for someone who is going to move to a DSLR. A DSLR version of the Galaxy Camera could also be an interesting product. The camera also has modes such as “Waterfall” and “Panorama”; hopefully the settings for those are exposed to the user so they can devise their own similar groups of settings.

I’ve seen the camera criticised for the lack of physical controls, as the expert mode in software is inherently slower than manually turning dials on a DSLR. But it seems obvious to me that anyone who knows how to use the controls manually should be using a DSLR or bridge camera, and anyone who doesn’t already know how to do such things will be better suited by the software controls.

It supports 120fps video at 720*480 resolution (with a file format stating that it’s 30fps to give 1/4 speed) which could be useful. I used to have an LG Viewty smart-phone that did 120fps video, but the resolution was too low to be useful. 720*480 is enough resolution to see some detail and has the potential for some interesting video; one use that I’ve heard of is filming athletes to allow them to analyse their performance in slow motion. It also does 60fps video at 720p (1280*720) resolution.

One down-side to the device is that the lens cover doesn’t seem particularly sturdy. It’s quite OK for a device that will be stored in a camera case but not so good for a device that will be used as a tablet. I didn’t get to poke at the lens cover (people don’t like it if you mess with their Christmas presents) but its design is a couple of thin flaps that automatically retract when the camera is enabled, which looks quite weak. I’d like to see something solid which doesn’t look like it will slide back if the device is treated as roughly as a phone.

I think that the lack of a solid lens cover could be the one factor that prevents it from being used as a replacement for a smart phone. Apart from that a Galaxy Camera and a cheap GSM phone could perform all the functions of a high end phone such as the Galaxy S3 while also producing great pictures. It would probably make sense for retailers to bundle a cheap phone with a Galaxy Camera for this purpose.

Tablet Features

The device boasts WiFi Direct to allow multiple cameras and phones to talk to each other without a regular WiFi access point [3]. I didn’t test this and I don’t think it would be particularly useful to me, but it seems like a handy feature for less technical users.

It can connect to the Internet via Wifi or 3G, supports automatic upload of pictures (it comes with Dropbox support by default like the Galaxy S3), and has a suite of photo and video editing software. I don’t expect that any photo editing software that runs on an Android device would be much good (I think that you really need fine cursor control with a mouse and a high resolution screen), but it would probably be handy for sending out a first draft of photos.

Most Android apps should just work, the exceptions being apps that rely on a camera that faces the user or full phone functionality. So the Galaxy Camera can do almost anything that an Android phone or tablet can do.

Value

The RRP for the Galaxy Camera is $599, which puts it in the same price range as a DSLR with a single lens. While that’s not a bad price when compared to smart-phones (it’s cheaper than the LTE version of the Galaxy S3 phone) it’s still quite expensive for a camera that’s not a DSLR.

Fortunately Kogan is selling it for $469 and has free shipping at the moment [4]. This still makes it more expensive than some Bridge Cameras which probably have significantly better optical features, but in terms of what the typical user can do with a camera the Galaxy Camera will probably give a much better result.

The sensor in the Galaxy Camera is smaller than that in the Nokia 808 PureView [5] (1/2.3″ vs 1/1.2″) so the Nokia PureView should be able to take better pictures in some situations. Unfortunately the Nokia 808 doesn’t run Android, I’d probably own one if it did.

Some of the reviews are rather harsh; the Verge has a harsh but fair review by Aaron Souppouris which makes a number of negative comparisons to cheaper cameras [6]. I really recommend reading Aaron’s review as there’s a lot of good information there. But I think that Aaron is missing some things, for example he criticises the inclusion of ebook software by saying that he wouldn’t read a book on a camera. But the device is a small tablet computer which also has a compact camera included. I can easily imagine someone reading a book or playing Angry Birds on their camera/tablet while in transit to where they will photograph something. I can also imagine a Galaxy Camera being a valuable tool for a journalist who wants to be able to write short articles and upload pictures and video when out of the office.

Aaron concludes by suggesting that the Galaxy Camera is a $200 camera with $300 of editing features. I think of it as $200 in camera hardware with software that allows less skilled users to take full advantage of the hardware and the ability to do all the software/Internet things that you would do on a $450+ smart-phone.

Would I Buy One?

No.

The Galaxy Camera is, among other things, a great toy; I’d love to have one to play with but I can’t spare $469 for one. Part of the reason for this is that my wife just bought a DSLR and is getting lessons from a professional photographer, so I really won’t get better pictures from a Galaxy Camera. The DSLR on auto mode will allow me to take pictures that will usually be better than a Galaxy Camera can achieve (sometimes you just can’t beat a good lens). For more demanding pictures my wife can tweak the DSLR. The 120fps video is a really nice feature; I don’t know if my wife’s DSLR can do that, but it’s a toy feature, not something I really need.

I’ve just bought a Galaxy S3 which is a great little tablet computer (most of the time it won’t be used for phone calls). I don’t need another 4.8″ tablet so a significant part of the use of the Galaxy Camera doesn’t apply to me.

I recommend the Galaxy Camera to anyone who wants to take good photos but can’t get a DSLR and lessons on how to use it properly. But if you would rather get a 35mm camera with interchangeable lenses that runs Android then it might be worth waiting. I expect that the Galaxy Camera will be a great success in the market (it’s something you will love when you see it). That will drive the development of similar products, if Samsung doesn’t release a 35mm Android camera soon then someone else will (for example Sony develops both high end cameras and Android phones).

If my wife didn’t have a DSLR then I’d probably have bought a Galaxy Camera already. I will recommend it to my parents and many other people I know who want an OK camera and can benefit from a tablet, but don’t know how to use a DSLR properly (or don’t want to carry a bulky camera).

Related posts:

  1. Samsung Galaxy S3 First Review with Power Case My new Samsung Galaxy S3 arrived a couple of days...
  2. A First Digital Camera I’ve just been asked for advice on buying a digital...
  3. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...

Syndicated 2012-12-29 14:35:14 from etbe - Russell Coker

SIM Annoyances

My new Samsung Galaxy S3 phone takes a micro-SIM (see the Wikipedia page about Subscriber Identity Modules for details [1]). All my other mobile devices take a mini-SIM so I can’t just put the SIM from my old phone in my new phone. I’ve just made my second application to Virgin Mobile for a new micro-SIM, the first time they sent me a nano-SIM by mistake. Also if my new phone breaks I will have difficulty in getting a micro-SIM to run in a mini-SIM device (it should be possible, but will be difficult at least). I may even have to ask my Telco to send me a new mini-SIM to allow the use of an older phone if my Galaxy S3 breaks.

The difference between mini and micro SIMs is 25*15mm vs 15*12mm. For a phone with a 4.8″ display this doesn’t seem to be a great benefit. It doesn’t seem that the phone would be much bigger if they had designed it for a mini-SIM. If they had used a nano-SIM, which is 0.09mm thinner, that might have allowed them to make the entire phone thinner; I think that such thin phones are a bad idea but the reviewers seem to like thin phones with small batteries. I think that any device which is large enough to have a speaker somewhere near my ear and a microphone somewhere near my mouth will be big enough to fit a 25mm long SIM card.

There seems to have been a trend in Australia over the last few years towards only selling unlocked phones on contracts (relying on the contract terms to lock the customer in) and only using locked phones for discount pre-paid sales. Also the “free” phones in Australia are considerably more expensive than buying a phone outright [2]. So I’m sure that most people have a collection of older phones which in future can’t be easily interchanged with new phones due to SIM card size differences.

This is more of a problem due to the fact that modern phones seem to have a low quality of hardware production, the majority of Android phones owned by my relatives have failed in some substantial way well within the two year replacement period. I don’t believe that I can rely on my Galaxy S3 working until I want to buy something better, I expect that my old Galaxy S (which has a hardware fault that makes it crash regularly but is mostly usable) will be pressed into service again.

In terms of phone purchases, buying a phone that takes a micro-SIM seems like a bad strategy as there is an even smaller nano-SIM available. For all I know Samsung will release a Galaxy S4 or Note 3 that uses a nano-SIM and give me the same upgrade problem in a couple of years’ time.

I wish that the people who reviewed phones would pay more attention to the real world use cases than to slightly better specifications. The micro and nano SIMs seem to provide no real benefit for users. Saving a fraction of a gram in device mass or a fraction of a millimeter in one dimension might seem nice when aggregated with other small savings, but for the user it’s no real benefit. Being able to interchange a bunch of random phones is a real benefit. It’s also a real benefit to be able to buy a phone and expect it to just work immediately. With the current state of the market one can’t buy a phone and make any assumptions about the SIM size that it will accept, at best this requires some extra research before buying and at worst this could involve some time without phone service.

I think that a major feature for a review of an important device like a modern smart phone should be how quickly and easily it can be deployed for full use. When SIM size issues prevent a new device from being used properly for over a week (as has happened for me) then it seems like a design failure.

Related posts:

  1. Changing Phone Prices in Australia 18 months ago when I signed up with Virgin Mobile...
  2. Back to the Xperia X10 10 months ago I was given a Samsung Galaxy S...
  3. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...

Syndicated 2012-12-28 02:11:53 from etbe - Russell Coker
