Older blog entries for etbe (starting at number 1059)

Nexus 4

My wife has had an LG Nexus 4 for about 4 months now, so it’s time for me to review it and compare it to my Samsung Galaxy S3.

A Sealed Case

The first thing to note about the Nexus 4 is that it doesn’t support changing the battery or adding micro-SD storage. The advantage of these design choices is reduced weight and greater strength compared to what the phone might otherwise be. Such choices also allow the phone to be slightly cheaper, which is a massive advantage: the Nexus 4 is significantly cheaper than any other device I can buy with comparable specs. My wife’s phone has 8G of storage and cost $369 at the start of the year, while the current price is $349 for the 8G version and $399 for the 16G version. Of course one down-side of this is that if you need 16G of storage then you have to spend an extra $50 on the 16G phone instead of buying the 8G phone and inserting a 16G micro-SD card, which costs $19 from OfficeWorks. Also there’s no option of using a 32G SD card (which costs less than $50) or a 64G SD card.

Battery etc

The battery on the Nexus 4 isn’t nearly big enough: when playing Ingress it lasts about half as long as my Galaxy S3, about 90 minutes to fully discharge. If it was possible to buy a bigger battery from a company like Mugen Power then the lack of battery capacity wouldn’t be such a problem. But as it’s impossible to buy a bigger battery (unless you are willing to do some soldering) the only option is an external battery.

I was unable to find a Nexus 4 case which includes a battery (probably because the Nexus 4 is a lot less common than the Galaxy S3) so my wife had to buy an external battery. If you are serious about playing Ingress with a Nexus 4 then you will end up with a battery in your pocket and a cable going from it to your phone, which is a real annoyance. While being a cheap, fast phone with a clear screen makes it well suited to Ingress, the issue of having a cable permanently attached is a real down-side.

One significant feature of the Nexus 4 is that it supports wireless charging. I have no immediate plans to use that feature and the wireless charger isn’t even on sale in Australia. But if the USB connector were to break then I could buy a wireless charger from the US and keep using the phone, while for every other phone I own a broken connector would render the phone entirely useless.

Screen Brightness

I have problems with my Galaxy S3 not being bright enough at midday when on “auto” brightness. I have problems with my wife’s Nexus 4 being too bright in most situations other than use at midday. Sometimes at night it’s painfully bright. The brightness of the display probably contributes to the excessive battery use. I don’t know whether all Nexus 4 devices are like this or whether there is some variance. In any case it would be nice if the automatic screen brightness could be tuned so I could make it brighter on my phone and less bright on my wife’s.

According to AndroSensor my Galaxy S3 thinks that the ambient light in my computer room is 28 lux while my wife’s Nexus 4 claims it’s 4 lux. So I guess that part of the problem is the accuracy of the light sensors in the phones.

On-Screen Buttons

I am a big fan of hardware buttons. Hardware buttons work reliably when your fingers are damp and can be used by feel at night. My first Android phone, the Sony-Ericsson Xperia X10, had three hardware buttons for settings, home, and back, as well as buttons for power, changing volume, and taking a photo, which I found very convenient. My Galaxy S3 has hardware buttons for power, home, and volume control. I think that Android phones should have more hardware buttons, not fewer. Unfortunately it seems that Google and the phone manufacturers disagree with me and the trend is towards fewer buttons. Now the Nexus 4 only has hardware buttons for power and volume control.

One significant advantage of the Galaxy S3 over the Nexus 4 is that the S3’s settings and back buttons, while not implemented in hardware, are outside the usable screen area. So the 4.8″ 1280*720 display is entirely available for application data, while the buttons for home, settings, and back on the Nexus 4 take up space on the screen, so only a subset of the 4.7″ 1280*768 display is usable by applications. While according to the specs the Nexus 4 has a screen almost as big as the Galaxy S3’s and a slightly higher resolution, in practice it has an obviously smaller screen with fewer usable pixels.

Also one consequence of having the buttons on-screen is that the “settings” button is often in the top right corner, which I find annoying. I didn’t like that aspect of the GUI the first time I used a tablet running Android 3.0 and I still don’t like it now.

GPS

My wife’s Nexus 4 seems to be much less accurate than my Galaxy S3 for GPS. I don’t know how much of this is due to phone design and how much is due to random variation in manufacturing. I presume that a large portion of it is manufacturing variation, because other people aren’t complaining about it. Maybe she just got unlucky with an inaccurate phone.

Shape and Appearance

One feature that I really like in the Samsung Galaxy S is that it has a significant ridge surrounding the screen, which makes it a lot less likely that the screen will get scratched if you place the phone face-down on a desk. The LG U990 Viewty had a similar ridge. Of course the gel case I have bought for every Android phone solves this problem, but it would really be nice to have a phone that I consider usable without needing to buy such a case. The Nexus 4 has a screen that curves at the edges, which if anything makes the problem worse than merely lacking a ridge. On the up-side the Nexus 4 looks and feels nice before you use it.

The back of the Nexus 4 sparkles. That’s nice, but when you buy a gel case (which doesn’t seem to be optional with modern design trends) you don’t get to see it.

The Nexus 4 is a very attractive package; it’s really a pity that they didn’t design it to be usable without a gel case.

Conclusion

Kogan is currently selling the Galaxy S3 with 16G of storage for $429. Compared to the 16G version of the Nexus 4 at $399, that’s a $30 premium for an SD socket, the option of replacing the battery, one more hardware button, and more screen space. So when comparing the Google offer on the Nexus 4 with the Kogan offers on the Galaxy S3, or on the Galaxy Note which also has 16G of storage and sells for $429, the Google offer doesn’t seem appealing to me.

The Nexus 4 is still a good phone and is working well for my wife, but she doesn’t need as much storage as I do. Also when she got her phone the Galaxy S3 was much more expensive than it is now.

Also Kogan offers the 16G version of the Nexus 4 for $389, which makes it more appealing when compared to the Galaxy S3. It’s surprising that they can beat Google on price.

Generally I recommend the Nexus 4 without hesitation to anyone who wants a very capable phone for less than $400 and doesn’t need a lot of storage. If you need more storage then the Galaxy S3 is more appealing. Also if you need to use a phone a lot then a Galaxy S3 with a power case works well in situations where the Nexus 4 performs poorly.

Related posts:

  1. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...
  2. Back to the Xperia X10 10 months ago I was given a Samsung Galaxy S...
  3. Samsung Galaxy S3 First Review with Power Case My new Samsung Galaxy S3 arrived a couple of days...

Syndicated 2013-05-29 05:17:14 from etbe - Russell Coker

Links May 2013

Cameron Russell (who works as an underwear model) gave an interesting TED talk about beauty [1].

Ben Goldacre gave an interesting and energetic TED talk about bad science in medicine [2]. A lot of the material is aimed at non-experts, so this is a good talk to forward to your less scientific friends.

Lev wrote a useful description of how to disable JavaScript for one site without disabling it for all sites, which was inspired by Snopes [3]. This may be useful some time.

Russ Allbery wrote an interesting post about work and success titled ‘The “Why?” of Work’ [4]. Russ makes lots of good points and I’m not going to summarise them (read the article, it’s worth it). There is one point I disagree with: he says “You are probably not going to change the world”. The fact is that I’ve observed Russ changing the world. He doesn’t appear to have done anything that will get him an entry in a history book, but he’s done a lot of good work in Debian (a project that IS changing the world) and his insightful blog posts and comments on mailing lists influence many people. I believe that most people should think of changing the world as a group project where they are likely to be one of thousands or millions who are involved; then you can be part of changing the world every day.

James Morrison wrote an insightful blog post about what he calls “Penance driven development” [5]. The basic concept of doing something good to make up for something you did that had a bad result (even if the bad result was inadvertent) is probably something that most people apply to some extent, but formalising it in the context of software development work is a concept I haven’t seen described before.

A 9yo boy named Caine created his own games arcade out of cardboard. When the filmmaker Nirvan Mullick saw it he created a short movie about it and promoted a flash mob event to play games at the arcade [6]. They also created the Imagination Foundation to encourage kids to create things from cardboard [7].

Tanguy Ortolo describes how to use the UDF filesystem instead of FAT for USB devices [8]. This allows you to create files larger than the 4GB limit of FAT32 while still allowing the device to be used on Windows systems. I’ll keep using BTRFS for most of my USB sticks though.

Bruce Schneier gave an informative TED talk about security models [9]. Probably most people who read my blog already have a good knowledge of most of the topics he covers. I think that the best use of this video is to educate less technical people you know.

Blaine Harden gave an informative and disturbing TED talk about the concentration camps in North Korea [10]. At the end he points out the difficult task of helping people recover from their totalitarian government that will follow the fall of North Korea.

Bruce Schneier has an interesting blog post about the use of a motherboard BMC controller (IPMI and similar) to compromise a server [11]. Also some “business class” desktop systems and laptops have similar functionality.

Russ Allbery wrote an insightful article about the failures of consensus decision-making [12]. He compares the Wikipedia and Debian methods so his article is also informative for people who are interested in learning about those projects.

The TED blog has a useful reference article with 10 places anyone can learn to code [13].

Racialicious has an interesting article about the people who take offense when it’s pointed out that they have offended someone else [14].

Nick Selby wrote an interesting article criticising the Symantec response to the NYT getting hacked and also criticising anti-virus software in general [15]. He raises a point that most of us already know: anti-virus software doesn’t do much good. Securing Windows networks is a losing game.

Joshua Brindle wrote an interesting blog post about security on mobile phones and the attempts to use hypervisors for separating data of different levels [16]. He gives lots of useful background information about how to design and implement phone based systems.

Related posts:

  1. Links March 2013 Russ Allbery wrote an informative post about how to determine...
  2. Links January 2013 AreWomenHuman has an interesting article about ViolentAcrez and the wide...
  3. Links February 2013 Aaron on Software wrote an interesting series of blog posts...

Syndicated 2013-05-28 10:39:53 from etbe - Russell Coker

SCSI Failures

For a long time SCSI was widely regarded as the interface for all serious drives that were suitable for “Enterprise Use” or for anything else which requires reliable operation, while IDE was for cheap disks that were only suitable for home use. The SCSI vs IDE issue continues to this day, but now we have SAS and SATA filling the same market niches, the main difference between the current debate and the debate a decade ago being that a SATA disk can be connected to a SAS bus.

Both SAS and SATA have a single data cable for each disk, which avoids the master/slave configuration of IDE and the issues of bus device ID numbers (from 0-7 or 0-15) and termination on SCSI.

Termination

When a high speed electrical signal travels through a cable some portion of the signal will be reflected from any end point or any point of damage. To prevent the signal reflecting from the end of a cable you can have a set of resistors (or some other terminating device) at the end of the cable; see the Terminator (electrical) Wikipedia page [1] for a brief overview. As an aside I think that page could do with some work, so if you are an EE with a bit of spare time then improving that page would be a good thing.

SCSI was always designed to have termination while IDE never was. I presume that this was largely due to the cable lengths (18″ for IDE vs 1.5m to 25m for SCSI) and the number of devices (2 for IDE vs 7 or 15 for SCSI). I also presume that some of the problems I’ve had with IDE systems were related to signal problems that could have been avoided with a terminated bus.

My first encounter with SCSI was when working for a small business that focused on WindowsNT software development. Everyone in the office knew a reasonable amount about computers and was happy to adjust the hardware of their own workstation. A room full of people who didn’t understand termination fiddling with SCSI buses tended to give a bad result. On the up-side I learned that a SCSI bus can work most of the time if you have a terminator in the middle of the cable and a hard drive at the end.

There have been two occasions when I’ve been at ground zero for a large deployment of servers from a company I’ll call Moon Computers. In both cases there were two particularly large and expensive servers in a cluster, and one of the cluster servers had data loss from bad SCSI termination. This is particularly annoying as the terminators have different colours; all that was needed to get the servers working was to change the hardware so that the two servers matched. As an aside, the company with no backups [2] had one of the servers with bad SCSI termination.

Heat

SCSI disks and now SAS disks tend to be designed for higher performance, which usually means greater heat dissipation. A disk that dissipates a lot of heat won’t necessarily work well in a desktop case with small and quiet fans. This can become a big problem if you have workstations running 24*7 in a hot place (such as any Australian city that’s not in Tasmania) and turn the air-conditioner off on the weekends. One of my clients lost a few disks before determining that IDE disks are the only option for systems that have to survive Australian heat without proper cooling.

Differences between IDE/SATA and SCSI/SAS

In 2009 I wrote about vibration and SATA performance [3]. Rumor has it that SCSI/SAS disks are designed to operate in environments with a lot of vibration (servers with lots of big fans and fast disks) while IDE/SATA disks are designed for desktop and laptop systems in quiet environments. One thing I’d like to do is to test the performance of SATA vs SAS disks in a server that vibrates.

SCSI/SAS disks have apparently been designed for operation in a RAID array and therefore give a faster timeout on a read error (so another disk can return the data), while IDE/SATA disks are designed for a non-RAID situation and will spend longer trying to read the data.

There are also various claims about the error rates of SCSI/SAS disks being better than those of IDE/SATA disks. I think that in all cases the error rates are small enough not to be a problem if you use a filesystem like ZFS or BTRFS, but large enough to be a significant risk with modern data volumes if you use a lesser filesystem.
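
To give a rough idea of the scale, here’s a back-of-envelope sketch in Python. The 1e-14 and 1e-15 unrecoverable-read-error rates are typical datasheet figures for desktop and enterprise disks respectively – assumptions on my part, not numbers from any particular drive:

    TB = 10 ** 12  # bytes

    def expected_errors(bytes_read, ure_per_bit):
        """Expected number of unrecoverable read errors for a given
        volume of data, given a vendor-quoted error rate per bit read."""
        return bytes_read * 8 * ure_per_bit

    # Reading an entire 4TB disk, as a RAID rebuild would:
    print(expected_errors(4 * TB, 1e-14))  # ~0.32 for a typical desktop disk
    print(expected_errors(4 * TB, 1e-15))  # ~0.03 for a typical enterprise disk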

Data Loss from Storage Failure

Of the data loss that I’ve personally observed from storage failures, the loss from SCSI problems (termination and heat) is about equal to all the hardware related data loss I’ve seen on IDE disks. Given that the majority of disks I’ve been responsible for have been IDE and SATA, that’s a bad sign for SCSI use in practice.

But all serious data loss that I’ve seen has involved the use of a single disk (no RAID) and inadequate backups. So a basic RAID-1 or RAID-5 installation will solve most hardware related data loss problems.

There was one occasion when heat caused two disks in a RAID-1 to give errors at the same time, but by reading from both disks I managed to get almost all the data back; RAID can save you from some extreme error conditions. That situation would have been ideal for BTRFS or ZFS to recover data.

Conclusion

SCSI and SAS are designed for servers; using them in non-server systems seems to be a bad idea. Using SATA disks in servers can have problems too, but not typically problems that involve massive data loss.

Using technology that is too complex for the people who install it seems risky. That includes allowing programmers to plug SCSI disks into their workstations, and whoever it was from Moon Computers or their resellers who apparently couldn’t properly terminate a SCSI bus. It seems that the biggest advantage of SAS over SCSI is that SAS is simple enough for most people to install correctly.

Making servers similar to the systems that the system administrators use at home seems like a really good idea. I think that one of the biggest benefits of using x86 systems as servers is that skills learned on home PCs can be transferred to administration of servers. Of course it would also be a good idea to have test servers that are identical to servers in production so that the sysadmin team can practice and make mistakes on systems that aren’t mission critical, but companies seem to regard that as a waste of money – apparently the risk of down-time is cheaper.

Related posts:

  1. Hot-swap Storage I recently had to decommission an old Linux server and...
  2. lifetime failures (LF) This morning at LCA Andrew Tanenbaum gave a talk about...
  3. Planning Servers for Failure Sometimes computers fail. If you run enough computers then you...

Syndicated 2013-05-28 08:24:05 from etbe - Russell Coker

Noise from Shaving

About 10 years ago I started using an electric shaver. An electric shaver is more convenient to use as it doesn’t require any soap, foam, or water. It is also almost impossible to cut yourself badly with an electric shaver, which is a major benefit for anyone who’s not particularly alert in the morning. Generally my experience of electric shavers has been good, although the noise is quite annoying.

Recently a friend told me that an electric shaver is as noisy as a chain-saw. Given the inverse-square law and the fact that the shaver operates within 1cm of my ears, that sounds plausible, so the risk of hearing loss is a real concern. Disposable ear plugs are very cheap and can be used multiple times (they don’t get particularly dirty while shaving or get squashed in the short time needed to shave). So for a few weeks I’ve been using ear plugs while shaving, which reduces the noise and presumably saves me from some hearing damage – although after 10 years of using electric shavers I may have already sustained some damage.

According to Cooper Safety their ear plugs reduce noise by 29dB [1]; I presume that the cheap ones I bought from Bunnings would be good for at least 15dB.

According to Better Hearing Sydney the noise from an electric shaver is typically around 90dB, less than the 100dB that is typical of a chain-saw [2]. So if my ear-plugs are good for 15dB then they reduce the noise from a typical electric shaver to 75dB, which is well below the 85dB level at which hearing damage starts. Given that the noise from a typical shaver is only slightly above the damage threshold, it seems that I might not need particularly good ear-plugs when shaving.
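
As an aside, the arithmetic is simple enough to sketch in a few lines of Python. The inverse-square part is a crude free-field approximation (a shaver held against your head isn’t a point source), and the 90dB, 15dB, and 85dB figures are just the ones quoted above:

    import math

    def level_at_distance(level_db, d_ref, d):
        """Sound level at distance d given the level at d_ref, assuming
        a point source and free-field inverse-square spreading."""
        return level_db - 20 * math.log10(d / d_ref)

    shaver_db = 90.0   # typical shaver, per Better Hearing Sydney
    earplug_db = 15.0  # my guess for cheap ear plugs

    print(shaver_db - earplug_db)               # 75dB, below the 85dB threshold
    print(level_at_distance(75.0, 0.01, 0.05))  # ~61dB at sideburn range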

A quick scan of shaver reviews indicates that the amount of noise differs by brand and technology. The Hubpages review suggests that rotary shavers tend to make less noise than foil shavers [3], but I’m sure that it varies enough between brands that some rotary shavers are louder than the quietest foil shavers. It seems that the best thing to do when buying a new shaver would be to go to a specialised shaver shop (which has many models on offer) and get the staff to demonstrate them to determine which is the quietest. If a typical shaver produces 90dB then it seems likely that one of the more quiet models would produce less than 85dB.

Another item on my todo list is to buy a noise meter to measure the amount of noise produced in the places where I spend time. There are some Android apps to measure noise; I’m currently playing with the Smart Tools Co Sound Meter [4] which gives some interesting information. The documentation notes that phone microphones are limited to the typical volume and frequencies of the human voice, so my Galaxy S3 can’t measure anything above 81dB. My wife’s Nexus 4 doesn’t seem to register anything above 74dB. Additionally there is some uncertainty about the accuracy of the microphone; there is a calibration feature but that requires another meter. Anyway the Sound Meter app suggests that my shaver (a Philips HQ7380/B) produces only 71dB at the closest possible range – and drops to 67dB at the range I would use if I grew sideburns.

Conclusion

Getting a proper noise meter to protect one’s hearing seems like a good idea. An Android app for measuring noise is a good thing to have; even though it’s not going to be accurate it’s convenient and will give an indication.

When buying a shaver one should listen to all the options and choose a quiet one (I might have got a quiet one by luck).

Sideburns seem like a good idea if you value your hearing.

Related posts:

  1. Testing Noise Canceling Headphones This evening I tested some Noise Canceling Headphones (as...
  2. Noise Canceling Headphones and People Talking The Problem I was asked for advice on buying headphones...
  3. Noise in Computer Rooms Some people think that you can recognise a good restaurant...

Syndicated 2013-05-23 09:56:22 from etbe - Russell Coker

No Backups WTF

Some years ago I was working on a project that involved a database cluster of two Sun E6500 servers that were fairly heavily loaded. I believe that the overall price was several million pounds. It’s the type of expensive system where it would make sense to spend adequately to do things properly in every way.

The first interesting thing was the data center where it was running. The front door had a uniformed security guard and a sign threatening immediate dismissal for anyone who left the security door open. The back door was wide open for the benefit of the electricians who were working there. Presumably anyone who had wanted to steal some servers could have gone to the back door and asked the electricians for assistance in removing them.

The system was poorly tested. My colleagues thought that with big important servers you shouldn’t risk damage by rebooting them. My opinion has always been that rebooting a cluster should be part of standard testing and that it’s especially important with clusters which have more interesting boot sequences. But I lost the vote and there was no testing of rebooting.

Along the way there were a number of WTFs in that project. One was when the web developers decided to force all users to install the latest beta release of Internet Explorer, a decision that was only revoked when the IE install process broke MS-Office on the PC of a senior manager. Another was putting systems with a default Solaris installation live on the Internet with all default services running; there’s never a reason for a database server to be directly accessible over the Internet.

No Backups At All

But I think that the most significant failing was the decision not to make any backups. This wasn’t merely a matter of forgetting to make backups: when I raised the issue I received a negative reaction from almost everyone. As an aside, I find it particularly annoying when someone implies that I want backups because I am likely to stuff things up.

There are many ways of proving that there’s a general lack of competence in the computer industry. But I think that one of the best is the number of projects where the person who wants backups has their competence questioned instead of all the people who don’t want backups.

A decision to make no backups relies on one of two conditions: either the service has to be entirely unimportant, or you need to have no OS bugs or hardware defects that can corrupt data, no application bugs, and a team of sysadmins who never make mistakes. The former raises the question of why the service is being run at all, and the latter is impossible.

As I’m more persistent than most people I kept raising the issue via email and adding more people to the CC list until I got a positive reaction. Eventually I CC’d someone who responded with “What the fuck”, which I consider to be a reasonable response to a huge and expensive project with no backups. However the managers on the CC list regarded the use of profanity in email as a much more serious problem. To the best of my knowledge there were never any backups of that system, but the policy on email was strongly enforced.

This is only a partial list of WTF incidents that assisted in my decision to leave the UK and migrate to the Netherlands.

Not Doing Much

About a year after leaving I returned to London for a holiday and had dinner with a former colleague. When I asked what he was working on he said “Not much”. It turned out that proximity to the nearest manager determined the amount of work that was assigned, and as his desk was a long way from the nearest manager he had spent about 6 months being paid to read Usenet. That wasn’t really a surprise given my observations of the company in question.

Related posts:

  1. Red Hat, Microsoft, and Virtualisation Support Red Hat has just announced a deal with MS for...
  2. The Security Benefits of Automation Some Random WTFs The Daily WTF is an educational and...
  3. Rackspace RHEL4 updates A default RHEL4 install of a Rackspace (*) server contains...

Syndicated 2013-05-21 09:43:37 from etbe - Russell Coker

Advice on Buying a PC

A common topic of discussion on computer users’ group mailing lists is advice on buying a PC. I think that most of the offered advice isn’t particularly useful with an excessive focus on building or upgrading PCs and on getting the latest and greatest. So I’ll blog about it instead of getting involved in more mailing-list debates.

A Historical Perspective – the PC as an Investment

In the late 80s a reasonably high-end white-box PC cost a bit over $5,000 in Australia (or about $4,000 without a monitor). That was cheaper than name-brand PCs, which cost upwards of $7,000, but was still a lot of money. $5,000 in 1988 would be comparable to $10,000 in today’s money. That made a PC a rather expensive item which needed to be preserved. There weren’t a lot of people who could just discard such an investment, so a lot of thought was given to upgrading a PC.

Now a quite powerful desktop PC can be purchased for a bit under $400 (maybe $550 if you include a good monitor) and a nice laptop is about the same price as a desktop PC and monitor. Laptops are almost impossible to upgrade apart from adding more RAM or storage but hardly anyone cares because they are so cheap. Desktop PCs can be upgraded in some ways but most people don’t bother apart from RAM, storage, and sometimes a new video card.

If you have the skill required to successfully replace a CPU or motherboard then your time is probably worth enough that squeezing more value out of a PC that was worth $400 when new, and maybe $100 when it’s a couple of years old, isn’t a good investment.

Times have changed and PCs just aren’t valuable enough to be worth upgrading. A PC is a disposable item, not an investment.

Buying Something Expensive?

There are a range of things that you can buy. You can spend $200 on a second-hand PC that’s a couple of years old, $400 on a new PC that’s OK but not really fast, or you can spend $1000 or more on a very high end PC. The $1000 PC will probably perform poorly when compared to a PC that sells for $400 next year. The $400 PC will probably perform poorly when compared to the second-hand systems that are available next year.

If you spend more money to get a faster PC then you are only getting a faster PC for a year until newer cheaper systems enter the market.

As newer and better hardware is continually being released at prices low enough to make upgrades a bad deal, I recommend just not buying expensive systems. For my own use I find that e-waste is a good source of hardware. If I couldn’t do that then I’d buy from an auction site that specialises in corporate sales; they have some nice name-brand systems in good condition at low prices.

One thing to note is that this is more difficult for Windows users due to “anti-piracy” features. With recent versions of Windows you can’t just put an old hard drive in a new PC and have it work. So the case for buying faster hardware is stronger for Windows than for Linux.

That said, $1,000 isn’t a lot of money, so spending more for a high-end system isn’t necessarily a big deal. But we should keep in mind that it’s just a matter of getting a certain level of performance a year before it is available in cheaper systems. Getting a $1,000 high-end system instead of a $400 cheap system means getting that level of performance maybe a year earlier, at a price premium of roughly $2 per day. I’m sure that most people spend more than $2 per day on more frivolous things than a faster PC.

Understanding How a Computer Works

As so many things are run by computers I believe that everyone should have some basic knowledge of how computers work. But a basic knowledge of computer architecture isn’t required when selecting parts to assemble into a system: one can know all about selecting a CPU and motherboard to match without understanding what a CPU does (apart from a vague idea that it’s something to do with calculations). Equally, one can have a good knowledge of how computers work without knowing anything about the part numbers that could be assembled to make a working system.

If someone wants to learn about the various parts on sale then sites such as Tom’s Hardware [1] provide a lot of good information that allows people to learn without the risk of damaging expensive parts. In fact the people who work for Tom’s Hardware frequently test parts to destruction for the education and entertainment of readers.

But anyone who wants to understand computers would be better off spending their time using any old PC to read Wikipedia pages on the topic instead of spending their time and money assembling one PC. To learn the basics of computer operation the Wikipedia page for “CPU” is a good place to start; the page for “hard drive” is a good start for learning about storage, and the page for “Graphics Processing Unit” covers graphics processing. Anyone who reads those three pages as well as a selection of the pages they link to will learn a lot more than they could ever learn by assembling a PC. Of course there are lots of other things to learn about computers, but Wikipedia has pages for every topic you can imagine.

I think that the argument that people should assemble PCs to understand how they work was not well supported in 1990 and ceased to be accurate once Wikipedia became popular and well populated.

Getting a Quality System

There are a lot of arguments about quality and reliability, most without any supporting data. I believe that a system designed and manufactured by a company such as HP, Lenovo, NEC, Dell, etc is likely to be more reliable than a collection of parts uniquely assembled by a home user – but I admit to a lack of data to support this belief.

One thing that is clear however is that ECC RAM can make a significant difference to system reliability, as many types of error (including power problems) show up as corrupted memory. The cheapest Dell PowerEdge server (which has ECC RAM) is advertised at $699, so it’s not a feature that’s out of reach of regular users.

I think that anyone who makes claims about PC reliability and fails to mention the benefits of ECC RAM (as used in Dell PowerEdge tower systems, Dell Precision workstations, and HP XW workstations among others) hasn’t properly considered their advice.

Also when discussing overall reliability the use of RAID storage and a good backup scheme should be considered. Good backups can do more to save your data than anything else.

Conclusion

I think it’s best to use a system with ECC RAM as a file server. Make good backups. Use ZFS (in future BTRFS) for file storage so that data doesn’t get corrupted on disk. Use reasonably cheap systems as workstations and replace them when they become too old.

Update: I find it rather ironic when a discussion about advice on buying a PC gets significant input from people who are well paid for computer work. It doesn’t take long for such a discussion to take enough time that the people involved could have spent that time working instead, put enough money in a hat to buy a new PC for the user in question, and still have had money left over.

Related posts:

  1. Buying Old PCs I install quite a number of internet gateway machines for...
  2. Buying a Laptop from Another Country Mary Gardiner has written a lazyweb post asking about how...
  3. IT Recruiting Agencies – Advice for Contract Workers I read an interesting post on Advogato about IT recruiting...

Syndicated 2013-05-21 04:58:03 from etbe - Russell Coker

Voltage Inside a Car

I previously wrote a post with some calculations about the power supplied to laptops from a car battery [1]. A comment on the post suggested that I might have made a mistake in testing the Voltage because leaving the door open (and thus the internal lights on) will cause a Voltage drop.

So I’ve done some more tests:

Test                                                        Voltage
battery terminals                                           12.69V
front power socket with doors closed                        12.64V
front power socket with doors open or ignition switch on    12.37V
cigarette lighter socket with ignition switch on            12.32V
front power socket with doors closed and headlights on      11.96V
front power socket with engine running                      14.38V
front power socket with engine running and headlights on    14.29V

In my previous tests I recorded 12.85V inside my car (from the front power socket, which although it has the same connector as a cigarette lighter isn’t designed for lighting cigarettes) and 13.02V from the battery terminals – a 0.17V difference. In today’s tests I was unable to reproduce that, but I think that my biggest mistake was taking the readings too quickly. Today I noticed that it took up to a minute for the Voltage to stabilise after opening a door (the Voltage dips after any current draw and takes time to recover), so a quick reading isn’t going to be accurate.

My car is a Kia Carnival which has two sockets in the front: one for power and one for actually lighting cigarettes. The one for lighting cigarettes has a slightly lower Voltage and only works when the ignition is turned on. The car also has a power socket in the boot (the trunk for US readers) which delivers the same Voltage as the front power socket.

Also one thing to note is that today is a reasonably cold day (16.5C outside right now) and my car hasn’t been driven since last night so the battery would be quite cold (maybe 12C or less). My previous measurements were taken in summer so the battery would have been a lot warmer and therefore working more effectively.

Conclusion

The Voltage drop from turning on the internal lights surprised me; I had expected that a car battery, which is designed to supply high current, wouldn’t be affected by such things – certainly not to the extent of a 2% Voltage drop! The Voltage difference between readings inside the car and at the battery terminals might be partly due to the apparent lead coating on the terminals: I pushed the probes of my multimeter beneath the surface of the metal and got a really good connection.

The 14% Voltage increase when the engine was running was also a surprise. It seems to me that if you are running a power hungry device (such as a laptop) it would be a good idea to disconnect it when the engine is turned off. If the PSU is efficient it draws roughly constant power, so a 14% higher Voltage gives about 12% lower current and therefore fewer problems with heat in the wiring and less risk of blowing a fuse.
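
As a sketch of that reasoning, here is the calculation in Python for a hypothetical constant-power load (the 60W figure is made up for illustration, not a measurement):

    laptop_watts = 60.0  # hypothetical constant-power load

    for label, volts in (("engine off", 12.64), ("engine running", 14.38)):
        amps = laptop_watts / volts
        print("%s: %.2fV -> %.2fA" % (label, volts, amps))

    # Resistive heating in the wiring scales with the square of the current,
    # so the ~12% current reduction cuts heating in the wires by over 20%.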

Also it’s a good idea to be more methodical about performing tests than I was before my last post. There are lots of other tests I could run (such as testing after the engine has been running for a while) but at the moment I don’t have enough interest in this topic to do more tests. Please leave a comment if there’s something interesting that you think I missed.

Related posts:

  1. Power Supplies and Wires For some time I’ve been wondering how the wire size...
  2. paper about ZCAV This paper by Rodney Van Meter about ZCAV (Zoned Constant...
  3. Perpetual Motion It seems that many blog posts related to fuel use...

Syndicated 2013-05-17 02:57:47 from etbe - Russell Coker

Effective Conference Calls

I’ve been part of many conference calls for work and found them seriously lacking. Firstly there’s a lack of control over the call: when someone does something stupid like putting an unmuted phone handset near a noise source there’s no way to discover who did it and disconnect them.

Another problem is noise on the line when some people don’t mute their phones, which is related to the lack of control as it’s impossible to determine who isn’t muting their phone.

Possibly the biggest problem is determining who gets to speak next. When group discussions take place in person, non-verbal methods are used to determine who speaks next. With a regular phone call (two people) something like the CSMA/CD algorithm for network packets works well. But when there are 8+ people involved it becomes time consuming to resolve who speaks next even when there are no debates, and this is more difficult on multinational calls, which can have a signal round trip time of 700ms or more.

I think that we need a VOIP based conference call system for smart phones to manage this. I think that an ideal system would be based on the push-to-talk concept, with software control that only allows one phone to transmit at a time. If someone else is speaking and you want to say something then you would push a button to indicate your desire, but your microphone wouldn’t go live while the other person was speaking. The person speaking would be notified of your request and one of the following things would happen (a minimal sketch of the queueing logic appears after the list):

  • You are added to the queue of people wishing to speak. When the other person finished speaking the next person in the queue gets a turn.
  • You are added to the queue and the moderator of the call chooses who gets to speak next. This isn’t what I’d prefer but would probably be desired by managers for corporate calls.
  • You get to interrupt the person who’s speaking. This may not be ideal but is similar to what currently happens.
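
Here is a minimal sketch in Python of the first-come-first-served arbitration described above; all the names are hypothetical, as far as I know there is no real API for this:

    from collections import deque

    class ConferenceFloor:
        """One live microphone at a time; everyone else queues for the floor."""

        def __init__(self):
            self.speaker = None
            self.queue = deque()

        def request_to_talk(self, caller):
            if self.speaker is None:
                self.speaker = caller      # floor is free: mic goes live
            else:
                self.queue.append(caller)  # queued, mic stays muted

        def finished_talking(self):
            # First-come-first-served; a moderated variant would let a
            # chairperson pick any entry from self.queue instead.
            self.speaker = self.queue.popleft() if self.queue else None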

Did I miss any obvious ways for the system to react to a talk request?

Is there any free software to do something like this? A quick search of the Google Play store didn’t find anything that seems to match.

Related posts:

  1. Globalisation and Phone Calls I just watched an interesting TED talk by Pankaj Ghemawat...
  2. Phone Calls and Other Distractions Harald Welte has written about the distraction of phone calls...
  3. Talking Fast My previous post about my LCA mini-conf talk received an...

Syndicated 2013-05-17 01:53:47 from etbe - Russell Coker

Geographic Sorting – Lessons to Learn from Ingress

I’ve recently been spending a bit of my spare time playing Ingress (see the Wikipedia page if you haven’t heard of it). A quick summary is that Ingress is an Android phone game that involves geo-location of “portals” that you aim to control, and most operations on a portal can only be performed when you are within 40 meters – so you do a lot of travelling to get to portals at various locations. One reasonably common operation that can be performed remotely is recharging a portal by using its key; after playing for a while you end up with a collection of keys which can be difficult to manage.

Until recently the list of portal keys was ordered alphabetically. This isn’t particularly useful given that portal names are made up by random people who photograph things that they consider to be landmarks. Even if people tried to use a consistent geographic naming system (short enough to fit in large print on a phone display) it would be really difficult to make it usable, and as joke names are accepted there’s just no benefit in sorting by name.

A recent update to the Ingress client (the program which runs on the Android phone and is used for all game operations) changed the sort order to be by distance. This makes it really easy to see the portals which are near you (which is really useful) but also means that the order changes whenever you move – which isn’t such a good idea on a mobile phone. It’s quite common for Ingress players to recharge portals while on public transport, but with the new client the list order changes as the train moves, and it’s really difficult to find items in a list which is in a different order each time you look at it.

This problem of ordering by location has a much greater scope than Ingress. One example is collections of GPS tagged photographs: it wouldn’t make any sense to mix two different sets of holiday pictures just because they were taken in countries that are the same distance from my current location (as the current Ingress algorithm would do).

It seems to me that the best way of sorting geo-tagged items (Ingress portals, photos, etc) is to base it on the distance from a fixed point which the user can select. It could default to the user’s current location, but in that case the order of the list should remain unchanged at least until the user returns to the main menu, and ideally until the user explicitly requests a re-sort.
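
A sketch of how such a sort could work, using the standard haversine formula and made-up portal data (Ingress obviously doesn’t expose its portal list like this):

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two lat/lon points."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    def sort_by_anchor(portals, anchor_lat, anchor_lon):
        """Sort against a fixed user-chosen point so the order doesn't
        churn as the user moves, unlike sorting against current location."""
        key = lambda p: haversine_km(anchor_lat, anchor_lon, p["lat"], p["lon"])
        return sorted(portals, key=key)

    # Hypothetical data:
    portals = [{"name": "Flinders St clocks", "lat": -37.8183, "lon": 144.9671},
               {"name": "Shrine of Remembrance", "lat": -37.8305, "lon": 144.9734}]
    print([p["name"] for p in sort_by_anchor(portals, -37.8136, 144.9631)])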

I think that most Ingress players would agree with me that fixing annoying mis-features of the Ingress client such as this one would be better for the game than adding new features. While most computer games have some degree of make-work (in almost every case a computer could do things better than a person) I don’t think that finding things in a changing list should be part of the make-work.

Also it would be nice if Google released some code for doing this properly, to reduce the incidence of other developers making the same mistakes as the Ingress developers in this regard.

Related posts:

  1. Ingress Today Google sent me an invite for Ingress – their...
  2. Security Lessons from a Ferry On Saturday I traveled from Victoria to Tasmania via the...
  3. Cyborgs solving Protein Folding problems Arstechnica has an interesting article about protein folding problems being...

Syndicated 2013-05-11 13:38:04 from etbe - Russell Coker
