Older blog entries for etbe (starting at number 1007)

Asperger Syndrome – Disability vs “Over Pathologising”

Is Asperger Syndrome a Disability?

Some people tell me that I’m disabled. Usually it’s an unstated implication, such as referring to Asperger Syndrome as a disability with the assumption that I’ll agree. One time someone assumed that I had never had a paid job because they knew I’m an Aspie; maybe I should boast more about my career successes.

One interesting take on this is represented by Maco’s post about Disablism/Ablism where she says “Vocab note: A person has an impairment. Society’s treatment of that impairment is what disables the person” [1]. The same concept is presented by BRAINHE in their Social Model of Disability document [2].

The Wikipedia page on Ableism says “The ableist worldview holds that disability is an error, a mistake, or a failing, rather than a simple consequence of human diversity, akin to race, ethnicity, sexual orientation or gender” [3]. This is fairly close to the position that Neurodiversity [4] advocates take on Autism.

Jaarsma P and Welin S wrote an interesting paper titled “Autism as a Natural Human Variation: Reflections on the Claims of the Neurodiversity Movement” [5] which considers these issues in depth and comes to the conclusion that High Functioning Autism (for which Asperger Syndrome is generally regarded as a synonym) is a difference while Low Functioning Autism is a disability.

I think that generally we should accept the opinion of the person in question. Someone who is unable to communicate or is too young to make an informed decision could have their disability status determined by carers. But anyone who is capable of making an informed decision and communicating it should have their opinion respected. I am not going to argue with any of the people who claim that they are disabled due to an Autism Spectrum Disorder. But I don’t think that I am disabled and I think that people shouldn’t argue with me about this.

Over Pathologising

Lynne Soraya wrote an interesting article for Psychology Today about one aspect of the supposed over-diagnosis [6]. She responds to Paul Steinberg, a psychiatrist who made a number of claims about Asperger Syndrome which lack evidence. Paul’s main idea seems to be that anyone who has social problems but who seems to be successful regardless shouldn’t have an Autism Spectrum diagnosis; he claims that such people should be regarded as having a “social disability” instead. He appears to believe that having a diagnosis is a bad thing, but his proposed “social disability” diagnosis doesn’t seem any better.

In many other discussions I’ve seen people claim that a large number of diagnoses of anything is a problem. Their idea seems to be that the vast majority of the population shouldn’t have a diagnosis for anything and that whenever a significant number of people are diagnosed with a psychological condition (and 1% of the population seems to be a significant number) then it’s a problem. I don’t think that having a large portion of the population diagnosed is necessarily a problem; it would be OK if the majority of the population was diagnosed with something. The issue is not whether people are diagnosed but what happens after the diagnosis.

When a child is diagnosed their parents can help them deal with whatever the issues are – this may or may not require further involvement with psychologists or special schools. For the milder cases (of Autism, ADHD, and other conditions) merely knowing what areas will cause difficulty and teaching kids how to deal with them will be enough to solve many problems. When someone is diagnosed as a child but doesn’t have obvious symptoms as an adult that is more likely to be an indication that they were taught good coping mechanisms and protected from bad situations as a child – not that the diagnosis was wrong. There are some serious issues with special schools and psychiatric drugs, but diagnosis doesn’t necessarily imply mistreatment and avoiding a diagnosis is not the correct way to avoid such mistreatment.

When someone is diagnosed as an adult they have to learn to deal with it. The general lack of psychologists (waiting times as long as 6 months are common) and the fact that most psychologists won’t do any good for someone on the Autism Spectrum are real problems. But merely knowing the source of your problems is a major step towards alleviating or solving them.

One of the arguments that is commonly used against so-called over-diagnosis is that adults don’t show apparent symptoms. The issue here is that with some effort and planning adults on the spectrum can act like NTs. Acting like an NT doesn’t imply being an NT; it usually requires a lot of ongoing effort that could be applied to other things if society didn’t expect us to act like NTs all the time. Here is a Youtube video explaining how hard it is to act like an NT [7]; I think it’s worse for women, as the social expectations for women are generally a lot higher than for men.

Conclusion

I wish people would stop telling me that I’m either disabled or too “high functioning” to be on the Autism Spectrum. I will never think like an NT and I don’t want to, so I’ll always be an Aspie. By most objective measures I’m at least as successful as the general population in all things that require social skills, so unless something like always losing at Poker is considered a disability I don’t think it’s reasonable to consider me disabled.

It would be nice if I could lock the people who claim that Autism is always a disability in a room with the people who think it’s over-diagnosed and let them debate it; no matter which side lost the debate, the result would be good!

Related posts:

  1. Autism vs Asperger Syndrome Diagnostic Changes for Autism Spectrum Disorders Currently Asperger Syndrome (AS)...
  2. Is Asperger Syndrome a Good Thing? A meme that keeps going around is that Asperger Syndrome...
  3. Autism Awareness and the Free Software Community It’s Autism Awareness Month April is Autism Awareness month, there...

Syndicated 2012-10-02 10:22:34 from etbe - Russell Coker

Links September 2012

Scott Adams has an interesting idea for political debates that he calls the “Fact Bubbler” [1]. It sounds implausible the first time you read it, but then so did Wikipedia.

Arstechnica has an interesting article on software defined radio [2]. This could change many things.

The Nieder Family has another update on the way patents threaten their daughter’s ability to communicate [3]. Also Apple is making things worse by taking pre-emptive strikes against iPhone apps which are involved in legal disputes – never use an iPhone (or other proprietary system) for anything important if you have a choice.

In more positive news crowd-funded gene sequencing has found an explanation for Maya Nieder’s developmental delays [4]. This will revolutionise medicine!

Steven Cherry of IEEE Spectrum has an interesting interview with Peter Cappelli about the difficulties that computer and engineering companies have in hiring talented people [5].

The Guardian has an interesting article about the non-profit investigative reporting organisation ProPublica [6]. See the ProPublica.org site for some quality news reports [7].

Michael O. Church wrote an interesting and insightful article about the fate of a “Just a Programmer” in a startup funded by Venture Capitalists [8]. It doesn’t sound good at all.

EyeNetra is developing a smart-phone based system for testing eyes [9]. It’s apparently possible to manufacture glasses for $0.75, so the cost and difficulty of performing eye tests is the main factor that prevents poor people in developing countries from getting glasses. A cheap portable eye testing system is therefore going to help many people get the glasses they need.

Bryan Gardiner wrote an interesting article for Wired about the “Gorilla” glass that is used in most mobile phones and tablets [10].

Maco wrote an interesting article about crochet and reverse-engineering along with a Python program to print a crochet pattern [11]. I wonder whether anyone has tried to make a crochet robot, something like a 3D printer but which crochets things rather than printing them. Holding the wool would be a real challenge; it’s not nearly as easy as printing on something that’s stuck down and incapable of movement.

  [1] http://dilbert.com/blog/entry/fact_bubbler/
  [2] http://arstechnica.com/tech-policy/2012/07/how-software-defined-radio-could-revolutionize-wireless/
  [3] http://niederfamily.blogspot.com.au/2012/06/silencing-of-maya.html
  [4] http://blog.ted.com/2012/07/17/newly-discovered-gene-may-explain-4-year-olds-rare-disease-thanks-to-ted-fellow-jimmy-lin/
  [5] http://spectrum.ieee.org/podcast/at-work/tech-careers/why-bad-jobsor-no-jobshappen-to-good-workers
  [6] http://www.guardian.co.uk/technology/2012/jul/02/propublica-investigative-reporting
  [7] http://www.propublica.org/
  [8] http://michaelochurch.wordpress.com/2012/07/08/dont-waste-your-time-in-crappy-startup-jobs/
  [9] http://eyenetra.com/
  [10] http://www.wired.com/wiredscience/2012/09/ff-corning-gorilla-glass/all/
  [11] http://ubuntulinuxtipstricks.blogspot.com.au/2010/10/algorithms-reverse-engineering-and.html

Related posts:

  1. Links April 2012 Karen Tse gave an interesting TED talk about how to...
  2. Links May 2012 Vijay Kumar gave an interesting TED talk about autonomous UAVs...
  3. Links July 2012 The New York Times has an interesting article about “hacker...

Syndicated 2012-09-30 13:44:00 from etbe - Russell Coker

Server Use Per Person

I’ve just read Diego’s response to an ill-informed NYT article about data-center power efficiency [1]. This makes me wonder: how much server use does each person account for?

Google

Almost everyone uses Google, and most of us use it a lot. Google’s main product, the search engine, is probably also the most demanding.

In a typical day I probably do about 50 to 100 Google searches. That sounds like a lot, but half of them would probably be for one topic that is difficult to find. I don’t think I do an unusually large number of searches, because I generally know what I’m doing, and when I find what I need I spend a lot of time reading it. I’m sure that many people do a lot more.

Each Google search takes a few seconds to complete (or maybe more if it’s an image search and I’m on a slow link), but I think it’s safe to assume that more than a few seconds of CPU time are involved. How much work would each Google search take if performed on a single system? Presumably Google uses the RAM of many systems as cache, which gives a result more similar to a NUMA system than to one regular server working for a longer time, so there is no way of asking how long a Google search would take on a single server. But I’m sure that Google has some ratio of servers to the rate of incoming requests; it’s almost certainly a great secret, but we can make some guesses. If the main Google user base comprises people who collectively average 100 searches per day then we can guess at the amount of server use required for each search based on the number of servers Google would run. I think it’s safe to assume that Google doesn’t plan to buy one server for every person on the planet and that they want to have users significantly outnumbering servers. So even for core users they should be aiming to have each user only take a fraction of the resources that one server adds to the pool.

So 100 searches probably each take more than 1 second of server use. But they almost certainly take a lot less than 864 seconds (the server use if Google had one server for every 100 daily requests, which would imply one server for each of the heavier users). Maybe it takes 10 seconds of server use (CPU, disk, or network – whichever is the bottleneck) to complete one search request. That would mean that if the Google network was at 50% utilisation on average then they would have 86400*0.5/10/100 == 43 users per server for the core user base who average 100 daily requests. If there are 80M core users that would be about 2M servers, and then maybe something like another 4M servers for the rest of the world.
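
To make the guesswork explicit, here is a minimal Python sketch of the same back-of-envelope calculation; every input is a guess from this post rather than a real Google figure, so change the inputs and the server count follows:

  # Back-of-envelope estimate of Google users per server, all inputs guessed.
  SECONDS_PER_DAY = 86400
  utilisation = 0.5        # assume the server fleet averages 50% busy
  secs_per_search = 10     # guessed server time consumed per search
  searches_per_day = 100   # daily searches by a heavy "core" user

  busy_secs_per_server = SECONDS_PER_DAY * utilisation
  users_per_server = busy_secs_per_server / (secs_per_search * searches_per_day)
  print(users_per_server)               # 43.2 core users per server

  core_users = 80e6
  print(core_users / users_per_server)  # ~1.85M servers for the core users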

So I could be using 1000 seconds of server time per day on Google searches. I also have a Gmail account which probably uses a few seconds for storing email and giving it to Fetchmail, and I have a bunch of Android devices which use Google Calendar, the Play Store, etc. The total Google server use on my behalf for everything other than search is probably a rounding error.

But I could be out by an order of magnitude; if it only took 1 second of server use for a Google search then I would be at 100 server seconds per day and Google would only need one server for every 430 users like me.

Google also serves lots of adverts on web sites that I visit; I presume that serving the adverts doesn’t take many resources by Google standards. But accounting for it, paying the people who host content, and detecting fraud probably take some significant resources.

Other Big Services

There are many people who spend hours per day using services such as Facebook. No matter how I try to estimate the server requirements it’s probably going to be fairly wrong. But I’ll make a guess at a minute of server time per hour. So someone who averages 3 hours of social networking per day (which probably isn’t that uncommon) would be using 180 seconds of server time.

Personal Servers

The server that hosts my blog is reasonably powerful and has two other people as core users. So that could count as 33% of a fairly powerful server in my name. But if we are counting server use per USER then most of the resources of my blog server would be divided among the readers. My blog has about 10,000 people casually reading it through Planet syndication, so each person who casually reads my blog could have 1/30,000 of a server allocated to them for that. Another way of considering it is that 10% of a server (8640 seconds) is covered by me maintaining my blog and writing posts, 20% is for users who visit my blog directly, and 3% is for the users who just see a Planet feed. That would mean that a Planet reader gets 1/330,000 of a server (250ms per day) and someone who reads directly gets 1/50,000 of a server (1.72s per day), as I have about 10,000 people visiting my blog directly in a month.
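
The same allocation as a sketch, using the percentages and reader counts guessed above:

  # Dividing my share of the blog server among its readers.
  SECONDS_PER_DAY = 86400

  planet_share, planet_readers = 0.03, 10000   # Planet feed readers
  print(planet_share / planet_readers)                    # ~1/330,000 of a server
  print(SECONDS_PER_DAY * planet_share / planet_readers)  # ~0.26s per day each

  direct_share, direct_readers = 0.20, 10000   # monthly direct visitors
  print(direct_share / direct_readers)                    # 1/50,000 of a server
  print(SECONDS_PER_DAY * direct_share / direct_readers)  # ~1.73s per day each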

My mail server is also shared by a dozen or so people (maybe that counts as 5% of a server for me, or 4320 seconds per day). Then there’s the server I use for SE Linux development (including my Play Machine) and a server I use as a DNS secondary and a shell server for various testing and proxying.

Other People’s Servers

If every reader of a Planet instance like Planet Debian and Planet Linux Australia counts as 1/330,000 of a server for their usage of my blog, then how would that count for my own use of blogs? I tend to read blogs written by the type of people who like to run things themselves, so there would be a lot of fairly under-utilised servers that run blogs. Through Planet Debian and Planet Linux Australia I could be reading 100 or more blogs which are run in the same manner as mine, and in a typical day I probably directly visit a dozen blogs that are run in such a manner. This could give me 50 seconds of server time for blog reading.

Home Servers

I have a file server at home which is also a desktop system for my wife. In terms of buying and running systems that doesn’t count as an extra server as she needs to have a desktop system anyway and using bigger disks doesn’t make much difference to the power use (7W is the difference between a RAID-1 server and a single disk desktop system). I also have a PC running as an Internet gateway and firewall.

Running servers at home isn’t making that much of an impact on my computer power use as there is only one dedicated 24*7 server and that is reasonably low power. But having two desktop systems on 24*7 is a significant factor.

Where Power is Used/Wasted

No matter how things are counted or what numbers we make up it seems clear that having a desktop system running 24*7 is the biggest use of power that will be assigned to one person. Making PCs more energy efficient through better hardware design and better OS support for suspending would be the best way of saving energy. Nothing that can be done at the server side can compare.

Running a server that is only really used by three people is a significant waste by the standards of the NYT article. Of course the thing is that Hetzner is really cheap (and I’m not contributing any money) so there isn’t a great incentive to be more efficient in this regard. Even if I allocate some portion of the server use to blog readers then there’s still a significant portion that has to be assigned to me for my choice to not use a managed service. Running a mail server for a small number of users and running a DNS server and a SE Linux development server are all ways of wasting more power. But the vast majority of the population don’t have the skills to run their own server directly, so this sort of use doesn’t affect the average power use for the population.

Nothing else really matters. No matter what Google does in terms of power use it just doesn’t matter when compared to all the desktop systems running 24*7. Small companies may be less efficient, but that will be due to issues of how to share servers among more people and the fact that below a certain limit you can’t save money by using fewer resources – particularly if you pay people to develop software.

Conclusion

I blame Intel for most of the power waste. Android phones and tablets can do some amazing things, which is hardly surprising as by almost every measure they are more powerful than the desktop systems we were all using 10 years ago and by many measures they beat desktop systems from 5 years ago. The same technology should be available in affordable desktop systems.

I’d like to have a desktop system running Debian based on a multi-core ARM CPU that can drive a monitor at better than FullHD resolution and which uses so little power that it is passively cooled almost all the time. A 64bit ARM system with 8G of RAM, a GPU that can decode video (with full Linux driver support), and a fast SSD should compete well enough with typical desktop systems on performance while being quiet, reliable, and energy efficient.

Finally please note that most of this post relies on just making stuff up. I don’t think that this is wrong given the NYT article that started this. I also think that my estimates are good enough to draw some sensible conclusions.

Related posts:

  1. Name Server IP and a Dead Server About 24 hours ago I rebooted the system that runs...
  2. Google Custom Search Engine I’ve just been experimenting with Google Custom Search [1]. Below...
  3. Google Server Design Cnet has an article on the design of the Google...

Syndicated 2012-09-26 15:37:05 from etbe - Russell Coker

Australian Mobile Phone Costs

I previously wrote about the changes to bundling prices by Virgin (which are similar to changes by other telcos) which make it uneconomical to take a bundled phone [1]. This makes it a little easier to compare telcos as there is no need to be concerned about which telcos offer phones you like. Now it’s just a matter of call costs and this is easier to assess when the cost of a phone isn’t adding to the confusion.

Comparing Australian Telcos

I’m currently with Virgin and I’ve found them to be OK in every way. They use the Optus network, which isn’t as good as the Telstra network but which supports more phones. Virgin charge $19 per month for a SIM without a phone; that gives you $450 worth of calls at a rate of $0.99 per minute + $0.40 connection and $0.28 per SMS – as well as free calls to other Virgin customers [2]. If the people you call most are on Virgin it’s going to be almost impossible to use the $450 at those rates. The $19 plan only includes 250MB of data per month; for an extra $5 you can get another 500MB, or for $10 you can get 2GB. For my use $29 is probably the right plan; for my wife $19 would do.
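
To illustrate why the $450 of included calls is nearly impossible to use up, here is a quick sketch using the advertised rates (the two-minute average call length is my assumption):

  # What $450 of included calls buys at Virgin's advertised rates.
  per_minute, flagfall, sms = 0.99, 0.40, 0.28

  two_minute_call = 2 * per_minute + flagfall   # $2.38 per two-minute call
  print(450 / two_minute_call)                  # ~189 such calls per month
  print(450 / sms)                              # or ~1607 SMS per month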

Vodafone doesn’t offer free calls to other Vodafone customers unless you are on a $30 per month plan, but that plan only gives 500MB of data measured in 12KB increments so that’s going to be expensive. Also Vodafone have had some quality problems recently so I’m not going to link to them.

Optus has some interesting options, starting with a $14 per month plan that includes 50MB of data and 500 SMS – that could be good value if you send lots of SMS [3]. Also all their plans give free 5 minute calls to other phones on the same account – this would be good if your SO is on the same phone account as you. But their plans offer small amounts of data and no apparent option for adding more, which is a real problem for anyone who wants to use the Internet seriously.

Telstra are really expensive, their web site is poorly designed, and they tell me to use Windows or a Mac. No-one who spends most of their time in urban areas should consider them; the only reason for using Telstra is their coverage of rural areas.

Internode have a new mobile phone service based on the Optus network which offers good value for money [4]. They start with a $10 per month plan that includes $165 of calls and SMS. The call cost is $0.90 per minute plus $0.35 flagfall and the SMS cost is $0.25. It also includes 100MB of data charged in 1KB increments. The $20 per month plan from Internode includes $450 of calls, $1000 of free calls to other Internode mobile phones, and 1.5G of data transfer. Internode charge $15 for sale and delivery of the SIM. They also offer 150GB of free “social networking” traffic; I wonder whether it would be viable to tunnel some other protocol over Twitter or Facebook…

TPG also has some good offers: $18 per month gives you 1.5GB of data measured in 10KB increments and $550 of phone calls at a cost of $0.90 per minute with $0.35 flagfall and $0.253 per SMS, plus unlimited calls and SMS to other TPG mobile and land-line phones [5]. The $10 plan from TPG offers $2000 of free calls to other TPG phones, so if you only need 250MB of data in 10KB increments (which is a lot less than 250MB in 1KB increments) and want lots of free calls to your SO then TPG could be really good. I’d rather pay an extra $2 to Internode and get data measured in 1KB increments.

Conclusion

My usage pattern includes a reasonably large number of calls to my wife and more than 500MB of data use every month. For this pattern the Internode plan is the cheapest for me and for my wife. It seems that a large portion of the phone-using population who use the Internet a lot would find this to be an ideal plan.

TPG is another good option, particularly for people who use TPG ADSL as they get a discount on the call rates and free calls to their land-line.

It seems to me that anyone who uses a mobile phone enough that a pre-paid option isn’t cheaper and who doesn’t need the coverage that only Telstra can provide will be best served by Internode or TPG.

I plan to transition to Internode some time after my current Virgin contract ends. I will probably delay the transition until the contracts for some of my wife’s relatives expire. If we all migrate at the same time then we keep getting free calls to each other – my relatives don’t use mobile phones much so there’s no money to save on calling them for free.

Related posts:

  1. Is Lebara the Cheapest Mobile Phone company in Australia? My parents have just got a mobile phone with a...
  2. Choosing an Australian Mobile Telco for use with Android Since playing with the IBM Seer augmented reality software [1]...
  3. Changing Phone Prices in Australia 18 months ago when I signed up with Virgin Mobile...

Syndicated 2012-09-24 05:12:28 from etbe - Russell Coker

Changing Phone Prices in Australia

18 months ago when I signed up with Virgin Mobile [1] the data transfer quotas were 200MB on the $29 per month plan and 1.5G on the $39 per month plan. About 4 months ago when I checked the prices the amounts of data had gone up on the same plans (2.25G for $39 per month from memory). Now $39 per month gets only 500MB! It seems that recently Virgin has significantly reduced their value for money.

Virgin does have an option to pay an extra $10 per month for 2GB of data if you sign up for 24 months. That is reasonably good value; when I first signed up with Virgin I paid $39 per month to get extra data transfer, but now I could use the $29 plan for phone access and spend $10 per month on data with a Wifi gateway device.

On top of this the phone plans aren’t nearly as good value as they used to be. When I signed up with Virgin the Sony Ericsson Xperia X10 was “free” on a $29 plan, and at the time that was a hell of a phone. I believe that the Samsung Galaxy S3 currently occupies a similar market position to the one that the Xperia X10 did 18 months ago – so it shouldn’t be much more expensive. But Virgin are offering the Galaxy S3 for $21 extra per month over 24 months on the $29 plan – a total cost of ($29+$21)*24==$1200 – while offering the same amount of calls and data transfer for $19 per month ($19*24==$456) when you don’t get a phone. This makes the effective price of a Galaxy S3 $1200-$456==$744, while Kogan [2] sells the same phone for $519 + postage!

The cheapest phone that Virgin is offering is a Galaxy S2 for $5 per month on a $29 plan, which, when compared to $19 per month for the same plan without a phone, makes the phone cost ($5+$10)*24==$360. Kogan sells the Galaxy S2 for $399 so there’s a possibility of a Virgin plan saving some money over buying a phone from Kogan. But given the choice of $360 for a Galaxy S2 from Virgin and the Kogan prices of $839 for a Galaxy Note 2, $349 for a Galaxy Nexus, $469 for a Galaxy Note, $529 for a Galaxy S3, and $219 for a HTC One V, I find it difficult to imagine that anyone would think that the $360 Galaxy S2 is the best option.
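
The handset premium calculation for both phones, as a sketch using the prices quoted above:

  # Effective handset price on a Virgin contract: the handset fee plus the
  # premium over the equivalent SIM-only plan, over a 24 month contract.
  def handset_cost(plan_with_phone, handset_fee, sim_only_plan, months=24):
      return (plan_with_phone + handset_fee - sim_only_plan) * months

  print(handset_cost(29, 21, 19))   # Galaxy S3: $744, vs $519 + postage at Kogan
  print(handset_cost(29, 5, 19))    # Galaxy S2: $360, vs $399 at Kogan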

I’ve previously investigated dual-sim phones for cheap calls and data [3] but they didn’t seem like good value at the time because the “free” phones offered by the telcos used to be a good deal. Now it seems that none of the telcos are offering good deals on phones so with my needs the way to go would be to buy a Samsung Galaxy S3 or Samsung Galaxy Note 2 from Kogan, and get the $19 plan from Virgin – probably with a $10 per month extra fee to get an extra 2GB of data. For my wife the best option would be to keep using the Xperia X10 on a $19 per month plan as she doesn’t seem to have any problem with the Xperia X10 that justifies spending hundreds of dollars.

I idly considered getting a portable Wifi-3G device to use a cheap pre-paid 3G data option ($10 per month) and a cheap phone plan without data (maybe $10 per month), but decided that it’s not worth the effort. The Virgin $19 plan gives me free calls to my wife and lots of calls to other numbers (more than I can use) and an extra $10 gives me all the data transfer I need. Using a Wifi-3G device would involve buying such a device and the hassle of carrying and using it; that wouldn’t save money for at least a year and would be annoying.

The sudden decrease in data quotas is a real concern though. It’s an indication that the telco cartel in Australia is pushing prices up, and that’s not a good sign. LTE is nice, but 3G with better quotas would be more generally useful to me.

Related posts:

  1. Dual SIM Phones vs Amaysim vs Contract for Mobile Phones Currently Dick Smith is offering two dual-SIM mobile phones for...
  2. Virgin Refunds Me $200 Virgin Mobile Excessive Billing In my previous post about the...
  3. A Mobile Phone for Sysadmin Use My telco Three have just offered me a deal on...

Syndicated 2012-09-22 13:16:38 from etbe - Russell Coker

Servers in the Office

I just had a conversation with someone who thinks that their office should have no servers.

The office in question has four servers, an Internet gateway/firewall system, the old file server (and also Xen server), the new file server, and the VOIP server.

The Internet gateway system could possibly be replaced by a suitably smart ADSL modem type device, but that would reduce the control over the network and wouldn’t provide much of a benefit.

The VOIP server has to be a separate system for low latency IMHO. In theory you could use a Xen DomU for running Asterisk or you could run Asterisk on the Dom0 of the file/Xen server, but that just makes things difficult. A VOIP server needs to be reliable and is something that you typically don’t want to touch once it’s working; in this case the Asterisk server has gone a few more years without upgrades than the Xen server. An Asterisk system could be replaced by a dedicated telephony device, which some people might consider to be removing a server, but really a dedicated VOIP device is just as much of a server as a P4 running Asterisk, only more expensive. A major advantage of a P4 running Asterisk is that you can easily replace the system at no cost if there is a hardware problem.

Having two file servers is excessive for a relatively small office, but running two servers is the common practice when one server is being replaced. The alternative is to just cut things over immediately, which has the potential for a lot of people to arrive at work on Monday and find multiple things not working as desired. Having two file servers is a temporary problem.

File Servers

The first real problem when trying to remove servers from an office is the file server.

ADSL links with Annex M can theoretically upload data at 3Mb/s, which means almost 400KB/s. So if you have an office with a theoretically perfect ADSL2+ Annex M installation then you could save a 4MB file to a file server on the Internet in not much more than 10 seconds if no-one else is using the Internet connection. Note that 4MB isn’t THAT big by today’s standards; the organisation in question has many files which are considerably bigger than that. Large files include TIFF and RAW files used for high quality image processing, MS-Office documents, and data files for most accounting programs. Saving a 65MB QuickBooks file in 3 minutes (assuming that your Annex M connection is perfect and no-one else is using the Internet) would have to suck.

Then there’s the issue of reading files. Video files (which are often used for training and promotion) are generally larger than 100MB, which would be more than 30 seconds of download time at ADSL2+ speed – but if someone sends an email to everyone in the office saying “please watch this video” then the average time to load it would be a lot more. Quickly examining my collection of Youtube downloads, I found a video which averaged 590KB/s; if an office with a theoretically perfect ADSL2+ connection giving 24Mb/s (3MB/s) download speed had such a file on a remote file server then a maximum of five people could view it at one time, and only if no-one else in the office was using the Internet.

Now when the NBN is connected (which won’t happen in areas like the Melbourne CBD for at least another 3 years) it will be possible to get speeds like 100Mb/s download and 25Mb/s upload. That would allow up to 20 people to view videos at once, and a 65MB QuickBooks file could be saved in a mere 22 seconds if everyone else was idle. Of course that relies on the size of data files remaining the same for another 3 years, which seems unlikely; currently no Youtube videos use resolutions higher than 1920*1080 (so they don’t take full advantage of a $400 Dell monitor) and there’s always potential for storing more financial data. I expect that by the time we all have 100Mb/25Mb speeds on the NBN it will be as useful to us as 24Mb/3Mb ADSL2+ Annex M speeds are today (great for home use but limited for an office full of people).
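
The transfer-time arithmetic behind those numbers, as a sketch using the link speeds and file sizes assumed in this post:

  # File save times and concurrent video viewers for a given link speed.
  def transfer_secs(size_mbyte, link_mbit):
      return size_mbyte * 8 / link_mbit

  print(transfer_secs(65, 3))     # 65MB file over 3Mb/s Annex M upload: ~173s
  print(transfer_secs(65, 25))    # the same file over 25Mb/s NBN upload: ~21s

  video_kbyte_per_sec = 590       # measured rate of a sample Youtube video
  def concurrent_viewers(link_mbit):
      return link_mbit * 1000 / 8 / video_kbyte_per_sec

  print(concurrent_viewers(24))   # ~5 viewers on 24Mb/s ADSL2+
  print(concurrent_viewers(100))  # ~21 viewers on 100Mb/s NBN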

There are of course various ways of caching data, but all of them involve something which would be considered to be a “server” and I expect that all of them are more difficult to install and manage than just having a local file server.

Of course instead of crunching the numbers for ADSL speeds etc you could just think for a moment about the way that 100baseT networking to the desktop has been replaced by Gigabit networking. When people expect each workstation to have 1000Mb/s send and receive speed it seems quite obvious that one ADSL connection shared by an entire office isn’t going to work well if all the work that is done depends on it.

Management could dictate that there is to be no server in the office, but if that was to happen then the users would create file shares on their workstations so you would end up with ad-hoc servers which aren’t correctly managed or backed up. That wouldn’t be an improvement and technically wouldn’t achieve the goal of not having servers.

Home Networking Without Servers

It is becoming increasingly common to have various servers in a home network. Due to a lack of space and power and the low requirements a home file server will usually be a workstation with some big disks, but there are cheap NAS devices which some people are installing at home. I don’t recommend the cheap NAS devices; I’m merely noting that they are being used.

Home entertainment is also something that can benefit from a server. A MythTV system for recording TV and playing music has more features than a dedicated PVR box. But even the most basic PVR ($169 for a 1TB device in Aldi now) is still a fairly complex computer which would probably conflict with any aim to have a house free of servers.

The home network design of having a workstation run as a file and print server can work reasonably well as long as the desktop tasks aren’t particularly demanding (IE no games) and the system doesn’t change much (IE don’t track Debian/Testing or otherwise have new versions of software). But this really only works if you have a few workstations.

Running an office without servers seems rather silly when none of my friends even manage to have a home without a server.

Running Internet Services

Hypothetically speaking, if one was to run an office without servers then that would require running all the servers in question on the Internet somewhere. For some things this can work better than a local server; for example most of my clients who insist on running a mail server in their office would probably get a better result if they had a mail server running on Linode or Hetzner – or one of the “Hosted Exchange” offerings if they want a Windows mail server. But for a file server, even if you were to get around the issue of bandwidth required to access the files in normal use, there’s the issue of managing a server (which is going to take more effort and expense than for a server on the LAN).

Then there’s the issue of backups. In my previous post about Hard Drives for Backup [1] I considered some of the issues related to backing data up over the Internet. The big problem however is a complete restore; if you have even a few dozen gigs of data that you want to transfer to a remote server in a hurry it can be a difficult problem, and if you have hundreds of gigs then it becomes a very difficult problem. I’m sure that I could find a Melbourne based Data Center (DC) that gives the option of bringing a USB attached SATA disk for a restore – but even that would involve a significant delay compared to doing a restore over the LAN. If a server on the office LAN breaks in the afternoon my client can make arrangements to let me work in their office in the evening to fix it, but sometimes DCs don’t allow 24*7 access, and sometimes when they do there are organisational problems that make access impossible when you want it (EG the people at the client company who are authorised become unavailable).

The Growth of Servers

Generally it’s a really bad idea to build a server that has exactly the hardware you need. The smart thing to do is to install more of every resource (disk, RAM, CPU, etc) than is needed and to allow expansion when possible (EG have some RAM slots and drive bays free). No matter how well you know your environment and its users you can be surprised by the way that requirements change. Buying a slightly bigger server at the start costs hardly any money but upgrading a server will cost a lot.

Once you have a server that’s somewhat over-specced you will always find other things to run on it. Many things could be run elsewhere at some cost, but if you have unused hardware then you may as well use it. Xen and other virtualisation systems are really good in this regard as they allow you to add more services without making upgrades difficult. This means that it’s quite common to have a server that is purchased for one task but which ends up being used for many tasks.

Anyone who would aspire to an office without servers would probably regard adding extra features in such a manner to be a problem. But really if you want to allow the workers to do their jobs then it’s best to be able to add new services as needed without going through a budget approval process for each one.

Conclusion

There probably are some offices where no-one does any serious file access and everyone’s work is based around a web browser or some client software that is suited to storing data on the Internet. But for an office where the workers use traditional “Office” software such as MS-Office or Libre-Office a file server is necessary.

Some sort of telephony server is necessary no matter how you do things. If you have a traditional telephone system then you might try not to call the PABX a “server”, but really that’s what it is. Then when the traditional phone service becomes too expensive you have to consider whether to use Asterisk or a proprietary system, in either case it’s really a server.

In almost every case the issue isn’t whether to have a server in the office, but how many servers to have and how to manage them.

Related posts:

  1. Planning Servers for Failure Sometimes computers fail. If you run enough computers then you...
  2. a good security design for an office One issue that is rarely considered is how to deal...
  3. Efficiency of Cooling Servers One thing I had wondered was why home air-conditioning systems...

Syndicated 2012-09-16 11:47:24 from etbe - Russell Coker

Back to the Xperia X10

10 months ago I was given a Samsung Galaxy S Android phone [1] to replace my Sony Ericsson Xperia X10. I also installed Cyanogenmod on it (here is a more detailed comparison of the phones with a focus on liberty [2]). But now I’m using the Xperia X10 again.

Hardware Reliability

Some time ago a friend told me that he bought a Sony phone in preference to a Samsung phone because he didn’t think that Samsung phones were reliable enough. I assured him that Samsung phones would be fine if you used a gel-case, but now I’m not so sure. My mother-in-law has a Samsung Galaxy S which now has a single crack across the face; it doesn’t appear that her phone was dropped, maybe it just bent a bit – it’s a fairly thin phone. My Galaxy S started crashing over the last few months and now many applications will crash any time I use 3G networking. Currently my Galaxy S is working well as a small Wifi tablet and hasn’t crashed since I replaced the SIM with one that has expired.

I wish that phone designers would make more solid products with bigger batteries. The fact that the Xperia X10 weighs maybe 20g more than the Galaxy S (according to Wikipedia) isn’t a problem for me. Even with the Mugen Power 1800mAh battery [4] to replace the original 1500mAh battery it’s still nowhere near the limit of the phone mass that I’m prepared to carry.

Sony Upgrades

Some time ago Sony released an Android 2.3.3 image for the Xperia X10. There is no Cyanogenmod image for the Xperia X10 because it has been locked down which greatly limits what can be done. Also Sony has a proprietary backup program on their Android 2.1 image which isn’t supported on Android 2.3.3 – this inspired my post about 5 principles of backup software [3]. Due to this pain I didn’t even try to upgrade the Xperia X10 phones for me and my wife until recently.

Before upgrading the Xperia X10 phones I was unable to use my wife’s phone. The phone didn’t seem to like recognising my touch, so long touch actions (such as unlocking the phone) were almost impossible for me. I think that this is due to the fact that I have fairly dry skin, which presumably reduces the capacitive coupling with the screen. After the upgrade both phones are usable for me, so presumably either Sony or Google improved the algorithms for recognising touch to work better with varying screen quality.

Comparing the Galaxy S and the Xperia X10

When I first started running Cyanogenmod on the Galaxy S I noticed that it was a lot faster than the Xperia X10 but I didn’t know why. It was documented that there had been performance improvements in Android 2.2. Now that I’m running Android 2.3.3 on the Xperia X10 I know that the performance difference is not due to the Android version. It could be due to Cyanogenmod optimisations or Sony stupidity, but it’s most likely due to hardware differences.

The Galaxy S has more RAM and storage which allows installing and running more applications. Now that I’m using the Xperia X10 for the bare minimum applications (phone calls, SMS, camera, email, ssh, and web browsing) it works quite well. I still play games on the Galaxy S and use it for more serious web browsing via Wifi. I think that the value I’m getting from the Galaxy S as a tiny wifi tablet is greater than the money I might get from selling a partially broken phone that’s been obsoleted by two significantly better models.

Conclusion

The camera on the Xperia X10 is significantly better than the one on the Galaxy S, so going back to a phone that has a great camera is a real benefit. But being slow and locked down is a real drag. I was tempted to buy a Samsung Galaxy Note or Galaxy S3, but it seemed like a bad idea given that my contract comes up for renewal in about 6 months, which means I’ll be offered a “free” phone which, while not really free, is still going to be cheaper than buying a phone outright.

Also, given the low opinion I now have of smart phone reliability, in future I’ll try to keep a small stock of spare Android phones to cover the case of broken phones.

Related posts:

  1. Galaxy S vs Xperia X10 and Android Network Access Galaxy S Review I’ve just been given an indefinite loan...
  2. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...
  3. Liberty and Mobile Phones I own two mobile phones at the moment, I use...

Syndicated 2012-09-11 03:58:17 from etbe - Russell Coker

Links August 2012

Google are providing some really good employee benefits, including benefits to the life partner of a deceased employee [1]. It’s not known if all those benefits are available outside the US; in any case the US is the first world country with the least social security, so they are needed most there.

A recent Australian legal case had a father petitioning the court to have his kids take his family name [2]. According to the news report no good reason was given for renaming the kids, merely tradition. The mother won.

GlassDoor.com is a site for reviewing companies [3]. It also has job adverts; it seems that they get people in to read the reviews and then advertise jobs to them.

Sarah Resnick interviewed Jacob Appelbaum (of Tor fame) about privacy issues and published the article as “Leave Your Cellphone at Home” [4]. It’s very interesting and references some resources such as riseup.net that I have to try using.

Systemd in Fedora 17 has multi-seat support [5]. They support plugging USB terminals in at run-time to dynamically add new consoles for GNOME sessions.

The Coding Horror blog has an amusing and informative post about why people shouldn’t learn to code [6].

Related posts:

  1. Links June 2012 This Youtube video is an amusing satire of EULAs and...
  2. Links January 2012 Cops in Tennessee routinely steal cash from citizens [1]. They...
  3. Links August 2008 Michael Janke is writing a series of posts about estimating...

Syndicated 2012-08-31 13:50:14 from etbe - Russell Coker

Woolworths Maths Fail

[Picture of the chocolate display: a discount from $3.99 to $3.00 advertised as “20% off”]

The above is a picture of the chocolate display at Woolworths, an Australian supermarket that was formerly known as Safeway – it had the same logo as the US Safeway so there’s probably a connection. This is actually a 24.81% discount. It’s possible that some people might consider it a legal issue to advertise something as a 25% discount when it’s 1 cent short of that (even though we haven’t had a coin smaller than 5 cents in Australia since 1991). But then if they wanted to advertise a discount percentage that’s a multiple of 5% they could have made the discount price $2.99, presumably whatever factors made them make the original price $3.99 instead of $4.00 would also apply when choosing a discount price.
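
For anyone checking the arithmetic, a trivial sketch:

  original, sale = 3.99, 3.00
  print((original - sale) / original * 100)   # 24.81...%, not 20%
  print(original * 0.75)                      # 2.9925: a true 25% discount price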

So the question is, do Woolworths have a strict policy of rounding down discount rates to the nearest 5% or do they just employ people who failed maths in high school?

Sometimes when discussing education people ask rhetorical questions such as “when would someone use calculus in real life”; I think that the best answer is “people who have studied calculus probably won’t write such stupid signs”. Sure, the claimed discount is technically correct, as they don’t say “no more than 20% off”, and it’s not misleading in a legal sense (it’s OK to claim less than you provide), but it’s annoyingly wrong. Well-educated people don’t do that sort of thing.

As an aside, the chocolate in question is Green & Black’s, a premium chocolate line that is Fair Trade, Organic, and very tasty. If you are in Australia then I recommend buying some because $3.00 is a good price.

Related posts:

  1. fair trade is the Linux way I have recently purchased a large quantity of fair trade...
  2. LUG Meetings etc Recently I was talking to an employee at Safeway (an...
  3. The Sad State of Shopping in Australia Paul Wayper has written a blog post criticising the main...

Syndicated 2012-08-29 09:52:11 from etbe - Russell Coker

SSD for a Workstation

SSDs have been dropping in price recently, so I just bought four Intel 120G devices for $115 each. I installed the first one for my mother-in-law, who had been complaining about system performance. Her system boot time went from 90 seconds to 20 seconds and a KDE login went from about 35 seconds to about 10 seconds. The real problem that she had reported was occasional excessive application delay; while it wasn’t possible to diagnose that properly, I think it was a combination of her MUA doing synchronous writes while other programs such as Chromium were doing things. To avoid the possibility of a CPU performance problem I replaced her 1.8GHz E4300 system with a 2.66GHz E7300 that I got from a junk pile (it’s amazing what’s discarded nowadays).

I also installed an SSD in my own workstation (a 2.4GHz E4600). The boot time went down from 45s on Ext4 without an encrypted root to 27s with root on BTRFS, including the time taken to enter the encryption password (maybe about 23s excluding my typing time). The improvement wasn’t as great, but that’s because my workstation does some things on bootup that aren’t dependent on disk IO, such as enabling a bridge with STP (making every workstation a bridge is quieter than using switches). KDE login went from about 27s to about 12s, and the time taken to start Chromium and have it be usable (rather than blocking on disk IO) went from 30 seconds to an almost instant response (maybe a few seconds)! Tests on another system indicate that Chromium startup could be improved a lot by purging history, but I don’t want to do that. It’s unfortunate that Chromium only supports deleting recent history (to remove incriminating entries) but doesn’t support deleting ancient history that just isn’t useful.

I didn’t try to seriously benchmark the SSD (changing from Ext4 to BTRFS on my system would significantly reduce the accuracy of the results); I have plans for doing that on more important workloads in the near future. For the moment the most casual tests have shown a significant performance benefit, so it’s clear that an SSD is the correct storage option for any new workstation which doesn’t need more than 120G of storage space. $115 for an SSD vs $35 for a HDD is a fairly easy choice for a new system. For larger storage the price of hard drives increases more slowly than that of SSDs.

In spite of the performance benefits I doubt that I will gain a real benefit from this in the next year. The time taken to install the SSD equates to dozens of boot cycles, which, given a typical workstation uptime in excess of a month, won’t be recouped soon. One minor benefit is that deleting messages in Kmail is now an instant operation, which saves a little annoyance, and there will be other occasional benefits.
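
As a sketch of that break-even claim (assuming the install took about 15 minutes; the boot times are the ones measured above):

  install_secs = 15 * 60
  saving_per_boot = 45 - 27              # old Ext4/HDD boot vs new BTRFS/SSD boot
  print(install_secs / saving_per_boot)  # 50.0 boots, i.e. dozens of boot cycles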

One significant extra benefit is that an SSD is quiet and dissipates less heat, which might allow the system cooling fans to run more slowly. As noisy computers annoy me, an SSD is a luxury feature. Also it’s good to test new technologies that my clients may need.

The next thing on my todo list is to do some tests of ZFS with SSD for L2ARC and ZIL.

Related posts:

  1. How I Partition Disks Having had a number of hard drives fail over the...
  2. Xen and Swap The way Xen works is that the RAM used by...
  3. big and cheap USB flash devices It’s often the case with technology that serious changes occur...

Syndicated 2012-08-28 12:40:05 from etbe - Russell Coker
