Recent blog entries

28 Feb 2015 StevenRainwater   » (Master)

Men Like Gods by H. G. Wells

Barnstaple receives final instruction before his cross-time journey home. Portion of a George Bellows illustration from the 1923 edition of Men Like Gods.

Men Like Gods by H. G. Wells might be subtitled “Mr. Barnstaple takes a holiday” as that’s a pretty good summary of the basic plot. This 1922 book is partially intended as a Utopian novel and follows the usual convention of having an average, modern human transported into a Utopian world to represent the reader as he uncovers the workings and nature of Utopia. As might be expected of Wells, he goes the extra step of giving the novel a science fiction wrapper and, in the process, establishes not one but several new genres of science fiction. Just as all time travel novels trace their heritage back to Wells’s book The Time Machine, all parallel universe, multiverse, para-time, cross-time, and alternate history novels descend from Men Like Gods.

Let’s get the plot out of the way first as that’s the least interesting aspect of the book. Mr. Barnstaple is a downtrodden enlightenment liberal who writes for a leftist newspaper. He’s given up hope of changing the world. He’s depressed, hates his job, and is annoyed by his family. He determines a solo holiday is the only thing that will save his sanity and sets out for nowhere in particular in the Yellow Peril, his little two-seater car. Coming around a curve in the countryside, he and two other vehicles are suddenly swept out of this world and find themselves in a strange land near the smoking wreckage of a scientific experiment gone wrong. They soon meet some inhabitants of this new world and find it’s similar to Earth but a thousand years in the future. Needing a name for the place, they decide to refer to it as, wait for it, Utopia!

As Barnstaple learns about this amazing world, he realizes it embodies all the ideals he believes in. The others in his party, being more conservative, particularly a narrow-minded priest, see the world as degenerate. They make nothing of the peace, prosperity and happiness all around them. Instead they see people who don’t wear enough clothing, don’t have religion, aren’t capitalists, and offend in numerous other ways. With the exception of Barnstaple, the Earthlings soon hatch an ill-conceived plot to take some Utopians hostage, thinking they can use that as a springboard to world domination and remake Utopia in the image of Earth. I won’t give away too much, but there’s never any doubt Barnstaple will survive the goings-on, and soon enough he is sent back to Earth all the wiser, with a renewed sense of hope that Earth can someday become like Utopia if we all work hard at improving things.

What sets the book apart from other Utopian novels, and gives it an honored place in the annals of science fiction, is the first description of the multiverse: the first hint that multiple universes could be “parallel” to, and even duplicates of, our own; in this case only time-shifted some thousand years. Utopia is in a universe that is essentially an alternate timeline of Earth’s universe. The book also postulates that while some universes are nearly identical, others may be wildly different. It’s also the first description of a technological method of cross-timeline travel between parallel universes. As if that’s not enough, there’s a description towards the end of the Utopians’ plans to leave their planet and explore the stars using space travel technology that allows them to bypass normal spatial distances by taking a shortcut; it’s essentially an early description of hyperspace, subspace, warp drive or something along those lines. And for his last trick, Wells explains away the ability of the Earthlings to communicate with the Utopians (who obviously are unlikely to speak English): the Utopians evolved telepathic abilities. They speak using their minds and we hear them in whatever language we naturally understand, provided we know a word that fits the concept they’re thinking to us.

Here’s the actual description of the multiverse:

Serpentine proceeded to explain that just as it would be possible for any number of practically two-dimensional universes to lie side by side, like sheets of paper, in three dimensional space, so in the many dimensional space about which the ill equipped human mind is still slowly and painfully acquiring knowledge, it is possible for an enumerable quantity of practically three dimensional universes to lie, as it were, side by side and to undergo a roughly parallel movement through time.

Travel between parallel universes is accomplished using a machine that takes a cube-shaped chunk of the universe you’re in and “rotates” it through a higher dimension, causing it to come into contact with some nearby universe. The first test of the technology works, but the machine explodes, killing the operators. By the end of the book, the machine is not only rebuilt but improved: made portable and, as an added bonus, able to control which universe it connects with, conveniently allowing Barnstaple to be sent home. Interestingly, because Barnstaple arrived accidentally in a moving car and the Utopians wish to return him the same way, they set up an arrangement reminiscent of Back to the Future in which Barnstaple must drive along a segment of roadway, hitting a trip wire strung across the road and triggering the cross-time machine at precisely the right instant to transport his moving car.

Wells makes a variety of political observations about the failings of our own world, including his complaints with the capitalism, Marxism, and socialism of his day. He describes an economic system in which each Utopian citizen lives a government-funded life up to the completion of a very elaborate and detailed education, after which they must choose a path in life that contributes to the world’s economy. They can choose to do anything they like, ranging from a required minimum that allows them to spend most of their life goofing off, to pursuing any career or endeavor, even acquiring wealth and using it as they choose. The Utopians lack any formal government or rulers. Much of the world operates on the “do-ocracy” principle common in hackerspaces: if you see something in the world that needs improvement, it’s up to you to do it, to organize the doing of it, or to pay someone to do it. At one point Crystal, a Utopian student who befriends Barnstaple, explains that society is based on The Five Principles of Liberty:

  1. Privacy – All individual personal facts are private between the citizen and the public organization to which he entrusts them, and can be used only for his convenience and with his sanction (and anonymously for statistical purposes only).
  2. Free Movement – A citizen, subject to discharge of his public obligations, may go without permission or explanation to any part of the planet.
  3. Unlimited Knowledge – All that is known, except individual personal facts about living people, is on record and easily available to everyone. Nothing may be kept from a citizen nor misrepresented to him.
  4. Lying is the Blackest Crime – Where there are lies there cannot be freedom. Facts may not be suppressed nor stated inexactly.
  5. Free Discussion and Criticism – Any citizen is free to criticize and discuss anything in the whole universe provided he tells no lies either directly or indirectly. A citizen may discuss respectfully or disrespectfully, with any intent, however subversive. A citizen may express ideas in any literary or artistic form desired.

Before Barnstaple leaves, he makes one appeal to stay, speaking to a wise, old Utopian who explains that he must go back and that Earth will eventually follow the same course of history to become Utopian in its own time. He warns Barnstaple against attempting premature contact between the two universes until Earth has gotten its house in order:

What could Utopians do with the men of Earth? … You would be too numerous for us to teach … Your stupidities would get in our way, your quarrels and jealousies and traditions, your flags and religions, and all your embodied spites and suppressions, would hamper us in everything we should want to do. We should be impatient with you, unjust and overbearing. You are too like us for us to be patient with your failures … We might end by exterminating you.

Given the way their economy works, it’s fairly clear that it would fall apart pretty quickly if flooded with citizens who have the typical nature of modern humans. In the end, Men Like Gods presents a Utopia that needs better humans to be workable, but at least it recognizes that, a fact that sets it above much of the Utopian literature that preceded it.

Syndicated 2015-02-28 18:52:57 from Steevithak of the Internet

28 Feb 2015 eMBee   » (Journeyer)

Building an API with Zinc-REST in Pharo Smalltalk

In this session we are going to build a simple RESTful API using the Zinc-REST package.

The base image is again Moose, this time the latest build of Moose 5.1.

You may watch part one, part two and part three of this series if you are interested in finding out what led to this point. They are, however, not required to follow this session.

Syndicated 2015-02-28 17:23:13 from DevLog

28 Feb 2015 dmarti   » (Master)

Personal data, politics, and an opportunity

Charles Stross, in A different cluetrain:

"Our mechanisms for democratic power transfer date to the 18th century. They are inherently slower to respond to change than the internet and our contemporary news media."

Bruce Schneier, on Ars Technica:

"Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results."

The bias doesn't have to be deliberate, though. Eric Raymond posted an example on Google Plus.

G+ may be engaging in non-viewpoint-neutral censorship of news articles relating to firearms.

Turned out that there was a bug in how Google Plus interacted with the CMS on a pro-Second-Amendment site. Not a deliberate political conspiracy, but software is full of bugs, especially when independently developed projects interact. When bugs affecting some political content are quietly fixed faster than bugs affecting others, it's not a sneaky conspiracy. It's just the natural result of programmers and early adopters choosing to test with less of the content that isn't a "cultural fit". Software developers have political views, and those views tend to escape into their software, and affect the software's users.

Google and Facebook don't have to decide to manipulate elections. Manipulation is an emergent property of networked software development. On the Planet of Classical Economics, Facebook and Google would sell their user-manipulating power to the highest bidder. But here isn't there. In the USA, the Data Party (mostly for mental extraction, mostly "blue") has the mainstream Internet businesses, and the Carbon Party (mostly for resource extraction, mostly "red") doesn't.

Which is the same problem that Roger Ailes had for TV in 1970, and we know how he ended up solving that one.

Today, is somebody on the Carbon Party side doing for their "SJW in our people's pockets" problem what Ailes did for their "liberal in our people's living rooms" problem? Yes, a Data Party has a head start over a Carbon Party in a race to build a mobile platform, but plenty of "red state" people can code, write checks, and place orders from the countries that still know how to make things.

Are we going to get two parallel user-tracking industries in the USA, the same way we have two factions in broadcast and cable media? And will each one offer tools to protect users from the other? I might buy a Koch-o-Phone just to watch the OS and the inevitable PLA spyware fight over my Facebook timeline.

Syndicated 2015-02-28 15:45:52 from Don Marti

28 Feb 2015 shlomif   » (Master)

Tech Tip: How to Configure Qt 5 Behaviour When Running on KDE4

Recently, I noticed that when running the VLC-2.2.0 prerelease, which is based on Qt 5 for its GUI, on my Mageia Linux 5 system on top of KDE 4, a single click in the playlist immediately played a file instead of selecting it and reserving the double click for activation. After a good deal of research and thought, I figured out a way to configure Qt 5’s behaviour on top of KDE.

To do so:

  1. Install the “lxqt-config” and “lxqt-qtplugin” packages.

  2. Add the line “export QT_QPA_PLATFORMTHEME=lxqt” somewhere before the desktop startup in your “.Xclients” or “.xinitrc” file (or in your “.bashrc”).

  3. Restart the X/KDE environment.

  4. Run “lxqt-config” to configure the appropriate behaviour.
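For reference, step 2 boils down to a one-line addition; a minimal sketch (which file you use depends on how your X session starts):

```shell
# Step 2 from above: tell Qt 5 applications to load the lxqt platform
# theme plugin. Put this line in ~/.Xclients or ~/.xinitrc (or ~/.bashrc)
# before the desktop environment is started:
export QT_QPA_PLATFORMTHEME=lxqt

# Any Qt 5 application launched afterwards inherits the setting:
echo "QT_QPA_PLATFORMTHEME=$QT_QPA_PLATFORMTHEME"
```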

This way one can use the Qt5 customisations of lxqt in KDE 4. Enjoy!

Licence

You can reuse this entry under the Creative Commons Attribution 3.0 Unported licence, or at your option any later version. See the instructions on how to comply with it.

Syndicated 2015-02-28 12:12:47 from shlomif

27 Feb 2015 Killerbees   » (Journeyer)

Put code in your google docs

I'm writing a technical document using google docs, and I want to put code snippets into it.
I thought this would be hard, before I discovered http://hilite.me/

This awesome, and awesomely simple, web app converts snippets of code into coloured HTML.
It can format a comprehensive range of languages and apply an equally impressive variety of styles.

Here's a snip of JavaScript. I just copied and pasted it straight in here, so it doesn't handle the overflow well, but you have to admit that it's pretty cool for something that has a quick copy/paste/click/copy/paste workflow:

for (var i = 0; i < dataObj.eventData.children.length; i++) {
    if (dataObj.isProduction) {
        dataObj.eventData.children[2].outcome.coupling = "1";
    }
    if (dataObj.eventData.children[i].outcome.coupling.length > 0) {
        outcomeData = dataObj.eventData.children[i].outcome;
        priceBoost[counter] = Object.create(priceBoostObject);
        priceBoost[counter].createPriceBoostHTML(outcomeData);
        priceBoost[counter].applyPriceBoost(outcomeData);
        priceBoost[counter].applyExtraCss(outcomeData);
        priceBoost[counter].initialiseAnimatedImages('animated-' + outcomeData.id, 'images/test.png', 'test.html');
        priceBoost[counter].checkTextWidth(outcomeData.id);
        counter++;
    }
}
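As far as I can tell, hilite.me is built on the Python Pygments library, so if you'd rather not paste code into a web form, you can generate the same kind of inline-styled HTML locally. A minimal sketch (assumes Pygments is installed, e.g. via pip install pygments):

```python
from pygments import highlight
from pygments.lexers import JavascriptLexer
from pygments.formatters import HtmlFormatter

# A tiny stand-in snippet; any JavaScript string works here.
snippet = 'for (var i = 0; i < 10; i++) { total += i; }'

# noclasses=True inlines every colour as a style="..." attribute, so the
# resulting HTML needs no external stylesheet and survives a straight
# copy/paste into a Google Doc or an email.
html = highlight(snippet, JavascriptLexer(), HtmlFormatter(noclasses=True))
print(html)
```

The output is a single `<div>` of colour-styled `<span>` elements, which is essentially what hilite.me hands back.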

Syndicated 2015-02-27 11:42:00 (Updated 2015-02-27 11:42:57) from Danny Angus

26 Feb 2015 dmarti   » (Master)

Ad blocking, bullshit and a point of order

(Bob Hoffman says that the B word in a post title is good for more traffic so let's try it.)

Alex Kantrowitz for Advertising Age: Publishers Watch Closely as Adoption of Ad Blocking Tech Grows.

Adblock Plus, for instance, recently surpassed 300 million installs, according to spokesman Mark Addison, who said it stood at 200 million roughly a year ago. Mozilla has seen more than 200,000 downloads of Adblock Plus nearly every day since Sept. 1. Mr. Addison attributed the extension's popularity primarily to the fact that it is now available on every browser.

Lots of stuff is "available on every browser" but sank without a splash. There must be something more going on.

No One Should Be Outed By an Ad: Marc Groman of the Network Advertising Initiative points out that

A young man or (woman) searches on his computer in the privacy of his home for information about sexual orientation or coming out as gay. Hours or days later, he receives ads for gay-related products or services while surfing on totally unrelated websites. Maybe this happens while at school, in the office or when sharing his computer with family members. Recent developments in cross-device tracking mean that ads for gay events or venues could surface not only on his home computer where he originally searched for the information, but on his work laptop or tablet. In addition, the ads could even be displayed on his parents’ computers, which could unknowingly be linked to his PC because they appear to be part of the same household.

According to Groman, "nearly 100 of the most responsible companies in online advertising today" won't do this.

But as for the remaining, less scrupulous adtech firms, the take-away is: better get your ad blocker on.

Brian Merchant on Motherboard:

72 percent of US internet users look up health-related information online. But an astonishing number of the pages we visit to learn about private health concerns—confidentially, we assume—are tracking our queries, sending the sensitive data to third party corporations, even shipping the information directly to the same brokers who monitor our credit scores.

What could possibly go wrong?

That's just a couple of targeted advertising stories from the past week. And the IAB is worried that ad blockers are a thing? That's like crapping on the sidewalk and complaining about people wearing rubber boots.

"Online advertising" is turning into a subset of "creepy scary stuff on the Internet." Advertising done right can be a way to pay for things that people want to read, but it's not working.

So why do publishers put up with this? Why not just run only first-party ads? It's a long story, but basically because other publishers do.

If websites could coordinate on targeting, proposition 1 suggests that they might want to agree to keep targeting to a minimum. However, we next show that individually, websites win by increasing the accuracy of targeting over that of their competitors, so that in the non-cooperative equilibrium, maximal targeting results.

So the gamesmanship of it all means that publishers end up in a spiral of crap.

Ad blocking isn't helping. The AdBlock Plus "acceptable ads" racket will pass ads that are superficially less annoying, but still have fundamental tracking problems. It's "acceptable" to split a long article into multiple annoying pages to put ads at top and bottom, but not to put ads within the flow of a modern long-scrolling article. "Acceptable ads" requires 1990s-vintage design and avoids fixing the real problems.

Fortunately, there's a solution that works for users and for publishers. Tracking protection is a safe, publisher-friendly alternative to ad blocking. It blocks the creepy stuff to help publishers, without dictating design or interfering with quality ads.

  • Tracking Protection on Firefox filters out tracking, while letting quality ads through. There's no "acceptable" program to join, and no limits on design.

  • Disconnect is a browser extension to protect users from the "web of invisible trackers."

Tracking protection helps publishers solve the big problem, the problem that the IAB doesn't want to talk about. Data leakage.

The prime "bovine-fertilizer-based information solution" here is all the verbiage about trying to break out the ad blocking problem from the ad fraud problem from the "print dollars to digital dimes" problem. It's all connected. Shovel through it all and you get something like:

  • Adtech as we know it is based on data leakage.

  • Ad blocking, along with adtech fraud, is a side-effect of the data leakage problem.

  • In the short term, data leakage is bad for publishers and good for adtech.

Having meetings to express grave concern about ad blocking isn't the answer, any more than having meetings to express grave concern about ad fraud is the answer.

Arguing about how to clean the carpet while the sewer pipe is still broken is not the answer.

Getting more users onto tracking protection, as an alternative to ad blocking? A way to fix data leakage at the source? For publishers, that's a good step toward the answer.

Point of order: I'm now avoiding the word "privacy" except in a direct quotation or a "Privacy Policy" document.

If I say it again, it's $1 in the jar for the EFF.

Terms to try to use instead:

  • tracking protection

  • data leakage

  • brand safety

Privacy is a big hairy problem, like the "freedom" in "free software." Plenty of people are philosophizing about it. But working with the web every day, the fixes that need to happen are not in the philosophy department, but in plugging the leaks that enable dysfunctional ads and building the systems to enable better ones.

Syndicated 2015-02-26 14:44:08 from Don Marti

26 Feb 2015 amits   » (Journeyer)

FUDCon Pune: Now Accepting Subsidy Requests

If you’re planning on attending FUDCon Pune, and are going to need a subsidy for travel and accommodation, you should head to this link and fill out the form to request one.

You may have some questions about this, and we already have some answers.  Feel free to hop on to the fedora-india list or the #fedora-india IRC channel on Freenode if you have other questions.

Syndicated 2015-02-26 15:27:31 (Updated 2015-02-26 15:28:12) from Think. Debate. Innovate.

26 Feb 2015 mbanck   » (Journeyer)

My recent Debian LTS activities

Over the past months, my employer credativ has sponsored some of my work time to keep PostgreSQL updated for squeeze-lts. Version 8.4 of PostgreSQL was declared end-of-life by the upstream PostgreSQL Global Development Group (PGDG) last summer, around the same time official squeeze support ended and squeeze-lts took over. Together with my colleagues Christoph Berg (who is on the PostgreSQL package maintainer team) and Bernd Helmle, we continued backpatching changes to 8.4. We tried our best to follow the PGDG backpatching policy and looked only at commits on the oldest still-maintained branch, REL9_0_STABLE.

Our work is publicly available as a separate REL8_4_LTS branch on Github. The first release (called 8.4.22lts1) happened this month mostly coinciding with the official 9.0, 9.1, 9.2, 9.3 and 9.4 point releases. Christoph Berg has uploaded the postgresql-8.4 Debian package for squeeze-lts and release tarballs can be found on Github here (scroll down past the release notes for the tarballs).

We intend to keep the 8.4 branch updated on a best-effort community basis for the squeeze-lts lifetime. If you have not yet updated from 8.4 to a more recent version of PostgreSQL, you probably should. But if you are stuck on squeeze, you should use our LTS packages. If you have any questions or comments concerning PostgreSQL for squeeze-lts, contact me.

26 Feb 2015 mikal   » (Journeyer)

Tuggeranong Hill (again)

I walked up Tuggeranong Hill again, this time as a geocaching run. This is the first trig I've visited twice!

   

Interactive map for this route.

Tags for this post: blog pictures 20150225-tuggeranong_hill photo canberra tuggeranong bushwalk trig_point
Related posts: Big Monks; A walk around Mount Stranger; Forster trig; Two trigs and a first attempt at finding Westlake; Taylor Trig; Oakey trig

Syndicated 2015-02-25 16:06:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

26 Feb 2015 AlanHorkan   » (Master)

Krita 2.9

Congratulations to Krita on releasing version 2.9 and a very positive write-up for Krita by Bruce Byfield writing for Linux Pro Magazine.

I'm amused by his comment comparing Krita to "the cockpit of a fighter jet" and although there are some things I'd like to see done differently* I think Krita is remarkably clear for a program as complex as it is and does a good job of balancing depth and breadth. (* As just one example: I'm never going to use "File, Mail..." so it's just there waiting for me to hit it accidentally, but as far as I know I cannot disable or hide it.)

Unfortunately Byfield writes about Krita "versus" other software. I do not accept that premise. Different software does different things, users can mix and match (and if they can't that is a different and bigger problem). Krita is another weapon in the arsenal. Enjoy Krita 2.9.

Syndicated 2015-02-25 23:42:50 from Alan Horkan

25 Feb 2015 teknopup   » (Apprentice)

Seeking supporters for a #opensource #linux @Raspberry_Pi PI 2 powered VR headset


https://www.kickstarter.com/projects/axiomfinity/tekn-vr-headset-running-raspberry-pi-2-w-rasbian-l

25 Feb 2015 yosch   » (Master)

Libre Graphics Magazine issue on fonts

Go check out the latest edition of the Libre Graphics Magazine.

The issue (2.3) is about type, libre/open fonts and related topics from the perspective of a fairly wide selection of authors.

Go ahead: preview, buy, subscribe :-)

24 Feb 2015 bagder   » (Master)

curl, smiley-URLs and libc

Some interesting Unicode URLs have recently been seen used in the wild – like in this billboard ad campaign from Coca Cola, and a friend of mine asked me about curl in reference to these and how it deals with such URLs.

emojicoke-by-stevecoleuk-450

(Picture by stevencoleuk)

I ran some tests and decided to blog my observations since they are a bit curious. The exact URL I tried was ‘www.😃.ws’ – it is really hard to enter by hand so now is the time to appreciate your ability to cut and paste! It appears they registered several domains for a set of different smileys.

These smileys are not actually allowed IDN (where IDN means International Domain Names) symbols, which makes these domains a bit different. They should not (see below for details) be converted to punycode before being resolved; instead, I assume the raw UTF-8 sequence should, or at least will, be fed into the name resolver function. Either way, the name gets passed along as either punycode or a UTF-8 string.

If curl was built to use libidn, it still won’t convert this to punycode; the verbose output says “Failed to convert www.😃.ws to ACE; String preparation failed”.

curl (exact version doesn’t matter) using the stock threaded resolver

  • Debian Linux (glibc 2.19) – FAIL
  • Windows 7 – FAIL
  • Mac OS X 10.9 – SUCCESS

Perhaps to no surprise, then, the exact same results are shown if I try to ping those host names on these systems. It works on the Mac; it fails on Linux and Windows. Wget 1.16 also fails on my Debian systems (just as a reference; I didn’t try it on any of the other platforms).

My curl build on Linux that uses c-ares for name resolving instead of glibc succeeds perfectly. host, nslookup and dig all work fine with it on Linux too (as well as nslookup on Windows):

$ host www.😃.ws
www.\240\159\152\131.ws has address 64.70.19.202
$ ping www.😃.ws
ping: unknown host www.😃.ws

While the same command sequence on the mac shows:

$ host www.😃.ws
www.\240\159\152\131.ws has address 64.70.19.202
$ ping www.😃.ws
PING www.😃.ws (64.70.19.202): 56 data bytes
64 bytes from 64.70.19.202: icmp_seq=0 ttl=44 time=191.689 ms
64 bytes from 64.70.19.202: icmp_seq=1 ttl=44 time=191.124 ms

Slightly interesting additional tidbit: if I rebuild curl to use gethostbyname_r() instead of getaddrinfo() it works just like on the mac, so clearly this is glibc having an opinion on how this should work when given this UTF-8 hostname.

Pasting the URL into Firefox and Chrome works just fine. They both convert the name to punycode and use “www.xn--h28h.ws”, which then resolves to the same IPv4 address.
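The browsers' conversion is easy to reproduce with a couple of lines of Python; this sketch uses the standard library's RFC 3492 punycode codec and prepends the "xn--" ACE prefix by hand:

```python
# Browsers turn each non-ASCII label into its punycode form and prepend
# the "xn--" ACE prefix; for the smiley label this yields "xn--h28h".
label = "😃"
ace = "xn--" + label.encode("punycode").decode("ascii")
print(ace)                    # xn--h28h
print("www." + ace + ".ws")   # www.xn--h28h.ws
```

Note that a full IDNA implementation would also apply the string preparation rules that libidn complained about above; raw punycode skips that step entirely.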

What do the IDN specs say?

(The U+263A smiley)

This is not my area of expertise. I had to consult Patrik Fältström here to get this straightened out (but if I got something wrong here, the mistake is still all mine). Apparently this smiley is allowed in RFC 3490 (IDNA2003), but that has been replaced by RFC 5890–5892 (IDNA2008) where it is DISALLOWED. If you read the spec, this is 263A.

So, depending on which spec you follow it was a valid IDN character or it isn’t anymore.

What does the libc docs say?

The POSIX docs for getaddrinfo don’t contain enough info to tell who’s right, but they don’t forbid UTF-8 encoded strings. The regular glibc docs for getaddrinfo also don’t say anything, and interestingly, the Apple Mac OS X version of the docs says just as little.

With this complete lack of guidance, it is hardly any additional surprise that the glibc gethostbyname docs also don’t mention what it does in this case, but clearly it doesn’t do the same thing as getaddrinfo, in the glibc case at least.

What’s on the actual site?

A redirect to www.emoticoke.com which shows a rather boring page.

emoticoke

Who’s right?

I don’t know. What do you think?

Syndicated 2015-02-24 19:26:30 from daniel.haxx.se

23 Feb 2015 jas   » (Master)

Laptop Buying Advice?

My current Lenovo X201 laptop has been with me for over four years. I’ve been looking at new laptop models over the years, thinking that I should upgrade. Every time, after checking performance numbers, I’ve reached the conclusion that it is not worth it. The most performant Intel Broadwell processor is the Core i7 5600U, and it offers only about 1.5 times the performance of my current Intel Core i7 620M. Meanwhile, disk performance has increased more rapidly, but changing the disk in a laptop is usually simple. Two years ago I upgraded to the Samsung 840 Pro 256GB disk, and this year I swapped that for the Samsung 850 Pro 1TB, and both have been good investments.

Recently my laptop usage patterns have changed slightly, and instead of carrying one laptop around, I have decided to aim for multiple semi-permanent laptops at different locations, coupled with a mobile device that right now is just my phone. The X201 will remain one of my normal work machines.

What remains is to decide on a new laptop, and there begins the fun. My requirements are relatively easy to summarize. The laptop will run a GNU/Linux distribution like Debian, so it has to work well with it. I’ve decided that my preferred CPU is the Intel Core i7 5600U. The screen size, keyboard and mouse is mostly irrelevant as I never work longer periods of time directly on the laptop. Even though the laptop will be semi-permanent, I know there will be times when I take it with me. Thus it has to be as lightweight as possible. If there would be significant advantages in going with a heavier laptop, I might reconsider this, but as far as I can see the only advantage with a heavier machine is bigger/better screen, keyboard (all of which I find irrelevant) and maximum memory capacity (which I would find useful, but not enough of an argument for me). The only sub-1.5kg laptops with the 5600U CPU on the market right now appears to be:

Lenovo X250 1.42kg 12.5″ 1366×768
Lenovo X1 Carbon (3rd gen) 1.44kg 14″ 2560×1440
Dell Latitude E7250 1.34kg 12.5″ 1366×768
Dell XPS 13 1.26kg 13.3″ 3200×1800
HP EliteBook Folio 1040 G2 1.49kg 14″ 1920×1080
HP EliteBook Revolve 810 G3 1.4kg 11.6″ 1366×768

I find it interesting that Lenovo, Dell and HP each have two models that meet my 5600U/sub-1.5kg criteria. Regarding screens, there may exist models with other resolutions. The XPS 13, HP 810 and X1 models I looked at had touch screens, the others did not. As the screen is not important to me, I didn’t evaluate this further.

I think all of them would suffice, and there are only subtle differences. All except the XPS 13 can be connected to peripherals using one cable, which I find convenient to avoid a cable mess. All of them have DisplayPort, but HP uses DisplayPort Standard and the rest use miniDP. The E7250 and X1 have HDMI output. The X250 boasts a 15-pin VGA connector, none of the others have it — I’m not sure whether that is an advantage or a disadvantage these days. All of them have 2 USB 3.0 ports except the E7250, which has 3 ports. The HP 1040, XPS 13 and X1 Carbon do not have RJ45 Ethernet connectors, which is a significant disadvantage to me. Ironically, only the smallest one of these, the HP 810, can be memory upgraded to 12GB, with the others being stuck at 8GB. The HP models and the E7250 support NFC, although Debian support is not certain. The E7250 and X250 have a smartcard reader, and again, Debian support is not certain. The X1, X250 and 810 have a 3G/4G card.

Right now, I’m leaning towards rejecting the XPS 13, X1 and HP 1040 because of lack of RJ45 ethernet port. That leaves me with the E7250, X250 and the 810. Of these, the E7250 seems like the winner: lightest, 1 extra USB port, HDMI, NFC, SmartCard-reader. However, it has no 3G/4G-card and no memory upgrade options. Looking for compatibility problems, it seems you have to be careful to not end up with the “Dell Wireless” card and the E7250 appears to come in a docking and non-docking variant but I’m not sure what that means.

Are there other models I should consider? Other thoughts?

Syndicated 2015-02-23 22:49:21 from Simon Josefsson's blog

23 Feb 2015 marnanel   » (Journeyer)

Why I hate Valentine's day

In answer to someone complaining about people complaining about Valentine's ( http://catvalente.livejournal.com/434149.html?page=3 ):

I don't *want* to take happiness away from anyone who's happy on Valentine's day-- why would I want to take happiness away from other people? Good luck to them! But *I* hate Valentine's day because it reminds me of the years and years of Valentine's days filled with loneliness and despair, and if I allow myself to think about it, I'll fall apart. I suppose "triggering" is the word I'm looking for. Maybe one day I'll get over that, and I really don't like being this bitter, but for now I hate Valentine's day because of what it does to me. Every. Single. Year.

This entry was originally posted at http://marnanel.dreamwidth.org/328960.html. Please comment there using OpenID.

Syndicated 2015-02-23 15:25:25 from Monument

23 Feb 2015 bagder   » (Master)

Bug finding is slow in spite of many eyeballs

“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way, if we just fund the eyeballs to watch. I don’t think that’s what the saying originally meant.)

Because in reality, many, many bugs are never really found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years, it doesn’t necessarily mean that any of the people who casually read the code or single-stepped over it will ever discover the flaws in the logic. In the last few years, several world-shaking bugs turned out to have existed for decades before being discovered. In code that had been read by lots of people – over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are probably the most important bugs we’ve had, but also because those are the ones we have the most carefully noted meta-data for. Like when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far throughout our history, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they survived about 30 releases each. If bugs truly are shallow, it is still certainly not a fast process.
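The averages quoted here are straightforward to reproduce from per-flaw metadata. A minimal Python sketch of the arithmetic, using made-up dates rather than the actual curl vulnerability records:

```python
from datetime import date

# Hypothetical (introduced, fixed) dates -- illustrative only, not the
# real curl vulnerability table this post is based on.
vulns = [
    (date(2005, 3, 1), date(2011, 6, 1)),
    (date(2008, 1, 15), date(2013, 2, 10)),
    (date(2010, 7, 1), date(2014, 9, 1)),
]

# Lifetime of each flaw in days, and the overall average.
ages = [(fixed - introduced).days for introduced, fixed in vulns]
average_age = sum(ages) / len(ages)
print(ages, round(average_age))
```

With real report, introduction and fix dates in place of the placeholders, the same few lines yield the 2100-day figure.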

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters that would explain the time they took to get found? Nope. I would say that every single one of them is pretty obvious once you spot it, and none of them takes very long for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs stay around longer before they are found, but since the project is also growing older all the time it sort of comes naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of flaws is growing more slowly than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in the number of lines of code, and yet the reports were submitted like this (now the oldest is on the left and the most recent on the right – click for the large version):

vuln-trend

Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013, and half of all our reports arrived during the last two years. Clearly we have gotten more eyes on the code, or perhaps users pay more attention to problems or are generally more likely to see the security angle of problems? It is hard to say, but clearly the frequency of security reports has increased a lot lately. (Note that I count the report year here, not the year we announced the particular problems, as announcements were sometimes made the following year when a report arrived late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; during the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked each flaw with a bunch of descriptive keywords, and then wrote up a script to see how frequently the keywords are used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords ended up like this, showing the number of flaws and the keyword:

9 TLS
9 HTTP
8 cert-check
8 buffer-overflow

6 info-leak
3 URL-parsing
3 openssl
3 NTLM
3 http-headers
3 cookie

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a mistake that sneaks in, and I don’t think the fact that 27% of the problems were buffer overflows tells us that this is a problem we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.
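A tally like the one above takes only a few lines. A Python sketch of the counting (the per-flaw keyword sets below are made up for illustration, not the project's real tagging):

```python
from collections import Counter

# Made-up keyword tags per flaw; the real tagging lives in the
# project's vulnerability notes. This only illustrates the tallying.
flaw_keywords = [
    ["TLS", "cert-check"],
    ["HTTP", "buffer-overflow"],
    ["TLS", "info-leak"],
    ["HTTP", "TLS", "cookie"],
]

# Flatten the per-flaw lists and count keyword occurrences.
counts = Counter(kw for kws in flaw_keywords for kw in kws)
for keyword, n in counts.most_common(10):
    print(n, keyword)
```

`most_common(10)` prints the count-then-keyword pairs in the same descending order as the list in the post.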

Syndicated 2015-02-23 06:39:56 from daniel.haxx.se

23 Feb 2015 mikal   » (Journeyer)

Oakey trig

I've got to say, this trig was disappointing. It was a lunch time walk, so a bit rushed, but the trig was just boring. Not particularly far, or particularly steep, or in a particularly interesting area. That said, it wasn't terrible. It just felt generic compared with other trigs I've walked to.

         

Interactive map for this route.

Tags for this post: blog pictures 20150223-oakey_trig photo canberra tuggeranong bushwalk trig_point
Related posts: Big Monks; A walk around Mount Stranger; Forster trig; Two trigs and a first attempt at finding Westlake; Taylor Trig; Urambi Trig


Syndicated 2015-02-22 20:50:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

22 Feb 2015 dmarti   » (Master)

Reactions from developers

When I explain the whole Targeted Advertising Considered Harmful thing to software developers who work in adtech, I keep expecting a "well, actually" from somebody. After all, the Lumascape is large so there's no way the general points I'm bringing up can possibly apply to every single company on the chart.

#NotAllAdtech, right?

Instead, I've been getting two main reactions from developers.

  • You're right, adtech is a racket, I'm surprised that clients and publishers put up with it.

  • You're missing something—another really messed-up thing about adtech is...

(example: The problem with anti-fraud measures so far is that their impact falls hardest on small legit publishers. Not only does adtech move ad revenue away from sites with real users toward fraudulent ones, but when networks attempt to stop it, they hurt the legit sites worse.)

Anyway, ad agency clients (not just CEOs) go read What Every CEO Needs To Know About Online Advertising by Bob Hoffman.

Web publishers, watch this space.

Syndicated 2015-02-22 16:03:49 from Don Marti

28 Feb 2015 shlomif   » (Master)

“Out of the Strong, Something Sweet” - How a Bug Led to a Useful Optimisation

The book Fortune or Failure: Missed Opportunities and Chance Discoveries (which my family used to own, but which I did not read) makes the case for the important role of luck and chance in scientific discoveries. Recently, when working on Project Euler Problem No. 146, I ran into a case where an accidental bug in turn led to an idea for a significant optimisation.

The C code with the bug (which was in turn translated from some Perl code) looked something like this:

#include <math.h>     /* sqrt() */
#include <stdbool.h>  /* bool */

#define DIV 9699690
#define NUM_MODS 24024
#define NUM_PRIMES 8497392

int primes[NUM_PRIMES];
int mods[NUM_MODS];

typedef long long LL;

static inline bool is_prime(LL n)
{
    LL lim = (LL)(sqrt(n));

    for (int p_idx=0; p_idx < NUM_MODS ; p_idx++)
    {
        typeof (primes[p_idx]) p = primes[p_idx];
        if (p > lim)
        {
            return true;
        }
        if (n % p == 0)
        {
            return false;
        }
    }
    return true;
}

.
.
.
            for (int y_idx=0;y_idx<sizeof(y_off)/sizeof(y_off[0]);y_idx++)
            {
                if (! is_prime(sq + y_off[y_idx]))
                {
                    goto fail;
                }
            }
            for (int n_idx=0;n_idx<sizeof(n_off)/sizeof(n_off[0]);n_idx++)
            {
                if (is_prime(sq + n_off[n_idx]))
                {
                    goto fail;
                }
            }

As you may eventually notice, the problem was that in the p_idx loop, NUM_MODS should have been the larger NUM_PRIMES. This caused the primality test to finish faster, but to sometimes return true instead of false. As a result, some numbers were erroneously reported as suitable, but the program finished much faster.

I corrected it and reran the program, which was now much slower, but this led me to think that maybe trial division by only the smaller set of primes could serve as a quick pre-filter for the “y_idx”/“y_off” numbers: it runs quickly and eliminates most candidates. As a result, I did this:

#define NUM_PRIMES__PRE_FILTER 24024

static inline bool is_prime__pre_filter(LL n)
{
    LL lim = (LL)(sqrt(n));

    for (int p_idx=0; p_idx < NUM_PRIMES__PRE_FILTER ; p_idx++)
    {
        typeof (primes[p_idx]) p = primes[p_idx];
        if (p > lim)
        {
            return true;
        }
        if (n % p == 0)
        {
            return false;
        }
    }
    return true;
}

.
.
.
            for (int y_idx=0;y_idx<sizeof(y_off)/sizeof(y_off[0]);y_idx++)
            {
                if (! is_prime__pre_filter(sq + y_off[y_idx]))
                {
                    goto fail;
                }
            }
            for (int y_idx=0;y_idx<sizeof(y_off)/sizeof(y_off[0]);y_idx++)
            {
                if (! is_prime(sq + y_off[y_idx]))
                {
                    goto fail;
                }
            }
            for (int n_idx=0;n_idx<sizeof(n_off)/sizeof(n_off[0]);n_idx++)
            {
                if (is_prime(sq + n_off[n_idx]))
                {
                    goto fail;
                }
            }

This made the program finish in under a minute, while yielding the correct solution. The original program, with the bug fix, was still running after several minutes.

So the bug proved to be useful and insightful. One possible future direction is to merge the two “y_idx” loops into a single function that will accept an array of numbers, and will check them all for primality using the same divisors simultaneously, so as soon as one of them is found to be non-prime, a verdict will be reached.
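The two-stage idea generalises beyond this particular program. A Python sketch of the same technique (this is an illustration, not the author's code; the table sizes and names are made up):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(100_000)   # full divisor table
SMALL = PRIMES[:50]              # cheap pre-filter divisors

def trial_division(n, divisors):
    """True if no divisor <= sqrt(n) is found in the given list."""
    lim = math.isqrt(n)
    for p in divisors:
        if p > lim:
            return True
        if n % p == 0:
            return False
    return True

def is_prime(n):
    # Cheap pass first: most composites are rejected by the small
    # divisor set, so the expensive full scan runs only on survivors.
    return trial_division(n, SMALL) and trial_division(n, PRIMES)
```

The win comes from the same asymmetry the bug exposed: most composites have a small prime factor, so the short divisor list settles them without ever touching the full table.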

Licence

You can reuse this entry under the Creative Commons Attribution Noncommercial 3.0 Unported licence, or at your option any later version. See the instructions on how to comply with it.

Syndicated 2015-02-26 18:23:14 from shlomif

22 Feb 2015 mikal   » (Journeyer)

Geocaching

I've been trapped at home with either a sick child or a sick me for the last four or five days. I was starting to go a bit stir crazy, so I ducked out for some local geocaching. An enjoyable shortish walk around the nearby nature park.

Interactive map for this route.

Tags for this post: blog canberra tuggeranong bushwalk geocaching
Related posts: Another lunch time walk; Lunchtime geocaching; Big Monks; Confessions of a middle aged orienteering marker; Geocaching in the evening, the second; Geocaching in the evening


Syndicated 2015-02-22 01:14:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

22 Feb 2015 Hobart   » (Journeyer)

Atari 8-bit "Archimedes Spiral" demo - Found again!

Sometimes you stumble upon what you were looking for by accident ...

When I was 9 or 10 years old, I didn't have a modem, much less access to the Internet. The few computer magazines I had, I read over and over - and would have to type in games from program listings. I remembered typing in a BASIC program full of complicated math I didn't understand. The resulting program would take hours to run, but produced an impressive 3-D wireframe image. (With hidden line removal!)

7 years ago (mid-2008) I decided to poke around the Internet and ask in various places if anyone had seen it ... with no luck.

I had a bit of luck a year later, and posted my findings here on LiveJournal.

Today I was reading through some .PDFs of old Atari magazines, not even thinking of this, when lo-and-behold, there was the article. Hazzoo-huzzah! It turned out not to be MACE Journal or Compute, but a 1982 issue of ANALOG Computing - #7, the one with the awesome Blade-Runner inspired cover art. Many thanks to Charles Bachand, and editor Lee Pappas for the article!

I wonder if Charles is reachable... and if he remembers where he got the code for the demo... The image I found before (in a Commodore ad) appears in Compute! issue 12 from May 1981 ... the ad is from Micro Technology Unlimited ... and that same issue has a screen-dump utility by that company's employee, Martin J. Cohen, Ph.D. who is the author of their Keyword Graphics Package. Hmm! (Neat: in that issue he thanks Gregory Yob for help in part of his code!)

Those with too much time on their hands are encouraged to look at the issue on Internet Archive - A.N.A.L.O.G. Computing magazine, issue 7 (1982) pp60-61. (Thanks to Brewster Kahle, Jason Scott, and others for their work there!)

A.N.A.L.O.G. Computing magazine, issue 7 (1982) pp60-61
NON-TUTORIAL VI
by Charles Bachand
HATS OFF TO ATARI!!

This article contains a graphics program called "Archimedes Spiral". The program, although quite short, takes nearly three hours to run! This is definitely not a quick demo. (To produce the transparent version of the spiral, delete line 240.)(It still looks like a hat to me. Ed.)

100 REM ARCHIMEDES SPIRAL
110 REM 
120 REM ANALOG MAGAZINE
130 REM 
140 GRAPHICS 8+16:SETCOLOR 2,0,0
150 XP=144:XR=4.71238905:XF=XR/XP
160 FOR ZI=-64 TO 64
170 ZT=ZI*2.25:ZS=ZT*ZT
180 XL=INT(SQR(20736-ZS)+0.5)
190 FOR XI=0-XL TO XL
200 XT=SQR(XI*XI+ZS)*XF
210 YY=(SIN(XT)+SIN(XT*3)*0.4)*56
220 X1=XI+ZI+160:Y1=90-YY+ZI
230 TRAP 250:COLOR 1:PLOT X1,Y1
240 COLOR 0:PLOT X1,Y1+1:DRAWTO X1,191
250 NEXT XI:NEXT ZI
260 GOTO 260

It would be so much simpler if you could hand out a hardcopy of the graphics to demonstrate your prowess with the computer. Your friends will be doing cartwheels and going hazoo-huzzah over your printing expertise. (Hazoo-huzzah?! Ed.)

(Ed. Note: No one here at A.N.A.L.O.G is responsihle for Charlie's state of mind when he writes these non-tutorials. Just thought you people would like to know.)


Syndicated 2015-02-22 05:02:56 from jon's blog

21 Feb 2015 teknopup   » (Apprentice)

I posted my parts list and initial design for a Raspberry Pi 2 powered VR headset.

http://tawhakisoft.com/tekn-vr-headset.html

21 Feb 2015 mikal   » (Journeyer)

Command and Control




ISBN: 9780141037912
LibraryThing
I finished this book a while ago and it appears that I forgot to write it up. This book is by the author of Fast Food Nation and it is just as good as his other book. The history of America's nuclear weapons and their security (or lack thereof) is as compelling as it is terrifying. I found this book hard to put down while reading it, and would recommend it to others.

Tags for this post: book eric_schlosser nuclear weapons safety
Related posts: Random linkage; Fast Food Nation; Starfish Prime; Why you should stand away from the car when the cop tells you to; Random fact for the day; More nuclear bunkers



Syndicated 2015-02-20 20:27:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

21 Feb 2015 mikal   » (Journeyer)

Forster trig

It's been too long since I've attempted a trig walk -- 15 days to be exact. That's mostly because I've been really busy at work these last couple of weeks. That said, it was time for another trig, and this one was a bit of an adventure.

Forster Trig is in the Bullen Nature Reserve and is one of the least urban trigs I've attempted so far, which is why this post is a bit more detailed than normal. Big Monks is probably the other trig walk most similar to this one. One of the challenges with this trig is that there is no track to the trig point. Reading John Evan's walk notes from his single ascent of this trig, it seems that many people follow the 132kV power lines to the trig, but I consider this "cheating" as the power line is on private land and I didn't want to spend effort on getting permission to walk on someone's farm.

Instead, I followed the Kambah Pool to Cassurina Sands track, and then turned right to bush bash to the trig when I got reasonably close. There wasn't any formed track this way, so I don't think this is a common approach. On the map you'll notice a fence marked -- that's where I had to jump a barbed wire fence, which wasn't the best plan ever. On the way back down from the summit I found a vehicle track, and I'd recommend that others follow that route (the one on the map with two gates marked and some stairs). The stairs are interesting -- a previous walker has mounded stones on both sides of the fence to make it easier to cross.

Either way, it's a bush bash up the hill itself, which is covered in reasonably dense spiky vegetation. You're going to want gaiters or long pants.

             

Interactive map for this route.

Tags for this post: blog pictures 20150220-forster_trig photo canberra tuggeranong bushwalk trig_point
Related posts: Big Monks; A walk around Mount Stranger; Two trigs and a first attempt at finding Westlake; Taylor Trig; Urambi Trig; Walk up Tuggeranong Hill


Syndicated 2015-02-20 16:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

20 Feb 2015 slef   » (Master)

Rebooting democracy? The case for a citizens constitutional convention.

I’m getting increasingly cynical about our largest organisations and their voting-centred approach to democracy. You vote once, for people rather than programmes, then you’re meant to leave them to it for up to three years until they stand for reelection and in most systems, their actions aren’t compared with what they said they’d do in any way.

I have this concern about Cooperatives UK too, but then its CEO publishes http://www.uk.coop/blog/ed-mayo/2015-02-18/rebooting-democracy-case-citizens-constitutional-convention and I think there may be hope for it yet. Well worth a read if you want to organise better groups.

Syndicated 2015-02-20 04:03:00 from Software Cooperative News » mjr

19 Feb 2015 mjg59   » (Master)

It has been 0 days since the last significant security failure. It always will be.

So blah blah Superfish blah blah trivial MITM everything's broken.

Lenovo deserve criticism. The level of incompetence involved here is so staggering that it wouldn't be a gross injustice for the company to go under as a result[1]. But let's not pretend that this is some sort of isolated incident. As an industry, we don't care about user security. We will gladly ship products with known security failings and no plans to update them. We will produce devices that are locked down such that it's impossible for anybody else to fix our failures. We will hide behind vague denials, we will obfuscate the impact of flaws and we will deflect criticisms with announcements of new and shinier products that will make everything better.

It'd be wonderful to say that this is limited to the proprietary software industry. I would love to be able to argue that we respect users more in the free software world. But there are too many cases that demonstrate otherwise, even where we should have the opportunity to prove the benefits of open development. An obvious example is the smartphone market. Hardware vendors will frequently fail to provide timely security updates, and will cease to update devices entirely after a very short period of time. Fortunately there's a huge community of people willing to produce updated firmware. Your phone manufacturer is never going to fix the latest OpenSSL flaw? As long as your phone can be unlocked, there's a reasonable chance that there's an updated version on the internet.

But this is let down by a kind of callous disregard for any deeper level of security. Almost every single third-party Android image is either unsigned or signed with the "test keys", a set of keys distributed with the Android source code. These keys are publicly available, and as such anybody can sign anything with them. If you configure your phone to allow you to install these images, anybody with physical access to your phone can replace your operating system. You've gained some level of security at the application level by giving up any real ability to trust your operating system.

This is symptomatic of our entire ecosystem. We're happy to tell people to disable security features in order to install third-party software. We're happy to tell people to download and build source code without providing any meaningful way to verify that it hasn't been tampered with. Install methods for popular utilities often still start with "curl | sudo bash". This isn't good enough.

We can laugh at proprietary vendors engaging in dreadful security practices. We can feel smug about giving users the tools to choose their own level of security. But until we're actually making it straightforward for users to choose freedom without giving up security, we're not providing something meaningfully better - we're just providing the same shit sandwich on different bread.

[1] I don't see any way that they will, but it wouldn't upset me

comment count unavailable comments

Syndicated 2015-02-19 19:43:04 from Matthew Garrett

19 Feb 2015 amits   » (Journeyer)

This Saturday: Fedora 21 Release Party in Pune

Fedora Ambassadors from Pune are hosting the F21 release party at the MIT COE, on Saturday, 21st Feb, from 10:30.  Details on the party page.

It’s been a while since F21 released, but with the FUDCon preparations + planning and travel of the ambassadors for conferences, hosting the release party was delayed.

This is also a good opportunity for us to visit MIT COE, the FUDCon venue, and interact with the folks there and prepare them for what’s coming in June.

PS: CFP for the FUDCon is open as well!

Syndicated 2015-02-19 07:04:24 from Think. Debate. Innovate.

18 Feb 2015 AlanHorkan   » (Master)

OpenRaster Python Plugin

OpenRaster Python Plugin

Early in 2014, version 0.0.2 of the OpenRaster specification added a requirement that each file should include a full size pre-rendered image (mergedimage.png) so that other programs could more easily view OpenRaster files. [Developers: if your program can open a zip file and show a PNG you could add support for viewing OpenRaster files.*]
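As a rough illustration of how little a viewer needs, a Python sketch that treats an OpenRaster file as the zip archive it is and pulls out mergedimage.png (the placeholder bytes below stand in for real PNG data):

```python
import io
import zipfile

def read_merged_image(path_or_file):
    """Return the raw bytes of the pre-rendered mergedimage.png that
    OpenRaster spec 0.0.2 requires every .ora file to contain."""
    with zipfile.ZipFile(path_or_file) as ora:
        return ora.read("mergedimage.png")

# Build a toy .ora-like archive in memory to demonstrate; a real file
# would hold actual PNG data plus the layer stack.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as ora:
    ora.writestr("mimetype", "image/openraster")
    ora.writestr("mergedimage.png", b"\x89PNG placeholder")

print(read_merged_image(buf))
```

A real viewer would hand the returned bytes to whatever PNG decoder it already uses, which is the whole point of the mergedimage.png requirement.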

The GNU Image Manipulation Program includes a Python plugin for OpenRaster support, but it did not yet include mergedimage.png, so I made the changes myself. You do not need to wait for the next release, or for your distribution to eventually package that release; you can benefit from this change immediately. If you are using the GNU Image Manipulation Program version 2.6 you will need to make sure you have support for Python plugins included in your version (if you are using Windows you won't), and if you are using version 2.8 it should already be included. (If the link no longer works, see instead https://gitorious.org/openraster/gimp-plugin-file-ora/ as I hope the change will be merged there soon.)

It was only a small change, but working with Python and not having to wait for code to compile made it so much easier.

* Although it would probably be best if viewer support was added at the toolkit level, so that many applications could benefit.

Syndicated 2015-02-18 19:14:34 from Alan Horkan

18 Feb 2015 olea   » (Master)

Changes at the FNMT-RCM CA? Testing the OCSP server

Following a Twitter thread about the nonexistence of a CRL service for the FNMT Certification Authority, the only thing I could think of was that, instead of attending to my responsibilities, I simply had to find out what was brewing... The most striking news is the recent activity on the infamous entry #435736 in Mozilla's bugzilla (open since 2008, asking Mozilla to accept the FNMT CA root certificate). It seems that someone is finally working to resolve the problem. Among the juiciest gems collected there is a document titled General Certification Practices Statement, which includes details such as, precisely, the CRL publication URIs and the several OCSP services apparently available. If you have previously studied the use of CERES-FNMT, you have probably raised an eyebrow. Yes: it seems they are finally enabling these services. Those who don't know the cause of our surprise should know that, until now, the CERES FNMT certificate validation service for end-user certificates (for example, those of ordinary citizens) has been a paid service (see illustration). For some, this has been one more of the factors that have held back the adoption of digital signatures in Spain.

Among the details, the URIs of the OCSP services caught my attention:

and of course I immediately wanted to check whether they were already operational, with the sad circumstance that I had no idea how to do it. After some research, and with some opportunistic copy-and-paste, I arrived at a command that I believe works:

openssl ocsp -issuer AC_Raiz_FNMT-RCM_SHA256.cer -serial 0x36f11b19 -url http://ocspfnmtrcmca.cert.fnmt.es/ocspfnmtrcmca/OcspResponder -CAfile AC_Raiz_FNMT-RCM_SHA256.cer 
Where:
  • -issuer AC_Raiz_FNMT-RCM_SHA256.cer is the root certificate of the CA in question;
  • -serial 0x36f11b19 is the serial number of an FNMT certificate issued in 2005 and long since expired;
  • -url http://ocspfnmtrcmca.cert.fnmt.es/ocspfnmtrcmca/OcspResponder is the URI of the OCSP service we will use; in this case I chose the one listed above as «ROOT AC. Access» because it seemed the most general one for the service for individuals, compared with the other two;
  • -CAfile AC_Raiz_FNMT-RCM_SHA256.cer — I don't really know why I should use this parameter; I understand it is there to verify the result returned by the OCSP service, and all the examples I found use it in some way. Curiously, I only managed to eliminate the error messages by using the same certificate as in -issuer, but I don't know whether that is the correct behaviour or whether it works this way here because it is a self-signed root certificate.

The result obtained is the following:

Response verify OK
0x36f11b19: good
    This Update: Nov 18 12:11:20 2014 GMT
    Next Update: May 17 11:11:20 2015 GMT

And you might say, «well, that's good, isn't it?». Or not. I don't know. I'm not familiar with the inner workings of the OCSP protocol, but I expected a different response for a certificate that expired more than eight years ago. In any case, the service is indeed up, and we can see more details using the -text option of openssl ocsp:

OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = ES, O = FNMT-RCM, OU = AC RAIZ FNMT-RCM, CN = SERVIDOR OCSP AC RAIZ FNMT-RCM
    Produced At: Feb 18 16:27:29 2015 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: BADF8AE3F7EB508C94C1BAE31E7CDC3A713D4437
      Issuer Key Hash: F77DC5FDC4E89A1B7764A7F51DA0CCBF87609A6D
      Serial Number: 36F11B19
    Cert Status: good
    This Update: Nov 18 12:11:20 2014 GMT
    Next Update: May 17 11:11:20 2015 GMT

The thing is, I have tried random variants of the serial number, as well as serial numbers of certificates still in force, and it always answers «good». And the little that someone more familiar with CA technology could tell me is that this kind of behaviour would be normal for an OCSP service.
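For scripted checks, the status line can be picked out of the openssl output mechanically. A small hypothetical helper (not from the original post), fed with the response text quoted above:

```python
import re

def ocsp_statuses(text):
    """Map serial -> status from `openssl ocsp` output lines such as
    '0x36f11b19: good'. Field layout may vary across openssl versions."""
    return dict(re.findall(r"^(0x[0-9a-fA-F]+): (good|revoked|unknown)$",
                           text, flags=re.MULTILINE))

# The response text quoted earlier in this post.
sample = """Response verify OK
0x36f11b19: good
    This Update: Nov 18 12:11:20 2014 GMT
    Next Update: May 17 11:11:20 2015 GMT
"""
print(ocsp_statuses(sample))
```

Wrapping the actual openssl invocation in subprocess and parsing its output this way would let a monitoring script flag when the responder starts answering something other than «good».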

Questions:

  • The FNMT OCSP service, or at least the one I used, is up, yes, but is it really operational yet?
  • Is my way of invoking it from openssl correct? I'm not sure.

Other conclusions:

I would say that the FNMT does indeed seem to have gotten serious about establishing itself as a proper certification authority. At last. I suppose the pressure has come from at least the public-sector corporate users, who lately have been issuing their X509 server certificates through Camerfirma (for example the Agencia Tributaria), presumably tired of less experienced users getting confused by the process of installing the right root certificate and of misreading the browsers' warning messages. It also seems they are starting to drop the name Ceres when referring to the service. At least that was my impression.

If anyone spots errors in what is shown here, I will be happy to correct whatever is needed.





#X509 #SSL #FNMT #CERES #OCSP


Syndicated 2015-02-18 22:05:00 (Updated 2015-02-18 18:07:10) from Ismael Olea

18 Feb 2015 bagder   » (Master)

HTTP/2 talk on Packet Pushers

I talked with Greg Ferro on Skype on January 15th. Greg runs the highly technical and nerdy network-oriented podcast Packet Pushers. We talked about HTTP/2 for well over an hour and went through a lot of stuff about the new version of the most widely used protocol on the Internet.

Listen or download it.

Very suitably published today, the very day the IESG approved HTTP/2.

Syndicated 2015-02-18 11:57:36 from daniel.haxx.se

18 Feb 2015 mikal   » (Journeyer)

Confessions of a middle aged orienteering marker

I was an orienteering marker for the kid's scout troop tonight -- I guess it could have been a trick, but I think they were genuine. The basic idea was I went and stood at where the mark on the map was, and then noted which kids found me. Nice little hill in MacArthur, with pleasant views. I think I've found a good place for a geocache as well.

Interactive map for this route.

Tags for this post: blog canberra bushwalk tuggeranong
Related posts: Big Monks; Point Hut Cross to Pine Island; A walk around Mount Stranger; Another lunch time walk; Two trigs and a first attempt at finding Westlake; Taylor Trig


Syndicated 2015-02-18 02:59:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

18 Feb 2015 Pizza   » (Master)

Progress on the Shinko S1245, S6145, and S6245

A few weeks ago, a kind gentleman at Sinfonia sent me a pile of documentation on their S1245, S6145, and S6245 printers.

The S6145 and S6245 use a similar command language to the S2145, but the S1245 is quite different. So I decided to start with the latter, and created a new backend for it. It's now complete, but needs testing.

Support for the S6245 will probably follow, likely added into the existing S2145 backend as most of their code will be shared.

Unfortunately, the S6145 is another matter. While its command language is quite similar to the S2145, it has some peculiar data format requirements.

While the spool data is packed 8-bit RGB, the printer driver (aka our backend) is expected to convert it to 16-bit planar YMC+L data. That is easy enough to accomplish, except the data also needs to be massaged via an unknown algorithm combined with an opaque data blob that the printer supplies.
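The straightforward half of that conversion (ignoring the unknown massaging step) can be sketched as follows. The subtractive channel inversion and the 8-to-16-bit scaling here are my assumptions, not anything from Sinfonia's documentation:

```c
#include <stdint.h>
#include <stddef.h>

/* Convert packed 8-bit RGB pixels to 16-bit planar YMC.
 * The channel complement (Y = 255 - B, etc.) and the *257 widening are
 * guesses at a sensible mapping; the real S6145 path additionally runs
 * the data through the printer-supplied transform, which is unknown. */
static void rgb8_to_ymc16_planar(const uint8_t *rgb, uint16_t *ymc,
                                 size_t npixels)
{
    uint16_t *y = ymc;                 /* yellow plane  */
    uint16_t *m = ymc + npixels;       /* magenta plane */
    uint16_t *c = ymc + 2 * npixels;   /* cyan plane    */

    for (size_t i = 0; i < npixels; i++) {
        uint8_t r = rgb[3 * i + 0];
        uint8_t g = rgb[3 * i + 1];
        uint8_t b = rgb[3 * i + 2];
        /* subtractive complement, widened from 8 to 16 bits */
        y[i] = (uint16_t)((255 - b) * 257);
        m[i] = (uint16_t)((255 - g) * 257);
        c[i] = (uint16_t)((255 - r) * 257);
    }
}
```

A real backend would then feed these planes through the printer-supplied transform before spooling.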

If this sounds familiar, it's because that sounds eerily similar to what the Mitsubishi K60/D70/D707/D80 printers require, complete with a file providing the raw lamination data and a pile of tabular data that feeds into the transformation algorithm. This is strong evidence that the S6145, the CIAAT Brava 21, Kodak 305, and those Mitsubishi models all use the same basic print engine.

The Sinfonia rep wasn't able to provide any further details on the algorithm, though he did provide a set of binary x86 and x86_64 libraries that perform the necessary transformations. So it's a sort of bad news, good news situation.

Anyway. At this point, the S1245 backend is ready for testing, and since I can't justify buying yet another high-end photo printer, that means I'll need a volunteer to test this stuff out.

In the meantime, I'll probably work on support for the S6245, which will also eventually need testing. Then I'll move on to the S6145, get the core backend in place, then teach myself some x86_64 assembly and get to reverse-engineering the necessary algorithms and maybe eventually get somewhere.

So, does anyone have a spare S1245, S6245, and/or S6145 printer to toss my way? It's for a good cause!

Syndicated 2015-02-18 04:17:51 from Solomon Peachy

17 Feb 2015 mikal   » (Journeyer)

Little Black Mountain

I went on a walk on Monday with the Canberra Bushwalking Club up Little Black Mountain. It's a nice area and I mostly enjoyed the walk. I say mostly because the walk leader was quite unwelcoming. There was the lecture about emergency beacons, and then the lecture about how he's never been bitten by a snake. It was quite an odd experience. I think I might avoid that leader in the future.

Interactive map for this route.

Tags for this post: blog canberra bushwalk
Related posts: Big Monks; Cooleman and Arawang Trigs; Point Hut Cross to Pine Island; A walk around Mount Stranger; Another lunch time walk; Two trigs and a first attempt at finding Westlake


Syndicated 2015-02-17 14:21:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

17 Feb 2015 dmarti   » (Master)

Picking the next end-user security tool

Malvertising is a thing on the Internet now. Ad fraud meets data leakage meets malware.

One way or another, some kind of tracking protection tool is going to join the basic recommended list of security software for regular users. Firewall, check. Virus checker, check. Tracking protection, check.

The question is whether the anti-malvertising slot on the shopping list will be filled with a problematic and coarse-grained ad blocker, or with a publisher-friendly tracking protection tool such as Disconnect or the built-in tracking protection in Firefox.

What's the difference, and why does it matter?

Tracking protection tools and AdBlock Plus will each let some ads through. However, AdBlock Plus uses the concept of "acceptable ads", which is broken for modern web designs.

For pages featuring a reading text, ads should not be placed in the middle, where they interrupt the reading flow. However, they can be placed above the text content, below it, or on the sides.

So a nice-looking design like Quartz does not have "acceptable" ads because the ads there can appear when scrolling a long article, but a crap-ass legacy CMS that splits a shorter article into 9 pages is A-OK.

More importantly, targeted third-party ads can buy into the "acceptable" program too, which does nothing for improving the value of the medium.

This is where the IT media can influence, not just observe.

  • The more that you write about tracking protection tools other than ad blockers, the more users will get them, and the better that business becomes for content sites, including the ones that pay you.

  • The less attention you pay to the issue, the more users are likely to switch to a "dumb" ad blocker, and the more that web ads slide into a no-win struggle like email spam/anti-spam.

(More on the web ad problem)

Syndicated 2015-02-17 15:50:26 from Don Marti

17 Feb 2015 amits   » (Journeyer)

FUDCon Pune 2015 Planning Meeting Minutes – 17 Feb

We had a good productive meeting today in #fedora-india as well as on phone + in-person at the Red Hat Pune office.

We used the Etherpad at http://piratepad.net/FUDConPunePlanning for keeping notes.

Highlights are:

  • We’ll soon add a code of conduct / anti-harassment policy for the event
  • Outreach: everyone focussed on spreading the CFP message
  • Current logo draft looks good, minor tweak suggestions to be put on ticket
  • FUDPub venues being evaluated, but we’re getting a good deal from one of them

The entire minutes are appended below.

Last meeting : http://piratepad.net/fudcon-pune-planning-20150203

Agenda + Minutes
-----------
 * Creating FAQ for FUDCon India 2015
   * Kushal to draft it
   * Kushal has a list of Q
     * Amit will get creative with answers.
 * Code of Conduct / Anti-harassment policy
   * In the works; mail sent to ambassadors@
   * Siddhesh + Rupali
   * draft based on Ryan Lerch's Flock planning content + linux.conf.au + Linux Foundation texts
    * Every other conference is using PSF/PyCon's CoC (info)
   * (idea) One person should be listed as contact as go-to person for violations
     * have one email + one person on-site?
       * Rupali nominated! (+4)
 * The end date of CFP, March 9.
   * I think it is early for Fudcon as the conf is in last week of June
   * 2 months for Visa and travel plans and 3 weeks for confirming the talks = around 3 months should be fine (Kushal: Having a strict date will give organisers enough time to handle the schedule properly.)
   * Possible negatives: tickets get costly; visa processing takes time; people have to make plans at short notice, etc.
   * Let's see the response on 9th march, and then decide on extension?
     * We can perhaps give leeway for intl speakers who missed cfp deadline
     * Ok. works for me
   * Fudcon date is very near to RH summit. People attending RH summit can't attend Fudcon Pune
     * Unfortunate; can't alter this now.
     * People typically who attend Summit are not our audience / target speakers etc.?
       * There is some overlap, but it is small. Maybe around 10 or so
 * Outreach
   * http://piratepad.net/FudCon-outreach-list
     * this is for industry + mailing lists (communities)
     * we need help here with more lists + more volunteers to do the outreach.
   * http://piratepad.net/FUDCon-College-Outreach
     * this is for colleges / educational institutes
     * separate cfp needed because we need to mention open source in education here -- we could have a track for professors / teachers here to get them together and discuss problems specific to their area.
   * For Outreach, we can do this offline based on both the etherpads.
   * Please think of more companies which deal with RH / CentOS / Fedora; reach out for CFP.
   * Video series
     * Videos from FPL (Matthew Miller), jsmith, Kushal, Parag, Rahul, Joerg, etc. -- extolling the virtues of FUDCon + Pune
       * Let's ask Nitesh for this
 * Website
   * http://fudcon.in website up
     * just need to figure out the openid thing now
       * Praveen + Siddhesh.
   * Finalize CfP text  http://piratepad.net/YaC3hNcOZ8 (done)
   * Graphics status update?
     * Two draft logos ready - Suchakra + jurankdankkal (Fedora-Indonesia)
     * We need to give feedback on the ticket: https://fedorahosted.org/design-team/ticket/352
     * Latest Draft: https://jurankdankkal.fedorapeople.org/FUDCon/FUDConPune2015/logo.svg
       * This logo looks good to several people; minor tweaks are needed.
       * Looks good with small + big sizes - fit for website logo + banners.
       * Soni suggests adding flag. (+1 amit)
       * Currently it's a choice between this one and the older logo from FUDCon Pune 2011.
   * SSL support?  Asked Saleem about it
     * We're still figuring this out.
 * Wiki
   * https://fedoraproject.org/wiki/FUDCon:Pune_2015
   * Help editing this wiki - content + categories.
 * Accommodation (done)
   * We negotiated good rates with Cocoon (INR 3000 + taxes)
   * Double-occupancy, Free wifi, breakfast
 * Travel updates? (after CFP closes + speaker selection)
 * Marketing
   * Fedora magazine
     * CFP article out
     * One more CFP article at -1 week (1 March).
       * Amita
   * Make a list of MLs to post the CfP to
     * also reach out to ambassadors in apac for confirmation / planning
       * Siddhesh
     * send email to fudcon-planning about CFP
       * Siddhesh
   * Video update?
     * appear.in is good for sharing between a few people (for these meetings)
     * Pankaj has emailed kpoint folks (the ones who did video last time)
     * Last option will be to have a tiny webcam doing live Hangout -- advantage is it has auto-archival on youtube.
     * Kushal will check with hasgeek for video.
   * fudcon.fedoraproject.org
     * will need some trac ticket?
     * Siddhesh
   * Twitter
   * Facebook
   * Google Plus
   * LinkedIn group
     * Chandan to send out cfp blurbs on all these social sites [DONE].
 * Budget
   * Make and maintain a publicly visible sheet to track expenses?
     * I think we should have a wiki where we export the sheet in use. ethercalc lets anyone edit, which is not exactly ideal for a budget-tracking sheet. For people interested in handling budget, just contact the owner for budget acc. to the wiki
       * publicly visible, not editable :) So wiki would be fine
   * Sponsorship
     * Can we get partial funding for RH people?
       * This will come later -- after we get all others done and we have extra budget.

 * FUDPub
   * Rupali reached out to Venue1
   * Venue1
     * Space for 100 people
     * Reasonable (approx 1800 per person)
     * RH has relationship; payments are easier
     * Close to cocoon
     * No limitation on sound limits - a nice party can be had.
   * Rupali continuing to reach out to others
   * In any case, we can close this by next week.


 * Swag
   * Niranjan suggests some programmable arduino boards manufactured locally with our logos
     * About ₹750
     * http://embeddeddedmarket.com
     * http://simplelabs.com
   * Swag for Volunteers
     * tshirts
   * Swag for Organisers?
   * Swag for Speakers
     * Umbrellas (for sweet Pune rains)
   * Fedora badge for attendees?(added to the FAS account)

 * Mobile Application


Syndicated 2015-02-17 12:24:48 from Think. Debate. Innovate.

16 Feb 2015 mjg59   » (Master)

Intel Boot Guard, Coreboot and user freedom

PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.
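The shape of that check can be illustrated with a conceptual sketch. FNV-1a stands in here for the real hash function (SHA-256 in practice), and the signature verification over the boot block is stubbed out, since the actual Boot Guard logic is not public in this form:

```c
#include <stdint.h>
#include <stddef.h>

/* Conceptual sketch of the Boot Guard trust anchor: the CPU fuses hold
 * a hash of the vendor's public key rather than the key itself, so the
 * key can live in (replaceable) flash while the fuses stay tiny.
 * FNV-1a is a stand-in hash; real hardware uses SHA-256. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Returns 1 if the firmware's embedded key matches the fused hash and
 * its signature over the initial boot block verifies, 0 otherwise.
 * signature_ok is a stub for the real RSA verification step. */
static int boot_guard_check(uint64_t fused_key_hash,
                            const uint8_t *pubkey, size_t pubkey_len,
                            int signature_ok)
{
    if (fnv1a(pubkey, pubkey_len) != fused_key_hash)
        return 0;   /* embedded key was swapped: refuse to boot */
    return signature_ok;
}
```

The point of the two-step structure is that replacing either the key or the firmware body fails the check, while the vendor can still re-sign legitimate firmware updates.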

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.


Syndicated 2015-02-16 20:44:40 from Matthew Garrett

16 Feb 2015 joolean   » (Journeyer)

Transactional B-trees

The next version of gzochi is going to include a new storage engine implementation that keeps data entirely in memory while providing transactional guarantees around atomicity of operations and visibility of data. There are a couple of motivations for this feature. The primary reason is to prepare the architecture for multi-node operation, which, as per Blackman and Waldo's technical report on Project Darkstar, requires that the game server -- which becomes, in a distributed configuration, more like a task execution server -- maintain a transient cache of game data delivered from a remote, durable store of record. The other is to offer an easier mechanism for "quick-starting" a new gzochi installation, and to support users who, for political or operational reasons, prefer not to bundle or install any of the third-party databases that support the persistent storage engines.

That first motivation wouldn't bias me much in either direction on the build-vs-buy question; Project Darkstar's authors almost certainly planned to implement this using Berkeley DB's "private" mode (example here). However, gzochi is intentionally agnostic when it comes to storage technology. The database that underlies a storage engine implementation needs only to support serializably isolated cursors and reasonable guarantees around durability; requiring purely in-memory operation would be a heavy requirement. And I feel too ambivalent about Oracle to pin the architecture to what BDB supports, AGPL or no. (The Darkstar architects should have been a bit warier themselves.) So I settled on the "build" side of the balance. ...Although my first move was to look for some source code to steal. And I came up weirdly short. The following is a list of the interesting bits and dead ends I came across while searching for transactional B-tree implementations.

Some more specific requirements: There are two popular flavors of concurrency for the kind of data structure I wanted to build with the serializable level of transactional isolation I wanted to provide. Pessimistic locking requires that all access to the structural or data content of the tree by different agents (threads, transactions) be done under the protection of an explicit read or write lock, depending on the nature of the access. Optimistic locking often comes in the form of multi-version concurrency control, and offers each agent a mutable snapshot of the data over the lifetime of a transaction, mostly brokering resolutions to conflicts only at commit time. Each approach has its advantages: MVCC transactions never wait for locks, which usually makes them faster. Pessimistic locking implementations typically detect conflicting access patterns earlier than optimistic implementations, which wait until commit to do so. Because gzochi transactions are short and fine-grained, and the user experience is sensitive to latency, I believe that the time wasted by unnecessary execution of "doomed" transactional code is better avoided via pessimistic locking. (I could be wrong.)

Here's what I found:
  • Apache Mavibot - Transactional B-tree implementation in Java using MVCC. Supports persistent and in-memory storage. Hard for me to tell from reading their source code how their in-memory mode could possibly work for multi-operation transactions.
  • Masstree - Optimistically concurrent, in-memory non-transactional B+tree implementation designed to better exploit SMP hardware.
  • Silo - Optimistically concurrent, in-memory transactional store that uses Masstree as its B-tree implementation.
  • SQLite - Lightweight SQL implementation with in-memory options, with a transaction-capable B-tree as the underlying storage. Their source code is readable, but the model of threads, connections, transactions, and what they call "shared cache" is hard to puzzle out. The B-tree accumulates cruft without explicit vacuuming. The B-tree code is enormous.
  • eXtremeDB - Commercial in-memory database with lots of interesting properties (pessimistic and MVCC modes, claimed latencies extremely low) but, you know, no source code. So.
Because I was unable to find any pessimistic implementations with readily stealable source code, I struck out on my own. It took me about a week to build my own pessimistic B+tree, using Berkeley DB's code and architecture as a surprisingly helpful guide. My version is significantly slower than BDB (with persistence to disk enabled) but I'm going to tune it and tweak it and hopefully get it to holler if not scream.

15 Feb 2015 dangermaus   » (Journeyer)

There is a hidden and invisible world that passes through us at crazy speed in any moment, it is the world of radio waves. At midnight, the radio silence is broken only by whistles and by electrical discharges of a distant storm...

An ear to listen to the weather birds
I read on the Internet about the Quadrifilar Helix antenna (QFH) built by Chris van Lint with RG6 coaxial cable. G4ILO made a version with RG58 coax, and having plenty of this cable around (it was once used for 10 Mbit/s Ethernet networks), I decided to build my own.
I started by cutting old PVC tubes. My first mistake was to misinterpret the symmetry of the antenna. It is important to understand that the big loop (B) uses supports of 38 cm and the small loop (S) uses supports of 34 cm, so the 57 cm mast tube carries three tubes of 38 cm and three tubes of 34 cm.
It was easy to solder the braid and the signal conductor of the cable together as described by Chris, but it was very difficult for me to solder onto the braid alone. Each time I checked the integrity of the cable with a voltmeter, I still read an unwanted resistance of 3 to 20 MOhm instead of infinite resistance.
To solve this problem, I used a BNC connector with the signal pin removed and only the shield connected, in combination with a T-BNC connector, to avoid soldering on the braid.
I then checked with my antenna analyzer (the Mini VNA BT Pro) that the antenna was resonant as expected on 137 MHz, measuring an SWR of about 1.4, which meant the circularly polarized QFH was ready to be mounted on the roof (after some simple waterproofing of its top part).
Satellites transmit with circular polarization because the waves travel better through rain, snow and clouds (due to the interference effect of rain and water drops), and also because the antenna's orientation with respect to the ground changes if the satellite is not stabilized.
Once on the roof, the antenna was connected to a Funcube Pro Plus dongle. My current setup uses HDSDR and WxToImg, with VB Audio Cable relaying the output signal of HDSDR to WxToImg. APT Decoder also works well, but it needs Keplerian elements in text format (it assumes a .tlx extension, which needs to be overridden to .txt).
In the beginning it is important to tune HDSDR with a 27 kHz shift between the tuned frequency and the Upper Spectrum Center Frequency (e.g. tuned on 137.100 MHz with the USCF on 137.137) and to set a large receiver bandwidth of 68 kHz, because of the whistles and artifacts created by the cheap Funcube receiver.
It is much easier to receive good signals when the satellite passes close to the zenith (in the meantime I have learnt to read the ephemerides). Weather conditions are not so important, thanks to the circular polarization of the antenna; only on windy days do I get some white-noise lines in the pictures. Even with my cheap setup, I receive weather images without problems at a signal-to-noise ratio (SNR) of 25 dB and less.
In the beginning we looked several times at black images from APT Decoder, until hacker Vir asked me whether the monitor was dirty. She then realized the image was not entirely black, as the "dirt" moved with the APT Decoder window. After proper denoising we had our first satellite image, which you can see at the bottom of our QRZ page. By the way, the satellite NOAA 19 has an interesting story on Wikipedia, as it had an accident during manufacturing.

15 Feb 2015 StevenRainwater   » (Master)

The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes


Jaynes’ book atop books by a few authors who were influenced by his theory.

Julian Jaynes’ book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, is one of those books that just about everyone reads sooner or later. Jaynes is an example of the rare author who could write a scientific treatise that was both ground-breaking and readily readable by the general public. His book was published in 1976 and presented what has to be the most controversial theory ever in the fields of consciousness and religion. Despite the theory seeming completely outlandish at first glance, the book presents testable predictions all along the way. Many modern researchers still believe Jaynes’ theory to be partially or completely wrong, but there’s no question it has pushed research toward a better understanding of consciousness and religion. Daniel Dennett, who notes Jaynes was probably wrong at least about some particulars like the importance of hallucinations, still thinks his main thesis could be correct. Evolutionary biologist Richard Dawkins commented that Jaynes’ theory is “either complete rubbish or a work of consummate genius, nothing in between! Probably the former but I’m hedging my bets.” In addition to scientists, Jaynes’ theory also inspired two generations of science fiction authors, from Philip K. Dick to Neal Stephenson (who based parts of Snow Crash on Jaynes’ theory). David Bowie acknowledged being influenced by this book during his work with Brian Eno on Low and included it on his list of 100 must-read books.

Julian Jaynes was an American psychologist interested in the origins of consciousness, which he defined roughly as what a modern cognitive scientist or philosopher would call meta-cognition – the awareness of our own thoughts or the ability to think about our own thoughts. In his early research, he specialized in animal ethology (the study of animal’s behavior, communications, and emotions). He began to focus on understanding how consciousness evolved in early humans and studied historical texts and anthropological evidence for clues. This led to his now famous theory that humans initially developed a bicameral mind and that modern consciousness was the result of a breakdown of the two parts.

Bicameral in this case is a metaphor, the word normally describes a type of government consisting of two independent houses. Jaynes came to believe that, as recently as 10,000 years ago, the human brain lacked both consciousness and the strong lateral connection via the corpus callosum that it has today. The two halves of the brain operated more independently but were able to communicate via verbal hallucinations. Humans at this time would have already evolved basic linguistic capabilities, but without the complex metaphors and self-referential aspects of modern language. People behaved in what we would describe today as a “zombie-like” way. They would have lacked the ability to reflect on or guide their own thoughts. In times of extreme stress or facing novel situations, the right side of their brain would communicate advice or commands to the left via auditory hallucinations that the person experienced as “hearing a voice”.

As they do today, humans tended to build up models in their minds of people important in their social interactions: parents, tribal leaders, and the like. Jaynes believed the models existed in the part of the mind generating the hallucinations and that the voices often came to be perceived as originating from these people, even if they were not present; even if they were dead. Without the ability to introspect, people simply accepted the voices at face value and assumed they represented some kind of external reality. This predictably gave rise to the earliest religious beliefs: ancestor worship, divinity of kings, belief in an afterlife. It also served as an important social organizing structure that allowed early community groups to form.

This process worked well until about 2000 BC, when civilizations were going through a periodic collapse. At this time, the growing population was leading to more frequent interactions between disparate groups of humans, resulting in a failure of the bicameral hallucination mechanism as a method of social coordination. If everyone in your group hears the same voices in their head, things work fine. If three or four groups suddenly start living together and everyone is hearing different voices in their heads telling them conflicting things, civilization doesn’t function smoothly anymore.

The result was a gradual breakdown in the bicameral structure of the brain due to the changed environment which gave a huge advantage to individuals whose brains had more direct communication between the two sides via the corpus callosum. This allowed metaphoric language and consciousness to co-evolve, gradually leading to humans who could think about their own thoughts and had the words to describe it. This would also be the origin of the idea of free will, at least in the modern sense. Prior to this time a person did what their brain directed but without any awareness or insight into the process. So, effectively, modern consciousness is a by-product of cultural and linguistic evolution.

The bicameral breakdown led to the gradual decline of the right-brain area that generated verbal hallucinations. Everyone remembered a time when people could hear the “gods”, but only a few remained who could still hear their voices. Those people were sometimes elevated to the position of priest, shaman, or oracle, or they were seen as insane, eventually classed as schizophrenics.

The whole thing sounds fantastically crazy at first, right? Jaynes says as much throughout the book. But, like any good scientist, he has worked out a series of testable predictions based on the theory in a variety of fields ranging from history to human physiology. Modern researchers have continued to test his theories and, so far, many of his predictions have been dead on. For example, he predicted the existence of an area in the right hemisphere of the brain capable of generating linguistic, auditory hallucinations that is now vestigial and usually dormant. We now know the right hemisphere contains a vestigial area that corresponds to the Broca/Wernicke area in the left brain. This is the part of the left hemisphere responsible for the production of language. He further predicted this vestigial area would be active in schizophrenics who hear auditory hallucinations. Today, with fMRI scanning and other modern techniques, this has been confirmed too. And the hallucinations these patients experience are often in the form of authority figures (parents, leaders, gods) admonishing or commanding them.

Jaynes did an extensive survey of early literature starting with the earliest known writings and progressing through later, better-known documents like Homer and the early writings of the Bible. He analyzes to what extent the authors or the subjects seem to be self-aware and notes a gradual progression through history of both self-awareness and the evolution of language to describe self-awareness. The writers of the biblical Old Testament or the Odyssey, for example, show no evidence at all of being self-aware, in contrast to authors of the New Testament or later Greek writings. This is complicated by works that have been re-written and changed by later authors, like some books of the Bible or the Epic of Gilgamesh. In these cases, he tries to tease apart what’s original and what was added later.

He suggests that traces of bicameralism might still be found not just in schizophrenia but in many aspects of modern religion (e.g. those occasional people who still hear voices or experience “possession”) or even in the common childhood experience of having invisible friends (some children experience actual auditory hallucinations of their imaginary friends speaking to them).

Some modern researchers discount the need for physical changes in the corpus callosum and believe the linguistic evolution of metaphor alone may be enough to account for the changes Jaynes’ theory describes. There is now a huge body of literature surrounding the Bicameral Mind theory, including lengthy articles defending or attacking aspects of it. There are also several variant theories. Iain McGilchrist has proposed not a breakdown of a bicameral mind but a separation and reversal of roles between the two hemispheres of the brain. Michael Gazzaniga, a psychobiologist, has done extensive experimental work in the area of hemisphere specialization and has proposed a theory similar to Jaynes’.

Jaynes is an engaging and interesting author and, whether his theory eventually proves to be crazy or profound, you’ll find the book a great read. If you have any interest in philosophy, religion, consciousness, cognition, evolution, anthropology, literature, history, or any of a dozen other topics, you’ll love the book. It makes you think about things you would never have imagined otherwise.

Syndicated 2015-02-15 03:30:49 from Steevithak of the Internet

13 Feb 2015 eMBee   » (Journeyer)

A static web application hosted on Pharo Smalltalk

For part three of our workshop series we start from scratch and build a small website that serves nothing but static files from an in-memory FileSystem.

We are also going to explore the new development tools built into the Moose project.

You may watch the first and second parts of this series, or you may jump right in here.

In the next session we are going to build the RESTful API that makes this application functional.

Syndicated 2015-02-13 08:37:26 (Updated 2015-02-28 18:11:55) from DevLog

13 Feb 2015 dmarti   » (Master)

Live and in person, in Los Angeles

SCALE badge

Attention all fans of me. Come hear me at Southern California Linux Expo, February 19-22 in Los Angeles, California, USA.

There's a speaker interview on the conference site, with some more info on what I'll be talking about.

(If you read this blog for the "targeted advertising considered harmful" stuff, I pitched a short talk on that, too, but I don't know if it'll get in.)

Syndicated 2015-02-13 03:45:36 from Don Marti

13 Feb 2015 AlanHorkan   » (Master)

OpenRaster and OpenDocument: Metadata

OpenRaster is a file format for the exchange of layered images, and is loosely based on the OpenDocument standard. I previously wrote about how a little extra XML can make a file that is both OpenRaster and OpenDocument compatible. The OpenRaster specification is small and relatively simple, but it does not do everything, so what happens if a developer wants to do something not covered by the standard? What if you want to include metadata?

How about doing it the same way as OpenDocument? It does not have to be complicated. OpenDocument already cleverly reused the existing Dublin Core (dc) standard for metadata, and includes a file called meta.xml in the zip container. It is a good idea worth borrowing; a simplified example file follows:

Sample OpenDocument Metadata[Pastebin]

(If you can't see the XML here directly, see the link to Pastebin instead.)

I extended the OpenRaster code in Pinta to support metadata in this way. This is the easy part; it gets more complicated if you want to do more than import and export within the same program. As before, the resulting file can be renamed from .ora to .odg and opened using OpenOffice*, allowing you to view the image and the metadata too. The code, in Pinta's OraFormat.cs, is freely available on GitHub under the same license (MIT X11) as Pinta. The relevant sections are "ReadMeta" and "GetMeta". A Properties dialog and other code were also added, and I've edited a screenshot of Pinta to show both the menu and the dialog at the same time:

[* OpenOffice 3 is quite generous, and opens the file without complaint. LibreOffice 4 is far less forgiving and gives an error unless I specifically choose "ODF Drawing (.odg)" as the file type in the Open dialog]

Syndicated 2015-02-13 00:38:44 from Alan Horkan

12 Feb 2015 bagder   » (Master)

Tightening Firefox’s HTTP framing – again

An old HTTP 1.1 frame

Call me crazy, but I’m at it again. First, a little recap of our previous episodes in this exciting saga:

Chapter 1: I closed the 10+ year old bug that made the Firefox download manager not detect failed downloads, simply because Firefox didn’t care if the HTTP 1.1 Content-Length was larger than what was actually saved – after the connection potentially was cut off for example. There were additional details, but that was the bigger part.

Chapter 2: After having been included all the way to public release, we got a whole slew of bug reports immediately when Firefox 33 shipped and we had to revert parts of the fix I did.

Chapter 3.

Will it land before it turns 11 years old? The bug was originally submitted 2004-03-16.

Since chapter two of this drama brought back the original bugs, we still have to do something about them. I fully understand if not that many readers can even keep up with all this back and forth and juggling of HTTP protocol details, but this time we’re putting back the stricter frame checks with a few extra conditions that allow a few violations to remain but detect and react on others!

Here’s how I addressed this issue. I wanted to make the checks stricter but still allow some common protocol violations.

In particular I needed to allow two particular flaws that have proven to be somewhat common in the wild and were the reasons for the previous fix being backed out again:

A – HTTP chunk-encoded responses that lack the final 0-sized chunk.

B – HTTP gzipped responses where the Content-Length is not the same as the actual contents.

So, in order to allow A + B and yet be able to detect prematurely cut off transfers I decided to:

  1. Detect incomplete chunks when the transfer has ended. If a chunk-encoded transfer ends exactly on a chunk boundary we consider that fine. Good: this will allow case (A) to be considered fine. Bad: it will make us not detect a certain amount of cut-offs.
  2. When receiving a gzipped response, we consider a gzip stream that doesn’t end fine according to the gzip decompressing state machine to be a partial transfer. IOW: if a gzipped transfer ends fine according to the decompressor, we do not check for size misalignment. This allows case (B) as long as the content could be decoded.
  3. When receiving HTTP that isn’t content-encoded/compressed (like in case 2) and not chunked (like in case 1), perform the size comparison between Content-Length: and the actual size received and consider a mismatch to mean a NS_ERROR_NET_PARTIAL_TRANSFER error.
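The decision logic above can be sketched as a small Python function (a hypothetical illustration; the names are my own, and the real Firefox implementation is C++ code inside Necko, not this):

```python
def classify_transfer(chunked, content_encoded, decoder_ended_cleanly,
                      ended_on_chunk_boundary, content_length, received_bytes):
    """Decide whether a finished HTTP/1.1 response looks complete.

    Returns "ok" or "partial"; "partial" corresponds to Firefox
    signaling NS_ERROR_NET_PARTIAL_TRANSFER.
    """
    if chunked:
        # Check 1: tolerate a missing final 0-sized chunk (case A),
        # but only if the stream ended exactly on a chunk boundary.
        return "ok" if ended_on_chunk_boundary else "partial"
    if content_encoded:
        # Check 2: trust the decompressor's state machine rather than
        # Content-Length for gzipped responses (case B).
        return "ok" if decoder_ended_cleanly else "partial"
    # Check 3: plain responses must match Content-Length exactly.
    if content_length is not None and received_bytes != content_length:
        return "partial"
    return "ok"
```

For example, a gzipped response whose stream decodes cleanly is accepted even when its byte count disagrees with Content-Length, while the same size mismatch on a plain response is flagged as a partial transfer.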

Firefox Ball

Prefs

When my first fix was backed out, it was actually not removed but was just put behind a config string (pref as we call it) named “network.http.enforce-framing.http1“. If you set that to true, Firefox will behave as it did with my original fix applied. It makes the HTTP1.1 framing fairly strict and standard compliant. In order to not mess with that setting that now has been around for a while (and I’ve also had it set to true for a while in my browser and I have not seen any problems with doing it this way), I decided to introduce my new changes pref’ed behind a separate variable.

network.http.enforce-framing.soft” is the new pref that is set to true by default with my patch. It will make Firefox do the detections outlined in 1 – 3 and setting it to false will disable those checks again.

Now I only hope there won’t ever be any chapter 4 in this story… If things go well, this will appear in Firefox 38.

Chromium

But how do they solve these problems in the Chromium project? They have slightly different heuristics (with the small disclaimer that I haven’t read their code for this in a while so details may have changed). First of all, they do not allow a missing final 0-chunk. Then, they basically allow any sort of misaligned size when the content is gzipped.

Syndicated 2015-02-12 15:10:58 from daniel.haxx.se

12 Feb 2015 mikal   » (Journeyer)

Geocaching in the evening, the second

I went geocaching while the kids were at scouts. Overall, a very successful evening. The first hour was a phone call with Tristan Goode, and what I learnt is I should call Tristan more because every call (based on a data set of one walk) is 5km.

Interactive map for this route.

Tags for this post: blog canberra tuggeranong geocaching
Related posts: Geocaching in the evening; Another lunch time walk; Lunchtime geocaching; Big Monks; Point Hut Cross to Pine Island; A walk around Mount Stranger

Comment

Syndicated 2015-02-11 15:32:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

10 Feb 2015 dmarti   » (Master)

Hey, kids, slide!

The ad market, on which we all depend, started going haywire.

Alexis Madrigal

Haywire is about right. In one slide...

online ads

(from an upcoming conference talk, if I can get a conference to take it. Details.)

Syndicated 2015-02-11 05:42:28 from Don Marti

10 Feb 2015 bagder   » (Master)

HTTP/2 is at 5%

http2 logoHere follow some numbers extracted from my recent HTTP/2 presentation.

First: HTTP/2 is not finalized yet and it is not yet in RFC status, even though things are progressing nicely within the IETF. With some luck we reach RFC status within Q1 this year.

On January 13th 2015, Firefox 35 was released with HTTP/2 enabled by default. Firefox was already running it enabled before that in beta and development versions.

Chrome has also been sporting HTTP/2 support in development versions for many months, where it could easily be enabled manually. Chrome 40 was the first main release shipped with HTTP/2 enabled by default, but so far it has only been enabled for a very small fraction of the user base.

On January 28th 2015, Google reported to me by email that they saw HTTP/2 being used in 5% of their global traffic (cue all the relevant disclaimers that these are not statistically safe numbers). This comes close after a shaky period in which Google had their HTTP/2 services disabled through parts of the Christmas holidays (due to bugs) – and, as explained above, no mainstream browser has had HTTP/2 enabled by default for very long!

Further data points: Mozilla collects telemetry data from Firefox users who opted in to it, including numbers on “HTTP Protocol Version Used on Response”. As of February 10, Firefox 35 users report HTTP/2 in 9% of all responses (out of more than 340 billion reported responses). The telemetry for Firefox Nightly 38 even reports HTTP/2 in 14% of all responses (based on a much smaller sample), which I guess could very well be because users on such a bleeding-edge version are more experimental by nature.

In these Firefox stats we see that, recently, HTTP/2 responses outnumber HTTP/1.0 responses 9 to 1.

Syndicated 2015-02-10 11:09:21 from daniel.haxx.se

10 Feb 2015 clarkbw   » (Master)

If writing is a muscle

I haven’t been to the gym in a long time.

David Eaves, a person I have immense amounts of respect for, has been using a tag line related to this title/intro on his blog for quite a while, probably longer than I’ve known him. And I honestly never gave much thought to the idea that writing really is a muscle until recently. I’ve taken a break from being a designer (or a programmer) to work as a product manager for over a year now. Designing and coding require a set of skills I’m very familiar with: code is a language that people use to communicate with each other about the details of the commands they issue to a computer, while design is a more visual language of storytelling, heavily using imagery and some text to convey the journey of a user to the team intent on correctly interacting with that user. Both pursuits are about communication, but each uses written language in a very different way. As a product manager I’m forced to lean on my skills as a writer, and I don’t think I had much in the way of skills previously; but whatever bedridden muscles have lain dormant are reawakening as I realize how young and foolish I really was to ignore this essential form of communication.

I’m hoping there is more to come, perhaps starting with some tech posts about recent projects while I try to grapple with this idea of writing more than a tweet.

Syndicated 2015-02-10 07:07:17 from Bryan Clark

10 Feb 2015 mako   » (Master)

Kuchisake-onna Decision Tree

Mika recently brought up the Japanese modern legend of Kuchisake-onna (口裂け女). For background, I turned to the English Wikipedia article on Kuchisake-onna which had the following to say about the figure (the description matches Mika’s memory):

According to the legend, children walking alone at night may encounter a woman wearing a surgical mask, which is not an unusual sight in Japan as people wear them to protect others from their colds or sickness.

The woman will stop the child and ask, “Am I pretty?” If the child answers no, the child is killed with a pair of scissors which the woman carries. If the child answers yes, the woman pulls away the mask, revealing that her mouth is slit from ear to ear, and asks “How about now?” If the child answers no, he/she will be cut in half. If the child answers yes, then she will slit his/her mouth like hers. It is impossible to run away from her, as she will simply reappear in front of the victim.

To help anyone who is not only frightened, but also confused, Mika and I made the following decision tree of possible conversations with Kuchisake-onna and their universally unfortunate outcomes.

Decision tree of conversations with Kuchisake-onna.

Of course, we uploaded the SVG source for the diagram to Wikimedia Commons and used the diagram to illustrate the Wikipedia article.
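The conversation structure can also be written out as a tiny Python function (a sketch of my own; the outcome strings paraphrase the legend as quoted above):

```python
def kuchisake_onna(first_answer, second_answer=None):
    """Trace one conversation with Kuchisake-onna.

    Answers are "yes" or "no". Every path ends badly,
    which is exactly the point of the diagram.
    """
    if first_answer == "no":
        # Answering "no" to "Am I pretty?" ends it immediately.
        return "killed with scissors"
    # She pulls away the mask and asks "How about now?"
    if second_answer == "no":
        return "cut in half"
    return "mouth slit like hers"
```

Running away is not modeled, because per the legend it is impossible: she simply reappears in front of the victim.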

Syndicated 2015-02-10 07:09:22 (Updated 2015-02-10 07:13:42) from copyrighteous

10 Feb 2015 rkrishnan   » (Journeyer)

NixOS

Posted on April 15, 2014 by rkrishnan

I am a long time (and proud) Debian GNU/Linux user and developer. As a Debian developer, I have not been active for a year or two, but of late I am following mailing lists, uploading a package or two, and starting to be more active again.

I have also become more and more convinced that Functional Programming paradigms are better suited to solving problems than imperative languages, at least for a lot of tasks. I have been learning Haskell for the past few months and thoroughly enjoy the beauty and expressiveness of Haskell and of purely functional programming, which lends itself to many tasks. It has a kind of beauty that is otherwise only found in Mathematics, and which I have never seen in any other programming language I have used.

I also sometimes use a work MacBook Air. Since my work involves building the Linux kernel, making changes to it and so on, I generally work on GNU/Linux boxes. But sometimes I am not in the office, or I am sitting on a couch, and I really like the portability of the MacBook hardware. I am not a fan of Apple and do not intend to replace my PC hardware and my very nice Debian system with Apple hardware and software, especially in the wake of the Snowden revelations. But just to make my life easier while using the Mac, I had been looking at the various “packaging” systems on OS X.

People have attempted a few times to create a good packaging system for OS X: MacPorts, the NetBSD pkgsrc and so on. And then there is Homebrew, which seems to be the currently popular package manager, and which was recommended to me by one of my friends who is a long time Apple aficionado.

The problem with brew is that it is source based. This is both good and bad. Bad because it takes me ages to get stuff built and installed. I would love to have binary packages available that can be readily installed. But then there is the question of binary compatibility between OS X versions; I admit to knowing nothing about that.

Enter NixOS. NixOS is a GNU/Linux distribution (well, they call it a linux distribution, but I prefer to call it GNU/Linux distro to give some credit to the GNU project which started this all and still contribute a big chunk of software we all use) that brings the concept of Purely Functional programming languages to an OS distro. What does that mean? It means many things.

  • if a package installation fails, it does not leave the system in a useless state. It is in the same state as we started with.
  • The expectations of the OS are “declared” in a configuration file written in a lazy, functional, dynamically typed language: the Nix expression language.
  • One can roll back to any of the previous states of the system.
  • One can install multiple versions of a package. If a package is not used by any other package, it can be “garbage collected”.
  • One does not need any special privileges to install packages. Any user of the system can install packages.
  • NixOS does not have the traditional /etc, /usr, /bin and other directories. Instead, there is a ~/.nix-profile that takes care of linking to the packages, which are stored in the “nix store”.

If all these sound interesting (or not), I would encourage everyone to read this wonderfully detailed and readable paper on Nix.

Another thing to note: Nix is the name of the packaging system (like apt or rpm) and NixOS is the OS built around Nix. Nix can be installed on any POSIX compatible OS: GNU/Linux, *BSD, OS X or any other Unix-like system.

Nix packages are maintained via GitHub. I don’t personally like such a crucially important project being dependent on a commercial network infrastructure like GitHub. But at the same time, I see that GitHub “pull requests” are certainly one of the reasons a project like Nix is able to keep up with ever-changing Free Software versions and builds. It certainly reduces the bureaucracy.

I already sent a couple of pull requests, which were merged readily. GitHub and the pull request mechanism greatly reduce the barrier to contributions. But at the same time, I should note that Debian’s trust model for developers, using the GPG “web of trust” and signed package uploads, exists for a reason. I wish more distributions would adopt it.

My OS X experience is now a lot better. I have the latest versions of a lot of packages installed. For those that are failing, I am trying to fix them and send the fixes in. And I am having a lot of fun learning about Nix. There are still many rough edges: a lot of free software is not packaged yet, and packaging itself is a boring and thankless job. But in my opinion Nix expressions are a nice “Domain Specific Language” for doing the packaging. I will certainly be playing with it more in the coming days. But of course, I do not plan to stop using Debian any time soon.

Syndicated 2014-04-15 00:00:00 from Ramakrishnan Muthukrishnan

8 Feb 2015 mikal   » (Journeyer)

Tuggeranong Stone Wall

It's not every day that your walk is to a 140 year old stone wall that you've been driving past for years without even knowing it's there. That however was today's walk, inspired by a walk post by John Evans. I really enjoyed this walk, and it was a good length too. It would have been nice to return by a different route, although such a thing was not obvious to me while doing the walk.

     

Interactive map for this route.

Tags for this post: blog pictures 20150208-tugg_stone_wall photo canberra tuggeranong bushwalk
Related posts: Big Monks; Point Hut Cross to Pine Island; A walk around Mount Stranger; Another lunch time walk; Two trigs and a first attempt at finding Westlake; Taylor Trig

Comment

Syndicated 2015-02-08 02:03:00 (Updated 2015-02-08 11:19:14) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

7 Feb 2015 mikal   » (Journeyer)

Point Hut Crossing to Pine Island

Andrew and I decided to walk from Point Hut Crossing (near our house) to Pine Island yesterday. It's a nice walk, about 4km and mostly flat, with a track the whole way. We even found some geocaches along the way. I think this route would be a good one for cubs.

     

Interactive map for this route.

Tags for this post: blog pictures 20150207-point_hut_pine_island photo canberra tuggeranong bushwalk
Related posts: Big Monks; A walk around Mount Stranger; Another lunch time walk; Two trigs and a first attempt at finding Westlake; Taylor Trig; Lunchtime geocaching

Comment

Syndicated 2015-02-07 14:26:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

7 Feb 2015 amits   » (Journeyer)

devconf.cz talk: Live Migration of QEMU/KVM Virtual Machines

Yesterday was the first day of devconf.cz 2015. It’s my first devconf.cz, and I’m impressed by the large turnout and the perfect management of the event by the organizers.

Yesterday was also the day I presented my talk on live migration of QEMU/KVM VMs. The slides are here. There was also a live video broadcast, and the recording is at this link. You’ll have to select the E104 section to view my talk. Also, that selection process needs Flash. Unfortunate. I’ll check if there’s a direct link to my part of the talk.

Update: The direct link to the talk is here, thanks Donovan.