Recent blog entries

6 May 2015 bagder   » (Master)

curl user poll 2015

Now is the time. If you use curl or libcurl from time to time, please consider helping us out by providing your feedback and opinions on a few things:

https://goo.gl/FyToBn

It’ll take you a couple of minutes and it’ll help us a lot when making decisions going forward.

The poll is hosted by Google and that short link above will take you to:

https://docs.google.com/forms/d/1uQNYfTmRwF9RX5-oq_HV4VyeT1j7cxXpuBIp8uy5nqQ/viewform

Syndicated 2015-05-06 12:44:57 from daniel.haxx.se

6 May 2015 louie   » (Master)

Come work with me – developer edition!

It has been a long time since I was able to say to developer friends “come work with me” in anything but the most abstract “come work under the same roof” kind of sense. But today I can say to developers “come work with me” and really mean it. Which is fun :)

Details: Wikimedia’s new community tech team is hiring for a community tech developer and a team lead. This will be extremely community-intensive work, so if you enjoy and get energy from working with a community and helping them achieve their goals, this could be a great role for you. This team will work intensely with my department to ensure that we’re correctly identifying and prioritizing the needs of our most active editors. If that sounds like fun, get in touch :)

[And I realize that I’ve been bad and not posted here, so here’s my new job announcement: “my department” is the Foundation’s new Community Engagement department, where we work to support healthy contributor communities and help WMF-community collaboration. It is a detour from law, but I’ve always said law was just a way to help people do their thing — so in that sense it is the same thing I’ve always been doing. It has been an intense roller coaster of a first two months, and I look forward to much more of the same.]

Syndicated 2015-05-06 05:51:20 from Luis Villa » Blog

6 May 2015 mikal   » (Journeyer)

Ancillary Justice




ISBN: 9780356502403
LibraryThing
I loved this book. The way the language works takes a little while to work out, but then blends into the background. The ideas here are new and interesting and I look forward to other work of Ann's. Very impressed with this book.

Tags for this post: book ann_leckie combat ai aliens
Related posts: Mona Lisa Overdrive; East of the Sun, West of the Moon; Count Zero; Emerald Sea; All The Weyrs of Pern; Against the Tide



Syndicated 2015-05-05 20:48:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

5 May 2015 caolan   » (Master)

new area fill toolbar dropdown

The GSOC 2014 Color Selector is in LibreOffice 4.4, but it's not used for the "area fill" dropdown in impress or draw. So today I spent a little time on LibreOffice 5.0 hacking things up so that the dropdown now uses the new color selector in the toolbar rather than the old color dropdown list. This gives access to custom colors, multiple palettes, and recently used colors all in one place.

LibreOffice 5.0
And here's the old one for reference. I've backported the above change to Fedora 22's 4.4.X to address some frustration at selecting colors in impress that had been vented in-house.
LibreOffice 4.4


Syndicated 2015-05-05 19:46:00 (Updated 2015-05-05 19:46:31) from Caolán McNamara

5 May 2015 mones   » (Journeyer)

Bye bye DebConf15

Yep, I had planned to go, but given the last mail from registration it seems there's an overwhelming number of sponsorship requests, so I've decided to withdraw my request. There are lots of people doing much more important things for Debian than me who deserve that help. Having to complete my MSc project also helps to make this decision, of course.

I guess the Debian MIA meeting will have to wait for the next planetary alignment ;-) well, not really, any other member of the team can set it up, hint! hint!

See you in DebConf17 or a nearby local event!

Syndicated 2015-05-04 08:37:12 from Ricardo Mones

5 May 2015 pabs3   » (Master)

The #newinjessie game: developer & QA tools

Continuing the #newinjessie game:

There are a number of development and QA tools that are new in jessie (a couple of quick examples follow the list):

  • autorevision: store VCS meta-data in your release tarballs and use it during build
  • git-remote-bzr: bidirectional interaction with Bzr repositories for git users
  • git-remote-hg: bidirectional interaction with Mercurial repositories for git users
  • corekeeper: dump core files when ELF programs crash and send you mail
  • adequate: check installed Debian packages for various issues
  • duck: check that the URLs in your Debian package are still alive
  • codespell: search your code for spelling errors and fix them
  • iwyu: include only the headers you use to reduce compilation time
  • clang-modernize: modernise your C++ code to use C++11
  • shellcheck: check shell scripts for potential bugs
  • bashate: check shell scripts for stylistic issues
  • libb-lint-perl: check Perl code for potential bugs and style issues
  • epubcheck: validate your ePub docs against the standard
  • i18nspector: check the work of translators for common issues
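
A couple of these slot straight into an everyday workflow, for example (package and path names here are hypothetical):

$ codespell src/                # find common spelling errors in the code
$ shellcheck debian/*.sh        # catch potential bugs in shell scripts
$ adequate mypackage            # check the installed package for issues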

Syndicated 2015-05-05 05:10:06 from Advogato

4 May 2015 Stevey   » (Master)

A weekend of migrations

This weekend has been all about migrations:

Host Migrations

I've migrated several more systems to the Jessie release of Debian GNU/Linux. No major surprises, and now I'm in a good state.

I have 18 hosts, and now 16 of them are running Jessie. One of them I won't touch for a while, and the other is a KVM-host which runs about 8 guests - so I won't upgrade that for a while (because I want to schedule the shutdown of the guests for the host-reboot).

Password Migrations

I've started migrating my passwords to pass, which is a simple shell wrapper around GPG. I generated a new password-managing key, and started migrating the passwords.
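
(For reference, the basic workflow is only a few commands; a minimal sketch, with a hypothetical key identity and entry name:)

$ gpg --gen-key                                   # generate the password-managing key
$ pass init "Password Store <steve@example.net>"  # initialise the store against that key
$ pass insert banking/example-bank                # prompts for the secret, stores it GPG-encrypted
$ pass ls                                         # account names show up in the clear here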

I dislike that account-names are stored in plaintext, but that seems known and unlikely to be fixed.

I've "solved" the problem by dividing all my accounts into "Those that I wish to disclose post-death" (i.e. "banking", "amazon", "facebook", etc, etc), and those that are "never to be shared". The former are migrating, the latter are not.

(Yeah I'm thinking about estates at the moment, near-death things have that effect!)

Syndicated 2015-05-04 00:00:00 from Steve Kemp's Blog

4 May 2015 bagder   » (Master)

HTTP/2 in curl, status update

I’m right now working on adding proper multiplexing to libcurl’s HTTP/2 code. So far we’ve only done a single stream per connection and while that works fine and is HTTP/2, applications will still want more when switching to HTTP/2 as the multiplexing part is one of the key components and selling features of the new protocol version.

Pipelining means multiplexed

As a starting point, I’m using the “enable HTTP pipelining” switch to tell libcurl it should consider multiplexing. It makes libcurl work as before by default. If you use the multi interface and enable pipelining, libcurl will try to re-use established connections and just add streams over them rather than creating new connections. Yes this means that A) you need to use the multi interface to get the full HTTP/2 stuff and B) the curl tool won’t be able to take advantage of it since it doesn’t use the multi interface! (An old outstanding idea is to move the tool to use the multi interface, and this would be yet another reason why that could be a good idea.)

We still have some decisions to make about how we want libcurl to act by default – especially when we can expect applications to use both HTTP/1.1 and HTTP/2 at the same time. Since we don’t know if the server supports HTTP/2 until after a certain point in the negotiation, we need to decide how to act when we issue N transfers at once to the same server that might speak HTTP/2… Right now, we get the best HTTP/2 behavior by telling libcurl we only want one connection per host, but that is probably not ideal for an application that might use a mix of HTTP/1.1 and HTTP/2 servers.

Downsides with abusing pipelining

There are some drawbacks with using that pipelining switch to allow multiplexing: users may very well want HTTP/2 multiplexing but not HTTP/1.1 pipelining, since the latter is just riddled with interop problems.

Also, re-using the same options for limited connections to host names etc for both HTTP/1.1 and HTTP/2 may not at all be what real-world applications want or need.

One easy handle, one stream

libcurl API wise, each HTTP/2 stream is its own easy handle. It keeps things simple and keeps the API paradigm very much the same as for all the other protocols. It comes very naturally for the libcurl application author. If you set up three easy handles, all identifying a resource on the same server, and you tell libcurl to use HTTP/2, it makes perfect sense that all three transfers are made using a single connection.
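
As a sketch of what that could look like from the application side (error checking omitted; note that CURLMOPT_PIPELINING doubling as the multiplexing opt-in is the work-in-progress behaviour described above, not a final API):

#include <curl/curl.h>

int main(void)
{
  /* three resources on the same server (hypothetical URLs) */
  const char *urls[] = { "https://example.org/a",
                         "https://example.org/b",
                         "https://example.org/c" };
  CURLM *multi;
  CURL *easy[3];
  int i, running = 0;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();

  /* the "enable HTTP pipelining" switch, doubling as the multiplexing opt-in */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

  for (i = 0; i < 3; i++) {
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, urls[i]);
    curl_easy_setopt(easy[i], CURLOPT_HTTP_VERSION,
                     (long)CURL_HTTP_VERSION_2_0);
    curl_multi_add_handle(multi, easy[i]);
  }

  /* drive all three transfers; ideally they end up as streams on one connection */
  do {
    curl_multi_perform(multi, &running);
    if (running)
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while (running);

  for (i = 0; i < 3; i++) {
    curl_multi_remove_handle(multi, easy[i]);
    curl_easy_cleanup(easy[i]);
  }
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}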

Multiplexed data means that when reading from the socket, data arrives that belongs to streams other than just a single one, so we need to feed the received data into the different “data buckets” for the involved streams. That gives us a little internal challenge: we get easy handles with no socket activity to trigger a read, yet there is data to take care of in the incoming buffer. I’ve solved this so far with a special trigger that says there is data to take care of, so that a read is made anyway that then gets the data from the buffer.

Server push

HTTP/2 supports server push. That’s a stream that gets initiated from the server side without the client specifically asking for it: a resource the server deems the client is likely to want since it asked for a related resource, or similar. My idea is to support server push with the application setting up a transfer with an easy handle and associated options, but the URL would only identify the server so that it knows on which connection it would accept a push, and we will introduce a new option to libcurl that tells it that this is an easy handle to be used for the next server-pushed stream on this connection.

Of course there are a few outstanding issues with this idea. Possibly we should allow an easy handle to get created when a new stream shows up, so that we can better deal with a dynamic number of new streams being pushed.

It’d be great to hear from users who have ideas on how to use server push in a real-world application and how you’d imagine it could be used with libcurl.

Work in progress code

My work in progress code for this drive can be found in two places.

First, I do the libcurl multiplexing development in the separate http2-multiplex branch in the regular curl repo:

https://github.com/bagder/curl/tree/http2-multiplex.

Then, I put all my test setup and test client work in a separate repository just in case you want to keep up and reproduce my testing and experiments:

https://github.com/bagder/curl-http2-dev

Feedback?

All comments, questions, praise or complaints you may have on this are best sent to the curl-library mailing list. If you are planning on writing an HTTP/2 capable application or otherwise have thoughts or ideas about the API for this, please join in and tell me what you think. It is much better to get the discussions going early and work on different design ideas now, before anything is set in stone, rather than waiting for us to ship something semi-stable; the closer to an actual release we get, the harder it’ll be to change the API.

Not quite working yet

As I write this, I’m repeatedly doing 99 parallel HTTP/2 streams with no data corruption… But there’s a lot more to be done before I’ll call it a victory.

Syndicated 2015-05-04 08:18:56 from daniel.haxx.se

3 May 2015 AlanHorkan   » (Master)

Usability and Playability

I could be programming but instead today I am playing games and watching television and films. I have always been a fan of Tetris, which is a classic, but I am continuing to play an annoyingly difficult game that, to be honest, I am not sure I even enjoy all that much; it is strangely compelling though. My interest in usability coincides with my interest in playability. Each area has its own jargon but they are very similar; the biggest difference is that games will intentionally make things difficult. Better games go to great lengths to make the difficulties challenging without being frustrating, gradually increasing the difficulty as they progress, and engaging the user without punishing them for mistakes. (Providing save points in a game is similar to providing an undo system in an application: both make the system more forgiving and allow users to recover from mistakes, rather than punishing them and forcing them to do things all over again.)

There is a great presentation about making games more juicy (short article including video) which I think most developers will find interesting. Essentially the presentation explains that a game can be improved significantly without adding any core features. The game functionality remains simple but the usability and playability is improved, providing a fuller more immersive experience. The animation added to the game is not merely about showing off, but provides a great level of feedback and interactivity. Theme music and sound effects also add to the experience, and again provide greater feedback to the user. The difference between the game at the start and at the end of the presentation is striking, stunning even.

I am not suggesting that flashy animation or theme music is a good idea for every application but (if the toolkit and infrastructure already provided is good enough) it is worth considering that a small bit of "juice" like animations or sound effects could be useful, not just in games, but in any program. There are annoyingly bad examples too, but when done correctly it is all about providing more feedback for users, and helping make applications feel more interactive and responsive.
For a very simple example, I have seen many users accidentally switch from Insert to Overwrite mode and not know how to get out of it; unfortunately many things must be learned by trial and error. Abiword changes the shape and colour of the cursor (from a vertical line to a red block) and it could potentially also provide a sound effect when switching modes. Food for thought (alternative video link at Youtube).

Syndicated 2015-05-03 22:38:18 from Alan Horkan

3 May 2015 benad   » (Apprentice)

The Mystery of Logitech Wireless Interferences

As I mentioned before, I got a new gaming PC a few months ago. Since it sits below my TV, I also bought with it a new wireless keyboard and mouse, the Logitech K360 and M510, respectively. I'm used to Bluetooth mice and keyboards, but it seems that in the PC world Bluetooth is not as commonplace as in Macs, so the standard is to use some dongle. Luckily, Logitech use a "Unifying Receiver" so that both the keyboard and mouse can share a single USB receiver, freeing an additional port. In addition, the Alienware Alpha has a hidden USB 2.0 port underneath it, which seems to be the ideal place for the dongle and freeing all the external ports.

My luck stopped there though. Playing some first-person shooters, I noticed that the mouse was quite imprecise, and from time to time the keyboard would lag for a second or so. Is that why "PC gaming purists" swear by wired mice and keyboards? I moved the dongle to the back and front USB ports, and the issue remained. As a test, I plugged in my wired Logitech G500 mouse with the help of a ridiculously long 3-meter USB cable, and that seemed to solve the problem. But I was left with a half-working wireless keyboard and, thanks to that USB cable, an annoying setup.

I couldn't figure out what was wrong, and was willing to absorb the costs, until I found this post on the Logitech forums. Essentially, the receiver doesn't play well with USB 3.0. I'm not talking about issues when you plug the receiver into a USB 3.0 port, since that would have been a non-issue with the USB 2.0 port I was using underneath the Alpha. Nope. Just the mere presence of USB 3.0 in the proximity of the receiver creates a "significant amount of RF noise in the 2.4GHz band" used by Logitech. To be fair (and they insist on mentioning it), this seems to be a systemic issue with all 2.4GHz devices, and not just Logitech.

So I did a test. I took the really long USB cable and connected the receiver to it, making the receiver sit right next to the mouse and keyboard at the opposite side of the room from where the TV and Alpha are located. And that solved the issue. Of course, to avoid the new "USB cable across the room" issue, I used a combination of a short half-meter USB cable and a USB hub with another half-meter cable to place the receiver at the opposite side of the TV cabinet. Again, the interference was removed.

OK, I guess all is fine and my mouse and keyboard are fully functional, but what about those new laptops with USB 3.0 on each port? Oh well, next time I'll stick to Bluetooth.

Syndicated 2015-05-03 21:48:04 from Benad's Blog

3 May 2015 yosch   » (Master)

Microsoft releasing an open font!

So, after the pleasant but rather unexpected news of Adobe's Source * font families released openly and developed on a public git repo, now we have Microsoft starting to release fonts under the OFL for one of their many projects!

Who would have thought that this could actually happen, that such big font producers would even consider doing this?

But I guess cross-platform web technologies and the corresponding culture tend to carry with them the values of interoperability, consistency and flexibility... And it just makes sense to have unencumbered licensing for that. There must be some value in pursuing that approach, right?

The Selawik font (only Latin coverage at this point) is part of (bootstrap)-WinJS and is designed to be an open replacement for Segoe UI.

A quick look at the metadata reveals:

Full name: Selawik
Version: 1.01
Copyright: (c) 2015 Microsoft Corporation (www.microsoft.com), with Reserved Font Name Selawik. Selawik is a trademark of Microsoft Corporation in the United States and/or other countries.
License: This Font Software is licensed under the SIL Open Font License, Version 1.1.
License URL: http://opensource.org/licenses/OFL-1.1
Designer: Aaron Bell
Designer URL: http://www.microsoft.com/typography
Manufacturer: Microsoft Corporation
Vendor URL: http://www.microsoft.com/typography
Trademark: Selawik is a trademark of the Microsoft group of companies.


Quite a contrast from the very exclusive licenses attached to the fonts commissioned for Windows...

(Oh and the apparent toponym with an Inupiat name is a nice touch too).


1 May 2015 aleix   » (Journeyer)

Fixing your wife's Nexus 5

DISCLAIMER: I'm not responsible for what happens to your phone if you decide to proceed with the instructions below.

Are you experiencing:

  • Boot loop with Lollipop (Android 5.x).

  • Have downgraded to Kitkat (Android 4.x) and there's no service, the camera crashes, Google Play Store crashes, and Google Earth tells you it needs SD internal storage and crashes.

At this point the phone seems practically unusable: only wifi works, and only with Kitkat; Lollipop doesn't even boot.

It might be that the /persist partition is corrupted. So, don't despair, here's how I fixed it after looking around a bit:

  • Download adb and fastboot. On Ubuntu this is:

    $ sudo apt-get install android-tools-adb android-tools-fastboot
    
  • Power off your phone.

  • Connect your phone to your computer through USB.

  • Boot into the bootloader by pressing volume down and power buttons at the same time.

  • Unlock it:

    $ fastboot oem unlock
    
  • On the phone you must select the option to wipe everything. WARNING: This will wipe all contents on the device.

  • Download TWRP (an improved recovery mode).

  • Flash it:

    $ fastboot flash recovery openrecovery-twrp-2.8.5.2-hammerhead.img
    
  • Reboot again into the bootloader.

  • Once in the bootloader, choose the Recovery mode. It will then start TWRP.

  • On your computer you now type:

    $ adb shell
    

    If everything went well this should give you a root prompt.

  • Fix /persist partition.

    # e2fsck -y /dev/block/platform/msm_sdcc.1/by-name/persist
    
  • Re-create /persist file system.

    # make_ext4fs /dev/block/platform/msm_sdcc.1/by-name/persist
    
  • Exit the adb shell.

  • Download the latest Nexus 5 factory image and untar it.

  • Finally, inside the untarred directory run:

    $ ./flash-all.sh
    
  • Your phone should be fixed!

  • As a last step you might want to lock it again. So, go into the bootloader again and this time run:

    $ fastboot oem lock
    

Good luck!

These are the couple of websites I used. Thank you to the guys who wrote them!

http://www.droid-life.com/2013/11/04/how-to-root-the-nexus-5/
http://forum.xda-developers.com/google-nexus-5/general/guide-to-fix-persist-partition-t2821576

Syndicated 2015-05-01 18:32:52 from aleix's blog

1 May 2015 dkg   » (Master)

Preferred Packaging Practices

I just took a few minutes to write up my preferred Debian packaging practices.

The basic gist is that I like to use git-buildpackage (gbp) with the upstream source included in the repo, both as tarballs (with pristine-tar branches) and including upstream's native VCS history (Joey's arguments about syncing with upstream git are worth reading if you're not already convinced this is a good idea).

I also started using gbp-pq recently -- the patch-queue feature is really useful for at least three things:

  • rebasing your debian/patches/ files when a new version comes out upstream -- you can use all your normal git rebase habits (see the sketch after this list)! and
  • facilitating sending patches upstream, hopefully reducing the divergence, and
  • cherry-picking new as-yet-unreleased upstream bugfix patches into a debian release.
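
The rebase round-trip looks roughly like this (a sketch, using the gbp pq subcommand spelling):

$ gbp pq import    # turn debian/patches/ into commits on a patch-queue branch
$ gbp pq rebase    # replay them onto the updated debian branch, fixing up as you go
$ gbp pq export    # write the result back out as debian/patches/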

My preferred packaging practices document is a work in progress. I'd love to improve it. If you have suggestions, please let me know.

Also, if you've written up your own preferred packaging practices, send me a link! I'm hoping to share and learn tips and tricks around this kind of workflow at debconf 15 this year.

Syndicated 2015-05-01 19:41:00 from Weblogs for dkg

1 May 2015 bagder   » (Master)

talking curl on the changelog

The changelog is the name of a weekly podcast on which the hosts discuss open source and stuff.

Last Friday I was invited to participate and I joined hosts Adam and Jerod for an hour-long episode about curl. It all started as a response to my post on curl 17 years, so we really got into how things started out and how curl has developed through the years, how much time I’ve spent on it, and whether I could mention a really great moment in time that stood out over the years.

The day before, they released the little separate teaser we made about the little-known --remote-name-all command line option that basically makes curl default to doing -O on all given URLs.
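
For example (placeholder URLs):

$ curl --remote-name-all https://example.com/one.tar.gz https://example.com/two.tar.gz

…which saves both files under their remote names, exactly as if each URL had been given its own -O.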

The full length episode can be experienced in all its glory here: https://changelog.com/153/

Syndicated 2015-05-01 09:54:16 from daniel.haxx.se

30 Apr 2015 caolan   » (Master)

gtk3 notebook theming

Starting to work on the gtk3 theming now. Here's a before and after shot of today's notebook color and font theming improvements.

Before:

After:
And a random native gtk3 notebook for comparison


Syndicated 2015-04-30 15:19:00 (Updated 2015-04-30 15:20:14) from Caolán McNamara

30 Apr 2015 gary   » (Master)

Remote debugging with GDB

→ originally posted on developerblog.redhat.com

This past few weeks I’ve been working on making remote debugging in GDB easier to use. What’s remote debugging? It’s where you run GDB on one machine and the program being debugged on another. To do this you need something to allow GDB to control the program being debugged, and that something is called the remote stub. GDB ships with a remote stub called gdbserver, but other remote stubs exist. You can write them into your own program too, which is handy if you’re using minimal or unusual hardware that cannot run regular applications… cellphone masts, satellites, that kind of thing. I bet you didn’t know GDB could do that!

If you’ve used remote debugging in GDB you’ll know it requires a certain amount of setup. You need to tell GDB how to access your program’s binaries with a set sysroot command, you need to obtain a local copy of the main executable and supply that to GDB with a file command, and you need to tell GDB to commence remote debugging with a target remote command.

Until now. Now all you need is the target remote command.

This new code is really new. It’s not in any GDB release yet, let alone in RHEL or Fedora. It’s not even in the nightly GDB snapshot, it’s that fresh. So, with the caveat that none of these examples will work today unless you’re using a Git build, here’s some things you can do with gdbserver using the new code.

Here’s an example of a traditional remote debugging session (the commands you type are the ones following the prompts). In one window:

abc$ ssh xyz.example.com
xyz$ gdbserver :9999 --attach 5312
Attached; pid = 5312
Listening on port 9999

gdbserver attached to process 5312, stopped it, and is waiting for GDB to talk to it on TCP port 9999. Now, in another window:

abc$ gdb -q
(gdb) target remote xyz.example.com:9999
Remote debugging using xyz.example.com:9999
...lots of messages you can ignore...
(gdb) bt
#0 0x00000035b5edf098 in *__GI___poll (fds=0x27467a0, nfds=8,
timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:83
#1 0x00000035b76449f9 in ?? () from target:/lib64/libglib-2.0.so.0
#2 0x00000035b76451a5 in g_main_loop_run ()
from target:/lib64/libglib-2.0.so.0
#3 0x0000003dfd34dd17 in gtk_main ()
from target:/usr/lib64/libgtk-x11-2.0.so.0
#4 0x000000000040913d in main ()

Now you have GDB on one machine (abc) controlling process 5312 on another machine (xyz) via gdbserver. Here I did a backtrace, but you can do pretty much anything you can do with regular, non-remote GDB.

I called that a “traditional” remote debugging session because that’s how a lot of people use this, but there’s a more flexible way of doing things if you’re using gdbserver as your stub. GDB and gdbserver can communicate over stdio pipes, so you can chain commands, and the new code that removes all the setup you used to need makes this really nice. Let’s do that first example again, with pipes this time:

abc$ gdb -q
(gdb) target remote | ssh -T xyz.example.com gdbserver - --attach 5312
Remote debugging using | ssh -T xyz.example.com gdbserver - --attach 5312
Attached; pid = 5312
Remote debugging using stdio
...lots of messages...
(gdb)

The “-” in gdbserver’s argument list replaces the “:9999” in the previous example. It tells gdbserver we’re using stdio pipes rather than TCP port 9999. As well as configuring everything with a single command, this has the advantage that the communication goes through ssh; there’s no security in GDB’s remote protocol, so it’s not the kind of thing you want to do over the open internet.

What else can you do with this? Anything you can do through stdio pipes! You can enter Docker containers:

(gdb) target remote | sudo docker exec -i e0c1afa81e1d gdbserver - --attach 58
Remote debugging using | sudo docker exec -i e0c1afa81e1d gdbserver - --attach 58
Attached; pid = 58
Remote debugging using stdio
...

Notice how I slipped sudo in there too. Anything you can do over stdio pipes, remember? If you’re using Kubernetes you can use kubectl exec, or with OpenShift osc exec.

gdbserver can do more than just attach; you can start programs with it too:

(gdb) target remote | sudo docker exec -i e0c1afa81e1d gdbserver - /bin/sh
Remote debugging using | sudo docker exec -i e0c1afa81e1d gdbserver - /bin/sh
Process /bin/sh created; pid = 89
stdin/stdout redirected
Remote debugging using stdio
...

Or you can start it without any specific program, and then tell it what to do from within GDB. This is by far the most flexible way to use gdbserver. You can control more than one process, for example:

(gdb) target extended-remote | ssh -T root@xyz.example.com gdbserver --multi -
Remote debugging using | gdbserver --multi -
Remote debugging using stdio
(gdb) attach 774
...messages...
(gdb) add-inferior
Added inferior 2
(gdb) inferior 2
[Switching to inferior 2 [<null>] (<noexec>)]
(gdb) attach 871
...messages...
(gdb) info inferiors
Num Description Executable
* 2 process 871 target:/usr/sbin/httpd
  1 process 774 target:/usr/libexec/mysqld

Ready to debug that connection issue between your webserver and database?

Syndicated 2015-04-30 13:14:25 from gbenson.net

30 Apr 2015 mikal   » (Journeyer)

Coding club day one: a simple number guessing game in python

I've recently become involved in a new computer programming club at my kids' school. The club runs on Friday afternoons after school and is still very new so we're still working through exactly what it will look like long term. These are my thoughts on the content from this first session. The point of this first lesson was to approach a programming problem where every child stood a reasonable chance of finishing in the allotted 90 minutes. Many of the children had never programmed before, so the program had to be kept deliberately small. Additionally, this was a chance to demonstrate how literal computers are about the instructions they're given -- there is no room for intuition on the part of the machine here, it does exactly what you ask of it.

The task: write a python program which picks a random number between zero and ten. Ask the user to guess the number the program has picked, with the program telling the user if they are high, low, or right.

We then brainstormed the things we'd need to know how to do to make this program work. We came up with:
  • How do we get a random number?
  • What is a variable?
  • What are data types?
  • What is an integer? Why does that matter?
  • How do we get user input?
  • How do we do comparisons? What is a conditional?
  • What are the possible states for the game?
  • What is an exception? Why did I get one? How do I read it?


With that done, we were ready to start programming. This was done with a series of steps that we walked through as a group -- let's all print hello world. Now let's generate a random number and print it. Ok, cool, now let's do input from a user. Now how do we compare that with the random number? Finally, how do we do a loop which keeps prompting until the user guesses the random number?

For each of these a code snippet was written on the whiteboard and explained. It was up to the students to put them together into a program which actually works.

Due to limitations in the school's operating environment (no local python installation and repl.it not working due to firewalling) we used codeskulptor.org for this exercise. The code that the kids ended up with looks like this:
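
(The listing didn't survive syndication; here is a minimal plain-Python sketch matching the description above -- the CodeSkulptor original may have differed slightly.)

import random

# pick a random number between zero and ten
number = random.randint(0, 10)
guess = None

# keep prompting until the user guesses the number
while guess != number:
    guess = int(input('What is your guess? '))
    if guess < number:
        print('Too low!')
    elif guess > number:
        print('Too high!')
    else:
        print('You got it right!')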

29 Apr 2015 dorward   » (Journeyer)

A self-indulgent rant about software with a happy ending

Last night I volunteered to convert a couple of documents to PDF for a friend.

‘It'll be easy’, I thought, ‘it'll only take a few minutes’.

The phrase "Ha" comes to mind.

Adobe Acrobat can't import DOCX files. This wasn't a huge surprise and I was prepared.

One quick trip to Pages later and … one document came out blank while the other was so badly misaligned that it was unusable.

‘Never mind’, thought I, ‘there are other options’.

OpenOffice rendered both DOCX files as blank. This was not progress.

‘Fine, fine, let's see what MS Office is like these days’.

There was a free trial of the upcoming Office for Mac available. A 2.5GB download later and I had a file which would, when double clicked, make an icon appear in the dock for about two seconds before quitting.

At this point, I admit I was getting frustrated.

Off to Office 365 I went. I'd even have gone so far as to give Microsoft my £5.95 for a month of access, if they'd let me log in. Instead I was presented with a blank page after entering my Live credentials.

I got the same result after switching web browser to one that wasn't laden down with the features that make the WWW bearable.

Did Microsoft not want my money?

(The more I deal with DOCX, the less I like it).

By this point, it was past midnight, I was running out of options, and I didn't want to let my friend down.

Then I found the rather wonderful convertonelinefree.com (Gosh, this paragraph looks a bit spammy, it isn't though.) and I had the DOCX files converted a minute later.

So time to talk about Adobe software… in a blog post where I've been ranting about software. Brace yourselves…

I really like Acrobat CC. (Has the sky fallen? No? OK, then. Let us continue.)

I don't know what someone who has used earlier versions a lot will think of the dramatic UI changes, but as an occasional user, it is really rather nice.

It combined my two files without a hitch and did a near perfect job of identifying all the form fields I wanted to be editable.

The step-by-step UI is rather nice and makes it easy to find the various tools to edit the document.

Syndicated 2015-04-29 08:17:05 from Dorward's Ramblings

27 Apr 2015 Stevey   » (Master)

Validating puppet manifests via git hooks.

It looks like I'll be spending a lot of time working with puppet over the coming weeks.

I've setup some toy deployments on virtual machines, and have converted several of my own hosts to using it, rather than my own slaughter system.

When it comes to puppet some things are good, and some things are bad, as expected, and as with any similar tool (even my own). At the moment I'm just aiming for consistency and making sure I can control all the systems - BSD, Debian GNU/Linux, Ubuntu, Microsoft Windows, etc.

Little changes are making me happy though - rather than using a local git pre-commit hook to validate puppet manifests I'm now doing that checking on the server-side via a git pre-receive hook.

Doing it on the server-side means that I can never forget to add the local hook, and future colleagues can similarly never make this mistake and commit malformed puppetry.
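
A minimal pre-receive hook along those lines might look like this (a sketch: it assumes puppet is installed on the git server, and skips edge cases such as branch creation and deletion):

#!/bin/sh
# pre-receive: reject pushes containing puppet manifests that fail to validate
tmp=$(mktemp --suffix=.pp)
trap 'rm -f "$tmp"' EXIT
while read oldrev newrev refname; do
    for f in $(git diff --name-only "$oldrev" "$newrev" -- '*.pp'); do
        git show "$newrev:$f" > "$tmp"
        if ! puppet parser validate "$tmp"; then
            echo "rejected: $f fails 'puppet parser validate'" >&2
            exit 1
        fi
    done
done
exit 0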

It is almost a shame there isn't a decent collection of example git-hooks, for doing things like this puppet-validation. Maybe there is and I've missed it.

It only crossed my mind because I've had to write several of these recently - a hook to rebuild a static website when the repository has a new markdown file pushed to it, a hook to validate syntax when pushes are attempted, and another hook to deny updates if the C-code fails to compile.

Syndicated 2015-04-27 00:00:00 from Steve Kemp's Blog

27 Apr 2015 mjg59   » (Master)

Reducing power consumption on Haswell and Broadwell systems

Haswell and Broadwell (Intel's previous and current generations of x86) both introduced a range of new power saving states that promised significant improvements in battery life. Unfortunately, the typical experience on Linux was an increase in power consumption. The reasons why are kind of complicated and distinctly unfortunate, and I'm at something of a loss as to why none of the companies who get paid to care about this kind of thing seemed to actually be caring until I got a Broadwell and looked unhappy, but here we are so let's make things better.

Recent Intel mobile parts have the Platform Controller Hub (Intel's term for the Southbridge, the chipset component responsible for most system i/o like SATA and USB) integrated onto the same package as the CPU. This makes it easier to implement aggressive power saving - the CPU package already has a bunch of hardware for turning various clock and power domains on and off, and these can be shared between the CPU, the GPU and the PCH. But that also introduces additional constraints, since if any component within a power management domain is active then the entire domain has to be enabled. We've pretty much been ignoring that.

The tldr is that Haswell and Broadwell are only able to get into deeper package power saving states if several different components are in their own power saving states. If the CPU is active, you'll stay in a higher-power state. If the GPU is active, you'll stay in a higher-power state. And if the PCH is active, you'll stay in a higher-power state. The last one is the killer here. Having a SATA link in a full-power state is sufficient to keep the PCH active, and that constrains the deepest package power savings state you can enter.

SATA power management on Linux is in a kind of odd state. We support it, but we don't enable it by default. In fact, right now we even remove any existing SATA power management configuration that the firmware has initialised. Distributions don't enable it by default because there are horror stories about some combinations of disk and controller and power management configuration resulting in corruption and data loss and apparently nobody had time to investigate the problem.
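
(For reference, the policy knob Linux already exposes per SATA host looks like this; host numbers vary, and given the horror stories above, flip it with care:)

$ cat /sys/class/scsi_host/host0/link_power_management_policy
max_performance
$ echo min_power > /sys/class/scsi_host/host0/link_power_management_policy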

I did some digging and it turns out that our approach isn't entirely inconsistent with the industry. The default behaviour on Windows is pretty much the same as ours. But vendors don't tend to ship with the Windows AHCI driver; they replace it with the Intel Rapid Storage Technology driver - and it turns out that that has a default-on policy. But to make things even more awkward, the policy implemented by Intel doesn't match any of the policies that Linux provides.

In an attempt to address this, I've written some patches. The aim here is to provide two new policies. The first simply inherits whichever configuration the firmware has provided, on the assumption that the system vendor probably didn't configure their system to corrupt data out of the box[1]. The second implements the policy that Intel use in IRST. With luck we'll be able to use the firmware settings by default and switch to the IRST settings on Intel mobile devices.

This change alone drops my idle power consumption from around 8.5W to about 5W. One reason we'd pretty much ignored this in the past was that SATA power management simply wasn't that big a win. Even at its most aggressive, we'd struggle to see 0.5W of saving. But on these new parts, the SATA link state is the difference between going to PC2 and going to PC7, and the difference between those states is a large part of the CPU package being powered up.

But this isn't the full story. There's still work to be done on other components, especially the GPU. Keeping the link between the GPU and an internal display panel active is both a power suck and requires additional chipset components to be powered up. Embedded DisplayPort 1.3 introduced a new feature called Panel Self-Refresh that permits the GPU and the screen to negotiate dropping the link, leaving it up to the screen to maintain its contents. There are patches to enable this on Intel systems, but it's still not turned on by default. Doing so increases the amount of time spent in PC7 and brings corresponding improvements to battery life.
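
(On kernels carrying those patches, PSR sits behind an i915 module parameter; a sketch, not a recommendation, given that it's off by default for a reason:)

$ cat /sys/module/i915/parameters/enable_psr    # 0 means PSR is off
0
# opt in by booting with i915.enable_psr=1 on the kernel command line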

This trend is likely to continue. As systems become more integrated we're going to have to pay more attention to the interdependencies in order to obtain the best possible power consumption, and that means that distribution vendors are going to have to spend some time figuring out what these dependencies are and what the appropriate default policy is for their users. Intel's done the work to add kernel support for most of these features, but they're not the ones shipping it to end-users. Let's figure out how to make this right out of the box.

[1] This is not necessarily a good assumption, but hey, let's see


Syndicated 2015-04-27 18:33:44 from Matthew Garrett

25 Apr 2015 tampe   » (Journeyer)

The escape of the batch curse

Consider the following problem. Assume that we can generate two random sequences l1,l2 of numbers between 0 and 9. Take the transform that maps each number to the distance until it appears again, modulo 10; call this map M. Let Max be the transform of a sequence that takes the max of the current value and the next. Let Plus be the elementwise summation of two such sequences modulo 10. We also assume that we know that the second sequence, l2, has the property that, elementwise,


Max(M(l1)) .leq. Max(M(l2)),

how do we go about generating


M(Plus(Max(M(l1)),Max(M(l2)))).

The idea of the solution I would like to play with is to generate a special variable that, when you create it, has no known value, but that you can place in the right order; then, when all its dependants are available, the result will be executed. I've played with these ideas a long time ago here on this blog, but now there is the addition of backtracking that comes into play, and we use guile-log and prolog. So what is the main trick that enables this?

Define two predicates, delay and force that is used as follows


plusz(Z,X,Y) :- delay(plusz(Z,X,Y),X,Y) ;
                (ZZ is X + Y, force(Z,ZZ)).

We want to take the addition of X and Y. If X and Y both have been forced, delay will fail; else it will delay the evaluation of plusz(Z,X,Y) and execute that goal at the time when both have been forced. To put the value in Z we need to execute special code that forces the value, in case Z as well has been blessed as a delayed value. That's it: it's defined in about 50 rows of guile-log code, nothing huge.

The setup to generate a sequence is to maintain state and define transforms that initiate the state and update the state. Given such transforms one has enough to generate the sequence, so one needs to make sense of the following idioms


next(S,SS) :- ..
start(S) :- ..

Let's see how it can look for our example in prolog:


next_all2([Z1,Z2,Id,S1,S2,Z],[ZZ1,ZZ2,IId,SS1,SS2,ZZ]) :-
next_M(Z1,[R1,P1,C1|_],ZZ1),
next_M(Z2,[R2,P2,C2|_],ZZ2),
moving_op(2,maxz,0,U1,C1,S1,SS1),
moving_op(2,maxz,0,U2,C2,S2,SS2),
fail_if(P2,(U1 .leq. U2)),
plus10z(C, U1 ,U2),
next_M(Z,[_,_,CZ|_],ZZ),
plusz(IId,Id ,C),
writez(_,IId,C,R1,R2).



next_M(Z,X,ZZ)

next_M(Z,X,ZZ) will generate the sequence M(l), e.g. it's a construct that generates state information Z->ZZ with the current value X=[l_i,M(l)_i,Redo ...], where l_i is the i'th generated random value, M(l)_i is the number of steps before l_i appears again in the sequence modulo 10, and Redo is the backtracking object, so that everything restarts from the generation of random value l_i.


moving_op(N,Op,InitSeed,MaxRes,ValIn,S,SS)

N is the length of the window, Op is the reducing operator op(Z,X,Y), InitSeed is the initial value of the reduction, MaxRes is the current result of e.g. the max operation on the window (perhaps delayed), ValIn is the value in the sequence, S is the state in and SS is the state out.


fail_if(P,(U1 .leq. U2),U1,U2)

when U1 and U2 have been forced, this fails if the condition U1 .leq. U2 does not hold, pruning branches that violate the assumed property


plus10z(C, U1 ,U2),


plus modulo 10 of Max(M(l1)) and Max(M(l2))


plusz(IId,Id ,C),

This is a convolution of the generation of solutions C; the result IId_i will be non-delayed if and only if all C_k, for k up to i, have been forced.


writez(_,IId,C,R1,R2).

Write out the result C and the generated random values R1 and R2 for l1 and l2.

As you see, this approach makes sure the combination of values stays in the right synchronization, and the solution lets you decompose the problem into reusable, more abstract components that are quite easily sewn together. That's the power of this idea: if you want to change the algorithm it is easy to do, and the number of bugs will be smaller due to the composability of the approach. Neat! Also, this approach is memory safe due to the neat gc that guile-log has for logical variables, so everything will work on sequences as long as you are prepared to wait for them.

Cheers!

25 Apr 2015 mikal   » (Journeyer)

Tuggeranong Trig (again)

The cubs at my local scout group are interested in walking to a trig, but have some interesting constraints around mobility for a couple of their members. I therefore offered to re-walk Tuggeranong Trig in Oxley with an eye out for terrain. I think this walk would be very doable for cubs -- it's 650 meters with only about 25 meters of vertical change. The path is also ok for a wheelchair, I think.


Interactive map for this route.

Tags for this post: blog pictures 20150415-tuggeranong_trig photo canberra bushwalk trig_point
Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger


Syndicated 2015-04-24 18:04:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

24 Apr 2015 bagder   » (Master)

curl on the NASDAQ tower

Apigee posted this lovely picture over at twitter. A curl command line on the NASDAQ tower.


Syndicated 2015-04-24 16:54:47 from daniel.haxx.se

23 Apr 2015 crhodes   » (Master)

els2015 it happened

Oh boy.

It turns out that organizing a conference is a lot of work. Who’d have thought? And it’s a lot of work even after accounting for the benefits of an institutional Conference Services division, who managed things that only crossed my mind very late: signage, extra supplies for college catering outlets – the kinds of things that are almost unnoticeable if they’re present, but whose absence would cause real problems. Thanks to Julian Padget, who ran the programme, and Didier Verna, who handled backend-financials and the website; but even after all that there were still a good number of things I didn’t manage to delegate – visa invitation letters, requests for sponsorship, printing proceedings, attempting to find a last-minute solution for recording talks after being reminded of it on the Internet somewhere... I’m sure there is more (e.g. overly-restrictive campus WiFi, blocking outbound ssh and TLS-enabled IMAP) but it’s beginning to fade into a bit of a blur. (An enormous “thank you” to Richard Lewis for stepping in to handle recording the talks as best he could at very short notice).

And the badges! People said nice things about the badges on twitter, but... I used largely the same code for the ILC held in Cambridge in 2007, and the comment passed back to me then was that while the badges were clearly going to become collectors’ items, they failed in the primary purpose of a badge at a technical conference, which is to give to the introvert with poor facial recognition some kind of clue who they are talking to: the font size for the name was too small. Inevitably, I got round to doing the badges at the last minute, and between finding the code to generate PDFs of badges (I’d lost my local copy, but the Internet had one), finding a supplier for double-sided sheets of 10 85x54mm business cards, and fighting with the office printer (which insisted it had run out of toner) the thought of modifying the code beyond the strictly necessary didn’t cross my mind. Since I asked for feedback in the closing session, it was completely fair for a couple of delegates to say that the badges could have been better in this respect, so in partial mitigation I offer a slightly cleaned-up and adjusted version of the badge code with the same basic design but larger names: here you go (sample output). (Another obvious improvement suggested to me at dinner on Tuesday: print a list of delegate names and affiliations and pin it up on a wall somewhere).

My experience of the conference is likely to be atypical – being the responsible adult, I did have to stay awake at all times, and do some of the necessary behind-the-scenes stuff while the event was going on. But I did get to participate; I listened to most of most of the talks, with particular highlights for me being Breanndán Ó Nualláin’s talk about a DSL for graph algorithms, Martin Cracauer’s dense and technical discussion of conservative garbage collection, and the demo session on Tuesday afternoon: three distinct demos in three different areas, each both well-delivered and with exciting content. Those highlights were merely the stand-out moments for me; the rest of the programme was pretty good, too, and it looked like there were some good conversations happening in the breaks, over lunch, and at the banquet on Monday evening. We ended up with 90 registrations all told, with people travelling in from 18 other countries; the delegate with the shortest distance to travel lived 500m from Goldsmiths; the furthest came from 9500km away.

The proceedings are now available for free download from the conference website; some speakers have begun putting up their talk materials, and in the next few weeks we’ll try to collect as much of that as we can, along with getting release permissions from the speakers to edit and publish the video recordings. At some point there will be a financial reckoning, too; Goldsmiths has delivered a number of services on trust, while ELSAA has collected registration fees in order to pay for those services – one of my next actions is to figure out the bureaucracy to enable these two organizations to talk to each other. Of course, Goldsmiths charges in pounds, while ELSAA collected fees in euros, and there’s also the small matter of cross-border sales tax to wrap my head around... it’s exciting being a currency speculator!

In summary, things went well – at least judging by the things people said to my face. I’m not quite going to say “A+ would organize again”, because it is a lot of work – but organizing it once is fine, and a price worth paying to help sustain and to contribute to the communication between multiple different Lisp communities. It would be nice to do some Lisp programming myself some day: some of the stuff that you can do with it is apparently quite neat!

Syndicated 2015-04-23 10:47:10 from notes

21 Apr 2015 dmarti   » (Master)

Why ad blockers don't have to do content marketing

From the Condé Nast "User Agreement & Privacy Policy" page:

The use of Tracking Technologies by third parties is subject to their own privacy policies, not this Privacy Policy, and we have no responsibility or liability in connection therewith. If you do not want the services that Tracking Technologies provide, you may be able to opt-out by visiting http://www.aboutads.info.

Sounds like checking into a hotel and getting this...

Feeding by third-party insects in guest rooms is subject to their own policies, and we have no responsibility or liability in connection therewith. If you wish to opt out of feeding by third party insects, here is the card of a really lousy exterminator we know, who only gets some of them but that's your problem.

Ad blockers don't have to do content marketing, because publishers are doing it for them.

But there's a way for publishers to opt out of the whole tracking vs. blocking race to the bottom, and neither surveillance marketers nor conventional ad blockers have it. More: Ad blocking, bullshit and a point of order

Syndicated 2015-04-19 13:49:00 from Don Marti

18 Apr 2015 dmarti   » (Master)

The end of Please Turn Off Your Ad Blocker

More news from the ongoing malvertising outbreak.

These aren't skeevy ads on low-reputation pirate sites. These attacks are coming in on big-budget sites such as AOL's Huffington Post, and included in fake ads for real brands such as Hugo Boss. They're using A-list adtech companies. Read the articles. Nasty stuff. The ongoing web ad fraud problem is hitting users now, not just advertisers.

So far the response from the ad networks has been a few whacks at the problem accounts. So I can make the safest kind of prediction: someone made money doing something not very risky, not much has changed, so they'll do it again and others will copy them. Want to bet against me?

Users already trust web ads less than any other ad medium. Malvertising takes a form of advertising that's a bad deal for the user and makes it worse. (If sewer rats are coming out of the commode, users are going to put a brick on the lid. If the rats have rabies, make that two bricks.)

The more malvertising that comes along, the more that the "please turn off your ad blocker" message on web sites is going to look not just silly, but irresponsible or just plain scary. "Turn off your ad blocker" sounds like the web version of "If you can't open lottery-winner-wire-transfer.zip, turn off your antivirus."

Time to rewrite the "turn off your ad blocker" messages and talk about a sensible alternative. Instead of running a general ad blocker (and encouraging the "acceptable ads" racket) or running entirely unprotected, the hard part is just starting: how to educate users about third-party content protection that works for everyone: users, sites, and responsible advertisers.

Bonus links

Sherwin Siy: IP Rights Aren’t a License to Kill Devices (And No, Fine Print Doesn’t Make It OK)

Planet Debian: Joey Hess: a programmable alarm clock using systemd

Calvin Spealman: The Curl Pipe

@feedly: Why we retired the feedly URL shortener

James Gingell: Where Did Soul-Sucking Office-Speak Come From?

Glyn Moody: China Turns From 'Pirate' Nation To Giant Patent Troll

Joe Wein: Disclaimers by spammers

SMBlog -- Steve Bellovin's Blog: If it Doesn't Exist, it Can't be Abused

phobos: Partnering with Mozilla

Eryn Paul: Why Germans Work Fewer Hours But Produce More: A Study In Culture

The Tech Block: The tech worker shortage doesn’t really exist

Heidi Moore: The readers we can’t friend

Lary Wallace: Why Stoicism is one of the best mind-hacks ever devised

Steven Sinofsky: Why Remote Engineering Is So Difficult!?#@%

SysAdmin1138: Application firewalls for your phone

Syndicated 2015-04-18 14:57:06 from Don Marti

18 Apr 2015 Stevey   » (Master)

skx-www upgraded to jessie

Today I upgraded my main web-host to the Jessie release of Debian GNU/Linux.

I performed the upgrade by changing wheezy to jessie in the sources.list file, then ran:

apt-get update
apt-get dist-upgrade

For some reason this didn't upgrade my kernel, which remained the 3.2.x version. That failed to boot, due to some udev/systemd issues (lots of "waiting for job: udev /dev/vda", etc, etc). To fix this I logged into my KVM-host, chrooted into the disk image (which I mounted via the use of kpartx), and installed the 3.16.x kernel, before rebooting into that.
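
(Roughly, the rescue dance looked like this; a sketch with hypothetical volume and partition names:)

# on the KVM host
$ kpartx -av /dev/vg0/guest-disk                  # map the image's partitions
$ mount /dev/mapper/vg0-guest--disk1 /mnt         # mount the guest's root filesystem
$ for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
$ chroot /mnt apt-get install linux-image-amd64   # pull in the 3.16.x kernel
$ for fs in dev proc sys; do umount /mnt/$fs; done
$ umount /mnt && kpartx -dv /dev/vg0/guest-disk   # unmap before booting the guest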

All my websites seemed to be OK, but I made some changes regardless. (This was mostly for "neatness", using Debian packages instead of gems, and installing the attic package rather than keeping the source-install I'd made to /opt/attic.)

The only surprise was the significant upgrade of the Net::DNS perl-module. Nothing that a few minutes work didn't fix.

Now that I've upgraded, the SSL-issue I had with redirections is no longer present. So it was a worthwhile thing to do.

Syndicated 2015-04-18 00:00:00 from Steve Kemp's Blog

17 Apr 2015 gary   » (Master)

Judgement Day

GDB will be the weapon we fight with if we accidentally build Skynet.

Syndicated 2015-04-17 20:07:19 from gbenson.net

17 Apr 2015 titus   » (Journeyer)

The PyCon 2015 Ally's Workshop

At PyCon 2015, I had the pleasure of attending the Ally Skills Workshop, organized by @adainitiative (named after Ada Lovelace).

The workshop was a 3 hour strongly guided discussion centering around 4-6 person group discussion of short scenarios. There's a guide to running them here, although I personally would not have wanted to run one without attending one first!

I attended the workshop for at least three reasons --

First, I want to do better myself. I have put some effort into (and received a lot of encouragement for) making my lab an increasingly open and welcoming place. While I have heard concerns about being insufficiently critical and challenging of bad ideas in science (and I have personally experienced a few rather odd situations where obviously bad ideas weren't called out in my past labs), I don't see any inherent conflict between being welcoming and being intellectually critical - in fact, I rather suspect they are mutually supportive, especially for the more junior people.

But, doing better is surprisingly challenging; everyone needs a mentor, or at least guideposts. So when I heard about this workshop, I leapt at the chance to attend!

Second, I am interested in connecting these kinds of things to my day job in academia, where I am now a professor at UC Davis. UC Davis is the home of the somewhat notorious Jonathan Eisen, notorious among other things for boycotting and calling out conferences that have low diversity. UC Davis also has an effort to increase diversity at the faculty level, and I think that this is an important effort. I'm hoping to be involved in this when I actually take up residence in Davis, and learning to be a male ally is one way to help. More, I think that Davis would be a natural home to some of these ally workshops, and so I attended the Ally Skills workshop to explore this.

And third, I was just curious! It's surprisingly tricky to confront and talk about sexism effectively, and I thought seeing how the pros did it would be a good way to start.

Interestingly, 2/3 of my lab attended the workshop, too - without me requesting it. I think they found it valuable as well.

The workshop itself

Valerie Aurora ran the workshop, and it's impossible to convey how good it was, but I'll try by picking out some choice quotes:

"You shouldn't expect praise or credit for behaving like a decent human being."

"Sometimes, you just need a flame war to happen." (paraphrase)

"LPT: Read Captain Awkward. And read the comments."

"It's not up to the victim whether you enforce your code of conduct."

"The physiological effects of alcohol are actually limited, and most effects of alcohol are socially and/or culturally mediated."

"Avoid rules lawyering. I don't now if you've ever worked with lawyers, but software engineers are almost as bad."

"One problem for male allies is the assumption that you are only talking to a woman because you are sexually interested in them."

"Trolls are good at calibrating their level of awfulness to something that you will feel guilty about moderating."

Read the blog post "Tone policing only goes one way."


Overall, a great experience and something I hope to help host more of at UC Davis.

--titus

Syndicated 2015-04-16 22:00:00 from Living in an Ivory Basement

17 Apr 2015 katzj   » (Master)

Looking back on a day in the mud – 2015 Rasputitsa

Back in mid-January, the weather in New England had been unseasonably nice and it was looking like we were going to have a mild winter. I had completed the Rapha Festive 500 at the end of the year and felt like it would be a good winter of riding although it was starting to get cold in January. Someone mentioned the Rasputitsa gravel race (probably Chip) and I thought it looked like it could be fun. There was one little blizzard as we neared the end of January (and the registration increase!) but things still seemed okay. So I signed up, thinking it would help keep me riding even through the cold. Little did I know that we were about to get hit with a record amount of snow basically keeping me off the bike for six weeks. So March rolls around, I’ve barely ridden and Rasputitsa is a month away. Game. On.

I stepped up my riding and by a week ago, I started to feel I’d at least be able to suffer through things. But everyone I’d been talking with about driving up was bailing, so I started thinking along the same lines. But on Friday afternoon, my friend Kate reminded me: “What would Jens do?” And that settled it, I was going.

I drove up and spent the night in Lincoln, NH on Friday night to avoid having to do a 3 hour drive on Saturday morning before the race. I woke up Saturday morning, had some hotel breakfast and drove the last hour to Burke. As I stepped out of the car, I was hit by a blast of cold wind and snow flurries were starting to fall. And I realized that my vest and my jacket hadn’t made the trip with me, instead being cozy in my basement. Oops.

I finished getting dressed, spun down to pick up my number and then waited around for the start. It was cold but I tried to at least keep walking around, chatting with folks I knew and considering buying another layer from one of the vendors, although I decided against.

It’s overcast and chilly as we line up at the start

But then we lined up and, with what was in retrospect not my wisest choice of the day, I decided to start with some friends of mine who were near the back. But then we started and I couldn’t just hang out at the back and enjoy a nice ride. Instead, I started picking my way forward through the crowd. My heart rate started to go up, just as the road did, though my Garmin wasn’t picking up the HR strap. The nice thing was that this also had the effect of warming me up, so I didn’t feel cold. The roads started out smooth but quickly got to washed-out dirt, potholes and peanut-butter-thick mud. But it was fun… I hadn’t spent time on roads like this before but it was good. I got into a rhythm where on the flats and climbs, I would push hard, and then on some of the downhills, I would be a little sketched out and take it slower. So I’d pass people going up, they’d pass me going down. But I was making slow progress forward.

Until Cyberia. I was feeling strong. I was 29.3 miles into the 40. And I thought that I was going to end up with a pretty good time. After a section of dirt that was all uphill, we took a turn onto a snow-covered hill. I was able to ride about 100 feet before hopping off and starting to walk the bike uphill. And that is when the pain began. My calves pulled and hurt. I couldn’t go that quickly. The ruts were hard to push the bike through. And it kept going. At the bottom of the hill, they had said 1.7 miles to the feed zone… I thought I’d ride some of it. But no, I walked it all. Slowly. Painfully. And bonking while I did it: I needed to eat by the time I got there, and I couldn’t walk, push my bike and eat at the same time. I made it to the top and thought that maybe I could ride down. But no, more painful walking. It was an hour of suffering. It wasn’t pretty. But I did it. But I was passed by oh so many people. It was three of the hardest miles I’ve ever had.

The slow and painful slog through the snow.
Photo courtesy of @jarlathond

I reached the bottom where the road began again and I got back on my bike. They said we had 7.5 miles to go but I was delirious. I tried to eat and drink and get back into pedaling.  I couldn’t find my rhythm. I was cold. But I kept going, because suffering is something I can do. So I managed to basically hold on to my position, although I certainly didn’t make up any ground. I took the turn for 1K to go, rode 200 meters and saw the icy, snowy chute down to the finish… I laughed and I carefully worked my way down it and then crossed the finish line. 4:12:54 on the clock… a little above the 4 hours I hoped for but the hour and 8 minutes that I spent on Cyberia didn’t help me.

Yep, ended up with some mud there.

I went back to the car, changed and took advantage of the plentiful and wonderful food on offer before getting back in the car and starting the three hour drive back home.

Mmm, all the food

So how was it? AWESOME. One of the most fun days I’ve had on the bike. Incredibly well-organized and run. Great food both on the course (Untappd maple syrup handup, homemade cookie handup, homemade doughnuts at the top of Cyberia, Skratch Labs bottle feeds) and after. The people who didn’t come missed out on a great day on a great course put on by great people. I’m already thinking that I probably will have to do the Dirty 40 in September. As for next year? Well, with almost a week behind me, I’m thinking that I’ll probably tackle Rasputitsa again… although I might go for more walkable shoes than the winter boots I wore this year and try to be a bit smarter about Cyberia. But what a great event to start the season!

Fire. Chainsaws. Alf.
Basically, all of Vermont’s finest on offer.

Syndicated 2015-04-17 12:55:12 from Jeremy's Thoughts

16 Apr 2015 hypatia   » (Journeyer)

Wednesday 15 April 2015

So many things about travel are only things I remember when I travel. Which is a shame, because some of those things I forget when not traveling are bad things about travel and I wouldn’t spend so much of the rest of my time puttering around being all “why am I so mysteriously averse to traveling? how strange!” Sure, I never forget the things about airports and aircraft being hostile to all things normal and human, I remember my three continuous days of insomnia after getting home from Romania in 2007, things like that. But that’s physical discomfort. I forget the emotions. I don’t remember the defensiveness of wanting to spend multiple consecutive days in dark hotel rooms (probably culture shock), I don’t remember the constant loneliness that nicely counterbalances that so that I’m unhappy even in the hotel rooms and I don’t remember the homesickness on top of it all.

I don’t remember the punch in the gut of “almost everything I love best in the world is somewhere else entirely”.

These memories obviously brought to you by being in San Francisco rather than Sydney right now. How else would I be accessing them? And you shouldn’t think of this as an unusual trip for me, this is pretty much every damn time. Not non-stop of course, or I probably would remember better why I have mixed feelings about travel. No. It’s an acute problem and I’m right in the target zone for it: more than halfway done with the travel, mostly done with the reason for the travel, why can’t I go home now?

As I’ve been telling people, last Thursday night was my first night away from A, ever. That Friday night through to this coming Monday night were/will be the second through twelfth nights, respectively. So that’s not helping either. Apparently she’s been pretty fine with it, which is in character. She doesn’t mind when we get babysitters, she doesn’t mind being dropped at daycare, it turns out she doesn’t noticeably mind that I vanished a week ago and that a couple of days later, V vanished too. (He’s gone to visit my parents.) C’est la vie?

On the bright side, I’ve finally been to Montreal! Which is actually part of this whole sad pattern too: I get this way worse when I travel as far as the US East Coast, or Europe, than I do otherwise. But still, I’ve finally been to Montreal! I didn’t really understand their seasons until I was flying in and I noticed that the waterways were still iced up, which I have never actually seen before anywhere, let alone anywhere in the middle of spring. I didn’t leave the city, but I did go and specifically look right at the river at Vieux Port. The ice was pretty slushy but it was extensive. I went to Notre Dame, which I wouldn’t have chosen for myself but am happy about; I wasn’t aware of the French Catholic history of Montreal and the cathedral is beautiful.

I was very Australian about the temperature, which is to say, it was above freezing, so why wear a coat? I run very hot in any case, even other Australians regularly look at my outfits and say “but aren’t you cold?” However by Monday, it was 22°C anyway (up from about -5 the week before) so I didn’t have to shock everyone for long. There was definitely much less ice visible on the way out.

Australian or not, I will admit that walking in the rain on Friday when it was about 3° and I had left my raincoat, conscientiously lugged all the way from Australia, in Outremont was a bit of a challenge.

I was there for PyCon and AdaCamp. The former confirmed that if I want to go to PyCon, some day I just need to go to PyCon and stop thinking that I can go on a work trip and actually attend the conference too. A number of people I know were very surprised to hear I was there given that they didn’t see me at all, and probably some more will be surprised when they read this. I have a more reasonable approach to AdaCamp: I can attend some of it and I do, and it is much as I picture.

I’m in San Francisco now. I think five hours or so is the worst length of flight. Long enough that I spend about four hours thinking “OK, surely we’re nearly there” and checking out the flight map to find out that nope, we are in no way nearly there, short enough that there’s no institutionalisation to the plane environment. Just non-stop outrage the whole way. Plus no one feels sorry for you afterwards, unlike my Sydney to Vancouver to Montreal itinerary which caused some appreciative intake of breath from Montrealers.

Four more nights.

Syndicated 2015-04-16 06:55:30 from puzzling.org

15 Apr 2015 marnanel   » (Journeyer)

Song sermons

"As Rick Astley says, never gonna give you up, never gonna let you down. And Joshua is told that God will never leave nor forsake him..."

"As Haddaway says, what is love? Baby don't hurt me no more. But St John answers that perfect love casts out fear..."

"As A-Ha say, take on me, take me on. And likewise in today's reading we see Elijah taking on the priests of Baal..."

"As Sting says, I hope that someone gets my message in a bottle. But the letters of Paul were written to specific situations..."

"As Wham say, wake me up before you go-go. In Ephesians 5, Paul also exhorts sleepers to wake so Christ can shine on them..."

"As Chumbawamba say, I get knocked down, but I get up again. So also, our Lord's resurrection on that first Easter morning..."

This entry was originally posted at http://marnanel.dreamwidth.org/333380.html. Please comment there using OpenID.

Syndicated 2015-04-15 01:24:52 from Monument

14 Apr 2015 hypatia   » (Journeyer)

Mary in San Francisco: come meet me at Double Union on the evening of April 18!

I’m in San Francisco from tomorrow (Wednesday) until Sunday! Most of the trip is a work trip, but I have figured out that I can make use of my Double Union membership when I’m in town and have fun, chill events in the space.

Double Union event: Button-making & crafts with Mary Gardiner

Mary Gardiner, our Australian member and a co-founder of the Ada Initiative, will be visiting San Francisco and wants to use our button-maker! Come make buttons and do assorted crafts (vinyl-cutter, 3D printer, sewing, etc.) and hang out with Mary and Valerie!

When: Sat Apr 18, 2015 6:00pm – 8:00pm

Where: Double Union on Valencia Street between 14th Street and 15th Street. See the visitor information.

This is open to Double Union members. It’s also open to non-Double Union members who are my friends!

For my friends

If you are not a Double Union member, and we’re friends, please email me at my personal address to let me know you’re coming. People of all genders welcome.

Please read the Double Union visitor information and the anti-harassment policy if you are coming along.

Syndicated 2015-04-14 18:43:04 from puzzling.org

14 Apr 2015 broonie   » (Journeyer)

Flashing an AT91SAM9G20-EK from bare metal

Since I just had cause to do this and it was harder than it needed to be due to bitrot in the public documentation I could find I thought I’d write up how to get a modern bootloader onto older Atmel boards. These instructions are written for the AT91SAM9G20-EK though they should also apply to other Atmel boards of a similar generation.

These instructions are for booting from NAND since it’s the default for the board; for this, J34 should be fitted to enable the chip select and J33 disconnected to disable the dataflash. If there is something broken programmed into flash then booting while holding down BP4 should cause the second stage bootloader to trash itself and ensure the ROM bootloader puts itself into recovery mode; alternatively, removing both J33 and J34 during power on will also ensure no second stage bootloader is found.

There is a ROM bootloader but it just loads a small region from the boot media and jumps into it, which isn’t enough for u-boot, so there is a second stage bootloader called AT91Bootstrap. Download sources for current versions from github. If it (or a more sensibly written equivalent) is not yet merged upstream you’ll need to apply this patch to get it to build with a modern compiler, or you could use an old toolchain (which you’ll need in the next step anyway):

diff --git a/board/at91sam9g20ek/board.mk b/board/at91sam9g20ek/board.mk
index 45f59b1822a6..b8251ca2fbad 100644
--- a/board/at91sam9g20ek/board.mk
+++ b/board/at91sam9g20ek/board.mk
@@ -1,7 +1,7 @@
 CPPFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft
 
 ASFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft

Once that’s done you can build with:

make at91sam9g20eknf_uboot_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

producing binaries/at91sam9g20ek-nandflashboot-uboot-${VERSION}.bin. This configuration will look for u-boot at 0x40000 in the flash so we need a u-boot binary. Unfortunately modern compilers seem to produce binaries that fail with no output. This is normally a sign that they need the ABI specifying more clearly as above, but I got fed up trying to spot what was missing so I used an old CodeSourcery 2013.05 release instead; hopefully future versions of u-boot will be able to build for this target with modern toolchains. Grab a recent release (I used 2015.01) and build with:

cd ${UBOOT}
make at91sam9g20ek_nandflash_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

to get u-boot.bin.

These can then be flashed using the Atmel flashing tool SAM-BA. There is a Linux version, though it appears to rely on old versions of Tcl/Tk, so if you have trouble starting it the easiest thing is to use the sacrificial Windows laptop you’ve obtained in order to run the “entertaining” flashing tools companies sometimes provide without risking a real system (or in my case your shiny new laptop that you’ve not yet installed Linux on). Start it, then:

  1. Connect SAM-BA to the device following the dialog on start.
  2. Make sure you’ve selected “NandFlash” in the memory type tabs in the center of the window.
  3. Run the “Enable NandFlash” script.
  4. Run the “Erase All” script.
  5. Run the “Send Boot File” script and provide the at91bootstrap binary.
  6. Set “Send File Name” to be the u-boot binary you built earlier and “Address” to be 0x40000.
  7. Click “Send File”
  8. Press the reset button

which should result in at91bootstrap output followed by u-boot output on the serial console. A similar process works for the AT91SAM9263; there the jumper you need is J19. (Sadly u-boot does not flash pictures of cute animals or forested shorelines on the screen as the default “Basic LCD Project 1.4” firmware does; I’m not sure this “full operating system” thing is really delivering improved functionality.)

Syndicated 2015-04-14 18:04:43 from Technicalities

14 Apr 2015 Stevey   » (Master)

Subject - Verb Agreement

There's pretty much no way that I can describe the act of cutting a live, 240V mains-voltage wire in half with a pair of scissors which doesn't make me look like an idiot.

Yet yesterday evening that is exactly what I did.

There were mitigating circumstances, but trying to explain them would make little sense unless you could see the scene.

In conclusion: I'm alive, although I almost wasn't.

My scissors? They have a hole in them.

Syndicated 2015-04-14 00:00:00 from Steve Kemp's Blog

12 Apr 2015 broonie   » (Journeyer)

Acer Aspire E11

Recently I was in Seoul in the middle of three weeks of travel and my laptop died on me.  Since I had some work that needed doing fairly urgently I took myself over to Yongsan Electronics Market and got myself a cheap replacement to tide myself over.

What I ended up with was an Acer Aspire E11. There’s a bunch of different models all with very similar plastics; I got one which has an N2940 SoC, 2G of RAM (upgraded to 4G in store), a 500G hard disk and no fans for just over 200,000 Korean Won, or about $200. As you’d expect at that price it’s got shortcomings, but overall I’ve been extremely happy with it; it’s worth looking at if you need something cheap.

The keyboard in particular is probably the nicest I’ve used on a laptop in a long time, with a good, definite but not excessive click feel as you press. Battery life is about 5 hours as advertised, which is not wonderful but basically fine for me most of the time, and while not exactly Retina the screen is clear with good viewing angles and generally pleasant to look at. Everything is plastic but feels very solid and robust, better than a lot of more expensive devices I’ve used, and there’s not much bezel around the screen, which means it’s the first laptop I’ve had which has been comfortable to use in a standard economy seat on a plane.

The biggest drawback is performance – it’s a little slow opening applications sometimes and kernel builds crawl with an x86 allmodconfig taking about one and three quarter hours. For e-mail and web browsing there’s no problem at all, I did have to move from offlineimap to mbsync to get my mail to sync in a reasonable time but that’s more to do with the performance of offlineimap than that of the system. Overall in use it feels like the Dell I was using from about 2008-2011 or so, comfortable in use outside of builds, and I do appreciate having a system with no fans.

There were a couple of small tricks getting Debian installed – this is the first system I’ve seen with secure boot enabled by default which took me a few moments to work out (but is really good to see). Once that was disabled the install was smooth other than being bitten by Debian bug#778810 which meant I needed a manual fixup to actually get it to boot from the disk. It’s also got a Broadcom WiFi module which means it doesn’t work at all with mainline but it looked like that was on a standard mini PCI Express module so easily replaceable (I happened to have a USB dongle handy so haven’t bothered) and the wired ethernet just worked.

Like I say I’ve been very happy with it, there’s a bunch of other models with different specs for everything except the case (some touchscreen, some with small 32G eMMC drives) as well. Were it not for my need to do kernel builds I’d probably be keeping it as my primary laptop.

Syndicated 2015-04-12 18:52:10 from Technicalities

12 Apr 2015 AlanHorkan   » (Master)

OpenRaster with JPEG and SVG

OpenRaster is a file format for layered images: essentially each layer is a PNG file, there is some XML glue, and it is all contained in a Zip file.
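
A minimal sketch in Python shows how little there is to it (the layer PNG, names and sizes here are invented for illustration):

import zipfile

stack = """<?xml version='1.0' encoding='UTF-8'?>
<image w="64" h="64">
  <stack>
    <layer name="Background" src="data/layer0.png" x="0" y="0"/>
  </stack>
</image>"""

with zipfile.ZipFile("example.ora", "w") as ora:
    # The mimetype entry comes first and is stored uncompressed.
    ora.writestr("mimetype", "image/openraster", zipfile.ZIP_STORED)
    ora.writestr("stack.xml", stack)            # the XML glue
    ora.write("layer0.png", "data/layer0.png")  # any PNG on disk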

In addition to PNG some programs allow layers in other formats. MyPaint is able to import JPG and SVG layers. Drawpile has also added SVG import.

After a small change to the OpenRaster plugin for The GNU Image Manipulation Program, it will also allow non-PNG layers. The code had to be changed in any case; it needed to at least give a warning that non-PNG layers were not being loaded, instead of quietly dropping them. Allowing other layer types was more useful and easier too.
(This change only means that other file types will be imported; they will not be passed through and will be stored as PNG when the file is exported.)

Syndicated 2015-04-12 18:08:16 from Alan Horkan

12 Apr 2015 mikal   » (Journeyer)

One Tree and Painter

Paul and I set off to see two trigs today. One Tree is on the ACT border and is part of the centenary trail. Painter is a suburban trig in Belconnen. Much fun was had, I hope I didn't make Paul too late for the wedding he had to go to.

 

Interactive map for this route.

Interactive map for this route.

Tags for this post: blog pictures 20150412-one_tree_painter photo canberra bushwalk trig_point
Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; A walk around Mount Stranger; Forster trig

Comment

Syndicated 2015-04-11 23:53:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

11 Apr 2015 Stevey   » (Master)

Some things get moved, some things get doubled in size.

Relocation

We're about three months away from relocating from Edinburgh to Newcastle and some of the immediate panic has worn off.

We've sold our sofa, our spare sofa, etc, etc. We've bought a used dining-table, chairs, and a small sofa, etc. We need to populate the second-bedroom as an actual bedroom, do some painting, & etc, but things are slowly getting done.

I've registered myself as a landlord with the city council, so that I can rent the flat out without getting into trouble, and I'm in the process of discussing the income possibilities with a couple of agencies.

We're still unsure of precisely which hospital, from the many choices, in Newcastle my wife will be stationed at. That's frustrating because she could be in the city proper, or outside it. So we need to know before we can find a place to rent there.

Anyway moving? It'll be annoying, but we're making progress. Plus, how hard can it be?

VLAN Expansion

I previously had a /28 assigned for my own use; now I've doubled that to a /27, which gives me the ability to create more virtual machines and run some SSL on some websites.

Using SNI I've actually got the ability to run SSL on almost all my sites. So I configured myself as a CA and generated a bunch of certificates for myself. (Annoyingly, few tutorials on running a CA mentioned SNI, so it took a few attempts to get the SAN working. But once I got the hang of it, it was simple enough.)
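
For the record, the SAN part boils down to something like this (a sketch only; the exact hostnames are examples, and the <(...) process substitution assumes bash):

openssl req -new -newkey rsa:2048 -nodes -subj "/CN=lumail.org" \
    -keyout lumail.org.key -out lumail.org.csr
openssl x509 -req -in lumail.org.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out lumail.org.crt -days 365 \
    -extfile <(echo "subjectAltName=DNS:lumail.org,DNS:www.lumail.org")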

So if you have my certificate authority file installed you can browse many, many of my interesting websites over SSL.

SSL

I run a number of servers behind a reverse-proxy. At the moment the back-end is lighttpd. Now that I have SSL setup the incoming requests hit the proxy, get routed to lighttpd and all is well. Mostly.

However redirections break. A request for:

  • https://lumail.org/docs

Gets rewritten to:

  • http://lumail.org/docs/

That is because lighttpd generates the redirection and it only sees the HTTP connection. It seems there is mod_extforward which should allow the server to be aware of the SSL - but it doesn't do so in a useful fashion.

So right now most of my sites are SSL-enabled, but sometimes they'll flip to naked and unprotected. Annoying.

I don't yet have a solution.

Syndicated 2015-04-11 00:00:00 from Steve Kemp's Blog

11 Apr 2015 olea   » (Master)

Compiling node.js for Android Lollipop

While participating in the Nordic IoT Hackathon 2015, our team Hello North (wrongly tagged as «HackLab team») wanted to explore the potential of node.js applications running natively on Android.

Happily this was solved by Yaron Y. Goland and described in a post. Using his method I've compiled node.js against android-ndk-r10d and run the example on a rooted 4.2.2 device.

The next step was to try it on an unrooted device, but at first I only had a 5.0 Lollipop one. Execution failed with an “error: only position independent executables (PIE) are supported” message. Some investigation got me to a solved bug report. The magic trick seems to be just this patch.

It took me some time to understand how to add this to the node.js build configuration system, but it seems it got fixed just like this:

--- /home/olea/node/android-configure~  2015-04-11 02:46:04.063966802 +0200
+++ /home/olea/node/android-configure   2015-04-11 01:56:34.470154253 +0200
@@ -6,14 +6,16 @@
     --toolchain=arm-linux-androideabi-4.8 \
     --arch=arm \
     --install-dir=$TOOLCHAIN \
-    --platform=android-9
+    --platform=android-16
 export PATH=$TOOLCHAIN/bin:$PATH
 export AR=arm-linux-androideabi-ar
 export CC=arm-linux-androideabi-gcc
 export CXX=arm-linux-androideabi-g++
 export LINK=arm-linux-androideabi-g++
+export CPPFLAGS="-fPIE"
+export LDFLAGS="-fPIE -pie -L$PREFIX/lib"
 

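With the patch applied the build itself is the usual two steps (roughly; the NDK path is wherever you unpacked it):

source ./android-configure ~/android-ndk-r10d
make
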
And this is the test:

¡Yepa!

PS: Just checked that the same build using the android-16 platform runs on 4.2.2. ¡Double Yepa!

Syndicated 2015-04-11 06:20:00 from Ismael Olea

10 Apr 2015 mikal   » (Journeyer)

Thinking time

I've had a lot of things to think about this week, so I've gone on a few walks. I found some geocaches along the way, but even better I think my head is a bit more sorted out now.

Interactive map for this route.

Interactive map for this route.

Interactive map for this route.

Tags for this post: blog canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches

Comment

Syndicated 2015-04-09 16:16:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

6 Apr 2015 AlanHorkan   » (Master)

OpenRaster Paths (or Vectors)

Summary: plugin updated to allow round-trip of paths.

The MyPaint team are doing great work, making progress towards MyPaint 1.2. I encourage you to give it a try: build it from source or check out the nightly builds. (Recent Windows build. Note: the filename mypaint-1.1.1a.7z may stay the same but the build date does change.)
The Vector Layers feature in MyPaint is particularly interesting. One downside though is that the resulting OpenRaster files with vector layers are incompatible with most existing programs. MyPaint 1.0 was one of the few programs that managed to open the file at all, presenting an error message only for the layer it was not able to import. The other programs I tested failed to import the file at all. It would be great if OpenRaster could be extended to include vector layers and more features but it will take some careful thought and planning.

It can be challenging enough to create a new and useful feature; planning ahead or trying to keep backwards compatibility makes matters even more complicated. With that in mind I wanted to add some support for vectors to the OpenRaster plugin. Similar to my previous work to round-trip metadata in OpenRaster, I found a way to round-trip Paths/Vectors that is "good enough" and that I hope will benefit users. The GNU Image Manipulation Program already allows paths to be exported in Scalable Vector Graphics (SVG) format. All paths are exported to a single file, paths.svg, and are imported back from that same file. It is not ideal, but it is simple and it works.
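
In rough terms the round-trip is no more than this (an illustrative sketch, not the plugin's actual code):

import zipfile

# Export: store all of the image's paths as one SVG file in the archive.
with zipfile.ZipFile("drawing.ora", "a") as ora:
    ora.writestr("paths.svg", open("paths.svg").read())

# Import: read the same file back, if it is present.
with zipfile.ZipFile("drawing.ora") as ora:
    if "paths.svg" in ora.namelist():
        svg_data = ora.read("paths.svg")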

Users can get the updated plugin immediately from the OpenRaster plugin gitorious project page. There is lots more that could be done behind the scenes, but for ordinary users I don't expect any changes as noticeable as these for a while.


Back to the code. I considered (and implemented) a more complicated approach that included changes to stack.xml, where raster layers were stored as one group and paths (vector layers) as another group. This approach was better for exporting information that was compatible with MyPaint but, as previously mentioned, the files were not compatible with any other existing programs.

To ensure OpenRaster files stay backwards compatible it might be better to always include a PNG file as the source for every layer, and to find another way to link to other types of content, such as text or vectors, or at some distant point in the future even video. A more complicated fallback system might be useful in the long run. For example the EPUB format reuses the Open Packaging Framework (OPF) standard: pages can be stored in multiple formats, so long as each includes a fallback to another format, ending with a fallback to a few standard baseline formats (e.g. XHTML). The OpenRaster standard has an elegant simplicity, but there is so much more it could do.

Syndicated 2015-04-06 22:00:30 from Alan Horkan

6 Apr 2015 AlanHorkan   » (Master)

OpenRaster Metadata

Summary: plugin updated to allow round-trip of metadata.

OpenRaster does not yet make any suggestions on how to store metadata. My preference is for OpenRaster to continue to borrow from OpenDocument and use the same meta.xml file format, but that can be complicated. Rather than taking the time to write a whole lot of code and waiting to do metadata the best way, I found another way that is good enough, and expedient. I think ordinary users will find it useful -- which is the most important thing -- to be able to round-trip metadata in the OpenRaster format, so despite my reservations about creating code that might discourage developers (myself included) from doing things a better way in future, I am choosing the easy option. (In my previous post I mentioned my concern about maintainability; this is what I was alluding to.)

A lot of work has been done over the years to make The GNU Image Manipulation Program (GIMP) work with existing standards. One of those standards is XMP, the eXtensible Metadata Platform originally created by Adobe Systems, which used the existing Dublin Core metadata standard to create XML packets that can be inserted inside (or alongside) an image file. The existing code creates an XMP packet, let's call it packet.xmp, and includes it in the OpenRaster file. There's a little more code to load the information back in, and users should be able to go to menu File, Properties and in the Properties dialog go to the tab labelled Advanced to view (or set) metadata.
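
Sketched out, the idea is just to store the packet verbatim in the archive (the packet below is a hand-written sample, not GIMP's actual output):

import zipfile

xmp = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title><rdf:Alt>
        <rdf:li xml:lang="x-default">Example image</rdf:li>
      </rdf:Alt></dc:title>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""

with zipfile.ZipFile("drawing.ora", "a") as ora:
    ora.writestr("packet.xmp", xmp)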

This approach may not be particularly useful to users who want to get their information out into other applications such as MyPaint or Krita (or Drawpile or Lazpaint) but it at least allows them not to lose metadata information when they use OpenRaster. (In the long run other programs will probably want to implement code to read XMP anyway, so I think this is a reasonable compromise, even though I want OpenRaster to stay close to OpenDocument and benefit from being part of that very large community.)

You can get the updated plugin immediately from the OpenRaster plugin gitorious project page.

If you are a developer and want to modify or reuse the code, it is published under the ISC License.

Syndicated 2015-04-06 20:36:09 from Alan Horkan

5 Apr 2015 hypatia   » (Journeyer)

Thursday 12 March 2015

A few scenes from the end of our week off work:

After dropping off a load of computer games we were donating on Thursday (OK, it isn’t only Diablo, Civ IV was a huge part of our lives in the late 2000s, so much so that it seemed like we had purchased more copies than strictly necessary), we went to the cafe at Bathers Pavilion, Balmoral. In the process we remembered why it is we never ever go to Balmoral despite it being so ridiculously beautiful, viz, the traffic on Military Road and the parking at Balmoral itself. But we were given the last table in the cafe and had pizza with cheerful napkins in bold beach colours and that made it all worth it.

Friday was our official day off together, and we started by going for an ocean swim at Coogee. This is a sneaky activity, evidently: years ago Alice went for an ocean swim and ended up spending a few years doing Can Too training and life-saving, because it turns out you just can’t say no to ocean swimming. As a SCUBA diver, I was sceptical; how can ocean swimming be anything like as appealing? But we went swimming with Martin once over the summer, and suddenly, here we are, choosing ocean swimming to open our morning off.

On the way there, I made a remark while changing lanes — “bad choice of lane, Mary, no lane biscuit for you” — and Andrew responded that Lane Biscuit sounded like a romance novel hero. We developed the idea fairly rapidly: an entire series of parallel universe romance novels, in which Lane Biscuit can be the hero in every single one. If you’re a literary agent, call me.

The swim was not quite as sublime as the one with Martin in January. The shorebreak was pretty-looking (tall thin waves) and dangerous, so it took us a while to pick our moment to get past it; then we swam back and forth between the flags and again needed to pick our moment to come back out of the surf. Plus, I really need new goggles as my current ones flood all the time. But nevertheless walking around afterwards was a happy time.

We went to The Boathouse for lunch afterwards and had our usual experience with Sydney dining, namely that one of the entrees was the best part of the meal, so the mains were great, but not quite as great, and the second half of the meal was thus a puzzle. But that was some lovely sashimi indeed, and where else offers “a selection” of oysters?

Overall, I think it’s time to escape from our suburb a little more.

Syndicated 2015-03-14 08:19:55 from puzzling.org

4 Apr 2015 mikal   » (Journeyer)

Bendora Arboretum and Bulls Head trig

Prompted largely by a not very detailed entry in a book, a bunch of friends and I went to explore Bendora Arboretum. The arboretum was planted in the 1940s as a scientific experiment exploring which softwoods would grow well in our climate -- this was prompted by the large amount of wood Australia was importing at the time. There were 34 arboreta originally, but only this one remains; the last three others were destroyed in the 2003 bush fires.

We also did a side trip to Bulls Head trig, which was interesting as it's not the traditional shape.

See more thumbnails

Interactive map for this route.

Interactive map for this route.

Tags for this post: blog pictures 20150404-bendora_bulls_head photo canberra bushwalk trig_point
Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; A walk around Mount Stranger; Forster trig

Comment

Syndicated 2015-04-04 15:35:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

3 Apr 2015 Skud   » (Master)

I’m looking for someone to take over Written? Kitten!

A few years ago, my housemate Emily and I sat down for an afternoon and created Written? Kitten!, a writing motivation tool that rewards you with pictures of kittens for every 100 words you write. Since then it’s had over a million visitors and has gained heaps of fans among writers of all kinds.

Sadly, I no longer have the motivation to maintain it, so I’m looking for someone to take it over.

This would involve:

  • I transfer the domain to you, and you continue to keep it up and running or at least smoothly transition to something else (eg. with a redirect for a year or two)
  • Web-hosting wise, you simply need to host a single static HTML file; it gets about 25k-50k hits a month, sometimes spiking in November during NaNoWriMo (the most was its first NaNo, at 235k visits in Nov 2011). It is currently hosted on Dreamhost shared hosting, and has no troubles.
  • You take over ownership of this github repo and deal with very occasional bugfixes, improvements, etc. (There are a couple of outstanding pull requests/issues at present.)

I’d ask that you continue to attribute me and Emily as the original creators, and to retain its current BSD license.

Anyone want it? Let me know here.

Syndicated 2015-04-03 09:03:58 from Infotropism

1 Apr 2015 joey   » (Master)

I am ArchiveTeam

This seems as good a day as any to mention that I am a founding member of ArchiveTeam.

ArchiveTeam logo

Way back, when Geocities was closing down, I was one of a small rag-tag group who saved a copy of most of it. That snapshot has since generated more publicity than most other projects I've worked on. I've heard many heartwarming stories of it being the only remaining copy of baby pictures and writings of deceased friends, and so on. It's even been the subject of serious academic study as outlined in this talk, which is pretty awesome.

Jason Scott in full stage regalia

I'm happy to let this guy be the public face of ArchiveTeam in internet meme-land. It's a 0.1% project for me, and has grown into a well-oiled machine, albeit one that shouldn't need to exist. I only get involved these days when there's another crazy internet silo fire drill and/or I'm bored.

(Rumors of me being the hand model for ArchiveTeam are, however, unsubstantiated.)

Syndicated 2015-04-01 17:50:49 from see shy jo

1 Apr 2015 mako   » (Master)

RomancR: The Future of the Sharing-Your-Bed Economy

romancer_logo

Today, Aaron Shaw and I are pleased to announce a new startup. The startup is based around an app we are building called RomancR that will bring the sharing economy directly into your bedrooms and romantic lives.

When launched, RomancR will bring the kind of market-driven convenience and efficiency that Uber has brought to ride sharing, and that AirBnB has brought to room sharing, directly into the most frustrating and inefficient domain of our personal lives. RomancR is Uber for romance and sex.

Here’s how it will work:

  • Users will view profiles of nearby RomancR users that match any number of user-specified criteria for romantic matches (e.g., sexual orientation, gender, age, etc).
  • When a user finds a nearby match who they are interested in meeting, they can send a request to meet in person. If they choose, users initiating these requests can attach an optional monetary donation to their request.
  • When a user receives a request, they can accept or reject the request with a simple swipe to the left or right. Of course, they can take the donation offer into account when making this decision or “counter-offer” with a request for a higher donation. Larger donations will increase the likelihood of an affirmative answer.
  • If a user agrees to meet in person, and if the couple subsequently spends the night together — RomancR will measure this automatically by ensuring that the geolocation of both users’ phones matches the same physical space for at least 8 hours — the donation will be transferred from the requester to the user who responded affirmatively.
  • Users will be able to rate each other in ways that are similar to other sharing economy platforms.

Of course, there are many existing applications like Tinder and Grindr that help facilitate romance, dating, and hookups. Unfortunately, each of these still relies on old-fashioned “intrinsic” ways of motivating people to participate in romantic endeavors. The sharing economy has shown us that systems that rely on these non-monetary motivations are ineffective and limiting! For example, many altruistic and socially-driven ride-sharing systems existed on platforms like Craigslist or Ridejoy before Uber. Similarly, volunteer-based communities like Couchsurfing and Hospitality Club existed for many years before AirBnB. None of those older systems took off in the way that their sharing economy counterparts were able to!

The reason that Uber and AirBnB exploded where previous efforts stalled is that this new generation of sharing economy startups brings the power of markets to bear on the problems they are trying to solve. Money both encourages more people to participate in providing a service and also makes it socially easier for people to take that service up without feeling like they are socially “in debt” to the person providing the service for free. The result has been more reliable and effective systems for providing rides and rooms! The reason that the sharing economy works, fundamentally, is that it has nothing to do with sharing at all! Systems that rely on people’s social desire to share without money — projects like Couchsurfing — are relics of the previous century.

RomancR, which we plan to launch later this year, will bring the power and efficiency of markets to our romantic lives. You will leave your pitiful dating life where it belongs: in the dustbin of history! Go beyond antiquated non-market systems for finding lovers. Why should we rely on people’s fickle sense of taste and attractiveness, their complicated ideas of interpersonal compatibility, or their sense of altruism, when we can rely on the power of prices? With RomancR, we won’t have to!

Note: Thanks to Yochai Benkler, whose example of how leaving a $100 bill on the bedside table of a person with whom you spent the night can change the nature of a romantic interaction inspired the idea for this startup.

Syndicated 2015-04-01 17:18:57 (Updated 2015-04-02 00:15:22) from copyrighteous

1 Apr 2015 mako   » (Master)

More Community Data Science Workshops

Pictures from the CDSW sessions in Spring 2014

After two successful rounds in 2014, I’m helping put on another round of the Community Data Science Workshops. Last year, our 40+ volunteer mentors taught more than 150 absolute beginners the basics of programming in Python, data collection from web APIs, and tools for data analysis and visualization, and we’re still in the process of improving our curriculum and scaling up.

Once again, the workshops will be totally free of charge and open to anybody. Once again, they will be possible through the generous participation of a small army of volunteer mentors.

We’ll be meeting for four sessions over three weekends:

  • Setup and Programming Tutorial (April 10 evening)
  • Introduction to Programming (April 11)
  • Importing Data from web APIs (April 25)
  • Data Analysis and Visualization (May 9)

If you’re interested in attending, or interested in volunteering as mentor, you can go to the information and registration page for the current round of workshops and sign up before April 3rd.

Syndicated 2015-04-01 02:41:06 (Updated 2015-04-01 02:48:01) from copyrighteous

31 Mar 2015 bagder   » (Master)

The state and rate of HTTP/2 adoption

The protocol HTTP/2 as defined in draft-17 was approved by the IESG and is being implemented and deployed widely on the Internet today, even before it has turned up as an actual RFC. Back in February, already upwards of 5% or maybe even more of the web traffic was using HTTP/2.

My prediction: We’ll see >10% usage by the end of the year, possibly as much as 20-30%, depending a little on how fast some of the major and most popular platforms switch (Facebook, Instagram, Tumblr, Yahoo and others). In 2016 we might see HTTP/2 serve a majority of all HTTP requests – done by browsers at least.

Counted how? Yeah the second I mention a rate I know you guys will start throwing me hard questions like exactly what do I mean. What is Internet and how would I count this? Let me express it loosely: the share of HTTP requests (by volume of requests, not by bandwidth of data and not just counting browsers). I don’t know how to measure it and we can debate the numbers in December and I guess we can all end up being right depending on what we think is the right way to count!

Who am I to tell? I’m just a person deeply interested in protocols and HTTP/2, so I’ve been involved in the HTTP work group for years and I also work on several HTTP/2 implementations. You can guess as well as I, but this just happens to be my blog!

The HTTP/2 Implementations wiki page currently lists 36 different implementations. Let’s take a closer look at the current situation and prospects in some areas.

Browsers

Firefox and Chrome have had solid support for a while now. Just use a recent version and you’re good.

Internet Explorer has been shown in a tech preview that spoke HTTP/2 fine. So, run that or wait for it to ship in a public version soon.

There is no news from Apple regarding support in Safari. Give up on them and switch over to a browser that keeps up!

Other browsers? Ask them what they do, or replace them with a browser that supports HTTP/2 already.

My estimate: By the end of 2015 the leading browsers with a market share way over 50% combined will support HTTP/2.

Server software

Apache HTTPd is still the most popular web server software on the planet. mod_h2 is a recent module for it that can speak HTTP/2 – still in “alpha” state. Give it time and help out in other ways and it will pay off.

Nginx has told the world they’ll ship HTTP/2 support by the end of 2015.

IIS was showing off HTTP/2 in the Windows 10 tech preview.

H2O is a newcomer on the market with focus on performance and they ship with HTTP/2 support since a while back already.

nghttp2 offers a HTTP/2 => HTTP/1.1 proxy (and lots more) to front your old server with and can then help you deploy HTTP/2 at once.

Apache Traffic Server supports HTTP/2 fine. Will show up in a release soon.

Also, netty, jetty and others are already on board.

HTTPS initiatives like Let’s Encrypt help to make it even easier to deploy and run HTTPS on your own sites, which will smooth the way for HTTP/2 deployments on smaller sites as well. Getting sites onto the TLS train will remain a hurdle and will be perhaps the single biggest obstacle to even more adoption.

My estimate: By the end of 2015 the leading HTTP server products with a market share of more than 80% of the server market will support HTTP/2.

Proxies

Squid works on HTTP/2 support.

HAproxy? I haven’t gotten a straight answer from that team, but Willy Tarreau has been actively participating in the HTTP/2 work all the time so I expect them to have work in progress.

While very critical to the protocol, PHK of the Varnish project has said that Varnish will support it if it gets traction.

My estimate: By the end of 2015, the leading proxy software projects will start to have or are already shipping HTTP/2 support.

Services

Google (including Youtube and other sites in the Google family) and Twitter have run HTTP/2 enabled for months already.

Lots of existing services offer SPDY today and I would imagine most of them are considering how to switch to HTTP/2, as Chrome has already announced it will drop SPDY during 2016 and Firefox will also abandon SPDY at some point.

My estimate: By the end of 2015 lots of the top sites of the world will be serving HTTP/2 or will be working on doing it.

Content Delivery Networks

Akamai plans to ship HTTP/2 by the end of the year. Cloudflare has previously stated that they will “support HTTP/2 just as soon as it is practical”.

Amazon has not given any response publicly that I can find for when they will support HTTP/2 on their services.

Not a totally bright situation but I also believe (or hope) that as soon as one or two of the bigger CDN players start to offer HTTP/2 the others might feel a bigger pressure to follow suit.

Non-browser clients

curl and libcurl have supported HTTP/2 for months now, and the HTTP/2 implementations page lists available implementations for just about all major languages. Like node-http2 for javascript, http2-perl, http2 for Go, Hyper for Python, OkHttp for Java, http-2 for Ruby and more. If you do HTTP today, you should be able to switch over to HTTP/2 relatively easily.
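
Trying it out with a recent curl built against nghttp2 is a one-liner:

curl --http2 -I https://nghttp2.org/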

More?

I’m sure I’ve forgotten a few obvious points but I might update this as we go as soon as my dear readers point out my faults and mistakes!

How long is HTTP/1.1 going to be around?

My estimate: HTTP 1.1 will be around for many years to come. There is going to be a double-digit percentage share of the existing sites on the Internet (and who knows how many that aren’t even accessible from the Internet) for the foreseeable future. For technical reasons, for philosophical reasons and for good old we’ll-never-touch-it-again reasons.

The survey

Finally, I asked friends on twitter, G+ and Facebook what they think the HTTP/2 share will be by the end of 2015, with the help of a little poll. This does of course not make for any sound or statistically safe number; it is just a collection of what a set of random people guessed. A quick poll to get a rough feel. This is how the 64 responses I received were distributed:

http2 share at end of 2015

Evidently, if you take a median out of these results you can see that the middle point is between 5-10 and 10-15. I’ll make it easy and say that the poll showed a group estimate of 10%. Ten percent of the total HTTP traffic to be HTTP/2 at the end of 2015.

I didn’t vote here but I would’ve checked the 15-20 choice, thus a fair bit over the median but only slightly into the top quarter.

In plain numbers this was the distribution of the guesses:

0-5% 29.1% (19)
5-10% 21.8% (13)
10-15% 14.5% (10)
15-20% 10.9% (7)
20-25% 9.1% (6)
25-30% 3.6% (2)
30-40% 3.6% (3)
40-50% 3.6% (2)
more than 50% 3.6% (2)

Syndicated 2015-03-31 05:54:36 from daniel.haxx.se

30 Mar 2015 benad   » (Apprentice)

Electricity Savings: All Those Blinking Lights

As part of my "spring cleaning", and partly inspired by this "Earth Hour" thing, I did an inventory of all the connected electrical devices around my apartment.

I basically categorized them this way:

  1. Devices that are used all the time and must be connected: Lights, electrical heating, fridge, water heater and so on.
  2. Devices that are seldom used, but cannot be turned off completely or disconnected easily: Oven, washer, dryer, and so on.
  3. Devices that are on all the time, for some reason.
  4. Devices that are used enough to warrant leaving them in "low-power standby mode".
  5. Devices I should turn off completely or disconnect when not used.

While I can't do anything for the devices in categories 1 and 2, other than replacing them, my goal was to move as many devices to either standby or turned off as possible. For example, my "home server PC", a Mac mini, doesn't use much power, but do I really need to have it running all the time? So I programmed it to be in standby, and wake up only during the afternoons on weekdays.
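
On a Mac this kind of schedule is a one-liner with pmset (the times here are only an example):

sudo pmset repeat wakeorpoweron MTWRF 13:00:00 sleep MTWRF 18:00:00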

For devices already in standby mode, are they used enough to justify it? For example, my Panasonic Blu-Ray player was always warm since it remained in standby mode. For what? About 10 seconds of boot time? Since my TV takes that much time to "boot up" anyway, I just need to power on both at the same time, and I'll save all the electricity of keeping the player in standby all the time.

I am generally less worried about laptops, tablets and other battery-operated mobile devices when they sit in standby. They are already quite energy-efficient, running on batteries or not, especially when not actively used. Still, unplugging them from chargers reduces risks if there's an electrical surge in the apartment's wiring.

Syndicated 2015-03-30 20:26:00 from Benad's Blog

30 Mar 2015 dmarti   » (Master)

It's not about freedom

Doc Searls writes:

We hold as self-evident that personal agency and independence matter utterly, that free customers are more valuable than captive ones, that personal data belongs more to persons themselves than to those gathering it, that conscious signaling of intent by individuals is more valuable than the inferential kind that can only be guessed at, that spying on people when they don’t know about it or like it is wrong, and so on.

I'm going to agree with Doc that these are all good and important principles.

But then I'm going to totally ignore them.

Yes, it is "self-evident" that it's important to behave as a decent human being in online interactions, and in marketing projects. (Complexity dilutes understanding of a system but not moral responsibility for participating in a system. Just because you don't understand how your marketing budget gets diverted to fraud does not mean that you aren't ultimately responsible when you end up funding malware and scams.) Thinking about user rights is important. 30 years ago, Richard Stallman released the GNU Manifesto, which got people thinking about the ethical aspects of software licensing, and we need that kind of work about information in markets, too.

But that's not what I'm on about here. Targeted Advertising Considered Harmful is just background reading for a marketing meeting. And I've been to enough marketing meetings to know that, no matter how rat-holed and digressed the discussion gets, Freedom is never on the agenda.

So I'm going to totally ignore the Freedom side of discussing the targeted ad problem. You don't have to worry about some marketing person clicking through to this site and saying, WTF is this freedom woo-woo? It's all pure, unadulterated, 100% marketing-meeting-compatible business material, with some impressive-looking citations to Economics papers to give it some class.

Big Data proponents like to talk about "co-creating value," so let's apply that expression to advertising. The advertiser offers signal, and the reader offers attention. The value is in the exchange. Here's the point that we need to pick up on, and the point that ad blocker stats are shoving in our face until we get it. When one side's ability to offer value goes away—when a targeted ad ceases to carry signal and becomes just a windshield flyer—there's no incentive for the other side to participate in the exchange. Freedom or no freedom. Homo economicus himself would run a spam filter, or hang up on a cold call, or block targeted ads.

The big problem for web sites now is to get users onto a publisher-friendly tracking protection tool that facilitates advertising's exchange of value for value, before web advertising turns into a mess of crappy targeted ads vs. general filters, the way email spam has.

Syndicated 2015-03-30 14:33:29 from Don Marti

30 Mar 2015 Skud   » (Master)

Visiting San Francisco, Montreal, and Ottawa

Just a quick note to say that I’ll be in North America starting next week, for about two weeks:

  • San Francisco April 6th-10th (meetings, coworking, jetlag recovery, tacos, etc)
  • Montreal April 10th-15th (AdaCamp Montreal — I’m fully booked up from the afternoon of the 12th onward, I’m afraid, but have some time before that)
  • Ottawa April 15th-19th (friends, maybe meetings, coworking, etc)
  • San Francisco, again April 19th-21st

If you’re in any of those places and you’d like to catch up, ping me! I’ve got a fair bit of flexibility so I’m up for coffee/meals/coworking/whatever.

I’m particularly interested in talking with people/groups/orgs about:

  • Open food data, open source for food growers, etc — especially interoperability and linked open data!
  • Sustainable (open source) tech for sustainable (green) communities — why do so many sustainability groups use Facebook and how can we choose tech that better reflects our values?
  • Community management beyond/outside the tech bubble (we didn’t invent this thing; how do we learn and level up from here?)
  • Diversity beyond 101 level — how can we keep pushing forward? What’s next?

I should probably also note that I’ve got some capacity for short-medium term contract work from May onward. For the last 6 months or so I’ve been doing a lot of diversity consulting: I organise/lead AdaCamps (feminist unconferences for women in open tech/culture) around the world, and more recently I’ve been working with the Wikimedia Foundation on their Inspire campaign to address the gender gap. I’m interested in doing more along the same lines, so if you need someone with heaps of expertise at the intersection of open stuff and diversity/inclusiveness, let’s talk!

Syndicated 2015-03-30 13:30:34 from Infotropism

29 Mar 2015 marnanel   » (Journeyer)

in which Final Fantasy is discovered to be a computer game

Today someone made a reference I didn't get to something called a chockoboo (I think). I looked confused, and they said, "Have you heard of Final Fantasy?" "Yes," I said, "but I'm not sure what it is. A film, maybe, or a computer game?" There followed a great deal of explanation which I have now forgotten because I have no context to attach it to, except that FF is a large series of complicated computer games and that chockoboos are important in some of them. I think they must have explained what a chockoboo actually *is*, but if they did I forgot it.

The main takeaway, however, was an alarming realisation that I do this too, to almost everyone I meet.

This entry was originally posted at http://marnanel.dreamwidth.org/332483.html. Please comment there using OpenID.

Syndicated 2015-03-29 17:23:34 from Monument

28 Mar 2015 mdz   » (Master)

What I think about thought

Only parts of us will ever
touch o̶n̶l̶y̶ parts of others –
one’s own truth is just that really — one’s own truth.
We can only share the part that is u̶n̶d̶e̶r̶s̶t̶o̶o̶d̶ ̶b̶y̶ within another’s knowing acceptable t̶o̶ ̶t̶h̶e̶ ̶o̶t̶h̶e̶r̶—̶t̶h̶e̶r̶e̶f̶o̶r̶e̶ so one
is for most part alone.
As it is meant to be in
evidently in nature — at best t̶h̶o̶u̶g̶h̶ ̶ perhaps it could make
our understanding seek
another’s loneliness out.

– unpublished poem by Marilyn Monroe, via berlin-artparasites

This poem inspired me to put some ideas into words this morning, an attempt to summarize my current working theory of consciousness.

Ideas travel through space and time. An idea that exists in my mind is filtered through my ability to express it somehow (words, art, body language, …), and is then interpreted by your mind and its models for understanding the world. This shifts your perspective in some way, some or all of which may be unconscious. When our minds encounter new ideas, they are accepted or rejected, reframed, and integrated with our existing mental models. This process forms a sort of living ecosystem, which maintains equilibrium within the realm of thought. Ideas are born, divide, mutate, and die in the process. Language, culture, education and so on are stable structures which form and support this ecosystem.

Consciousness also has analogues of the immune system, for example strongly held beliefs and models which tend to reject certain ideas. Here again these can be unconscious or conscious. I’ve seen it happen that if someone hears an idea they simply cannot integrate, they will behave as if they did not hear it at all. Some ideas can be identified as such a serious threat that ignoring them is not enough to feel safe: we feel compelled to eliminate the idea in the external world. The story of Christianity describes a scenario where an idea was so threatening to some people that they felt compelled to kill someone who expressed it.

A microcosm of this ecosystem also exists within each individual mind. There are mental structures which we can directly introspect and understand, and others which we can only infer by observing our thoughts and behaviors. These structures communicate with each other, and this communication is limited by their ability to “speak each other’s language”. A dream, for example, is the conveyance of an idea from an unconscious place to a conscious one. Sometimes we get the message, and sometimes we don’t. We can learn to interpret, but we can’t directly examine and confirm if we’re right. As in biology, each part of this process introduces uncountable “errors”, but the overall system is surprisingly robust and stable.

This whole system, with all its many minds interacting, can be thought of as an intelligence unto itself, a gestalt consciousness. This interpretation leads to some interesting further conclusions:

  • The notion that an individual person possesses a single, coherent point of view seems nonsensical
  • The separation between “my mind” and “your mind” seems arbitrary
  • The attribution of consciousness only to humans, or only to living beings, seems absurd

Syndicated 2015-03-28 16:50:22 from We'll see | Matt Zimmerman

27 Mar 2015 marnanel   » (Journeyer)

Image accessibility

I have an accessibility idea. I shall probably do it, unless it turns out to be fundamentally flawed. Your thoughts are appreciated!

1) A site that takes an uploaded JPEG, and a string, and returns the JPEG with the EXIF comment field set to that string.

2) Browser extensions for Firefox and Chrome which set the alt property of each JPEG on a page to its comment field, if it has one.

This means you can describe an image before you post it, and that description travels with the image. Thoughts?
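For what it's worth, part 1 is only a few lines in practice. Here's a minimal sketch using Pillow (my choice of library, not part of the proposal); the post says "comment field", and EXIF offers a few candidate fields for that, so this sketch picks the ImageDescription tag (0x010E) as one reasonable option:

    # Minimal sketch of part 1: write a description string into a JPEG's EXIF.
    # Uses Pillow; the ImageDescription tag (0x010E) stands in for "the comment field".
    from PIL import Image

    def set_jpeg_description(in_path, out_path, text):
        img = Image.open(in_path)
        exif = img.getexif()   # existing EXIF data, or an empty container
        exif[0x010E] = text    # 0x010E = ImageDescription
        # Note: this re-encodes the image; a real service would splice the
        # EXIF block in losslessly instead.
        img.save(out_path, "JPEG", exif=exif)

    set_jpeg_description("cat.jpg", "cat-described.jpg",
                         "A grey tabby cat asleep on a red sofa")

The browser-extension half (part 2) would do the inverse: parse the EXIF out of each fetched JPEG and copy the field into the img element's alt attribute.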

This entry was originally posted at http://marnanel.dreamwidth.org/332220.html. Please comment there using OpenID.

Syndicated 2015-03-27 15:27:19 (Updated 2015-03-27 15:27:29) from Monument

26 Mar 2015 caolan   » (Master)

gtk3 vclplug, some more gesture support

Now gtk3 long-press support to go with swipe

In the demo, a long press in presentation mode brings up the context menu for switching between using the pointer for draw-on-slide and normal slide navigation.
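The real implementation lives in LibreOffice's C++ vclplug, but the underlying gtk3 API is small enough to show on its own. Here is a minimal PyGObject sketch of GtkGestureLongPress (gtk3 >= 3.14), illustrative only and not the LibreOffice code:

    # Minimal PyGObject sketch of gtk3's GtkGestureLongPress (gtk3 >= 3.14).
    # Illustrative only; LibreOffice's actual vclplug code is C++.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    win = Gtk.Window(title="long-press demo")

    def on_pressed(gesture, x, y):
        # In the slideshow case, this is where the context menu would pop up.
        print("long press at (%.0f, %.0f)" % (x, y))

    # Keep a reference to the gesture so it isn't garbage-collected.
    gesture = Gtk.GestureLongPress.new(win)
    gesture.connect("pressed", on_pressed)

    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()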

Syndicated 2015-03-26 14:53:00 (Updated 2015-03-26 14:53:33) from Caolán McNamara

26 Mar 2015 caolan   » (Master)

gtk3 vclplug, basic gesture support

gtk3's gesture support is the functionality I'm actually interested in, so now that presentations work in full-screen mode, I've added basic GtkGestureSwipe support to LibreOffice (for gtk3 >= 3.14) and hooked it up to the slideshow: swiping towards the left advances to the next slide, and swiping to the right goes back to the previous one.
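As with the long-press entry above, the real hookup is C++ in the vclplug, but a minimal PyGObject sketch shows the shape of the GtkGestureSwipe API. The sign of the horizontal velocity is what separates a leftward swipe (next slide) from a rightward one (previous slide):

    # Minimal PyGObject sketch of gtk3's GtkGestureSwipe (gtk3 >= 3.14).
    # Illustrative only; the LibreOffice hookup itself is C++.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    win = Gtk.Window(title="swipe demo")

    def on_swipe(gesture, velocity_x, velocity_y):
        # Mirrors the slideshow hookup: leftward swipe -> next slide.
        if velocity_x < 0:
            print("next slide")
        elif velocity_x > 0:
            print("previous slide")

    # Keep a reference to the gesture so it isn't garbage-collected.
    gesture = Gtk.GestureSwipe.new(win)
    gesture.connect("swipe", on_swipe)

    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()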

Syndicated 2015-03-26 09:35:00 (Updated 2015-03-26 09:35:24) from Caolán McNamara

24 Mar 2015 jas   » (Master)

Laptop indecision

I wrote last month about buying a new laptop and I still haven’t made a decision. One reason for this is because Dell doesn’t seem to be shipping the E7250. Some online shops claim to be able to deliver it, but aren’t clear on what configuration it has – and I really don’t want to end up with Dell Wifi.

Another issue has been the graphic issues with the Broadwell GPU (see the comment section of my last post). It seems unlikely that this will be fixed in time for Debian Jessie. I really want a stable OS on this machine, as it will be a work-horse and not a toy machine. I haven’t made up my mind whether the graphics issue is a deal-breaker for me.

Meanwhile, a couple more sub-1.5kg (sub-3.3lbs) Broadwell i7’s have hit the market. Some of these models were suggested in comments to my last post. I have decided that the 5500U CPU would also be acceptable to me, because some newer laptops don’t come with the 5600U. The difference is that the 5500U is a bit slower (say 5-10%) and lacks vPro, which I have no need for and mostly consider a security risk. I’m not aware of any other feature differences.

Since the last round, I have tightened my weight requirement to sub-1.4kg (sub-3lbs), which excludes some recently introduced models, and actually excludes most of the models I looked at before (X250, X1 Carbon, HP 1040/810). Since I’m leaning towards the E7250, with the X250 as a “reliable” fallback option, I wanted to cut down on the number of further models to consider. Weight is a simple distinguisher. The 1.4-1.5kg (3-3.3lbs) models I am aware of that are excluded are the Asus Zenbook UX303LN, the HP Spectre X360, and the Acer TravelMate P645.

The Acer Aspire S7-393 (1.3kg) and Toshiba Kira-107 (1.26kg) would have been options if they had RJ45 ports. They may be interesting to consider for others.

The new models I am aware of are below. I’m including the E7250 and X250 for comparison, since they are my preferred choices from the first round. A column for maximum RAM is added too, since this may be a deciding factor for me. The higher weight in each range is for the touch-screen variant.

Model                  Weight       Max RAM  Screen  Resolution
Toshiba Z30-B          1.2-1.34kg   16GB     13.3″   1920×1080
Fujitsu Lifebook S935  1.24-1.36kg  12GB     13.3″   1920×1080
HP EliteBook 820 G2    1.34-1.52kg  16GB     12.5″   1920×1080
Dell Latitude E7250    1.25kg       8/16GB?  12.5″   1366×768
Lenovo X250            1.42kg       8GB      12.5″   1366×768

It is unclear whether the E7250 is memory upgradeable; some sites say max 8GB, some say max 16GB. The X250 and 820 have DisplayPort, the S935 and Z30-B have HDMI, and the E7250 has both DisplayPort and HDMI. The E7250 does not have VGA, which the rest have. All of them have 3 USB 3.0 ports, except for the X250, which only has 2. The E7250 and 820 claim NFC support, but Debian support is not a given. Interestingly, all of them have a smartcard reader. All support SDXC memory cards.

The S935 has an interesting modular bay which can fit either a CD reader or an additional battery. There is a detailed QuickSpec PDF for the HP 820 G2; I haven’t found similarly detailed information for the other models. It mentions support for Ubuntu, which is nice.

Comparing these laptops is really just academic until I have decided what to think about the Broadwell GPU issues. It may be that I’ll go back to a fourth-gen i7 laptop, and then I’ll probably pick a cheap reliable machine such as the X240.

Syndicated 2015-03-24 22:11:30 from Simon Josefsson's blog

24 Mar 2015 amits   » (Journeyer)

Live Migrating QEMU-KVM Virtual Machines: Full Text

I’ve attempted to write down all I said while delivering my devconf.cz talk on Live Migrating QEMU-KVM Virtual Machines.  The full text is on the Red Hat Developer Blog:

http://developerblog.redhat.com/2015/03/24/live-migrating-qemu-kvm-virtual-machines/

Syndicated 2015-03-24 15:53:40 from Think. Debate. Innovate.
