Recent blog entries

2 Sep 2015 bagder   » (Master)

The TLS trinity dance

In the curl project we currently support eleven different TLS libraries. That is eight libraries plus the OpenSSL “trinity” consisting of BoringSSL, libressl and of course OpenSSL itself.

You could easily be misled into believing that supporting three libraries that share a common base would be really easy since they have the same API. But no, it isn’t. Sure, they have the same foundation and all three have more in common than they differ, but still, they each diverge in their own little ways, and from my standpoint libressl seems to be the one that causes us the least friction going forward.

Let me also stress that I’m but a user of these projects, I don’t participate in their work and I don’t have any insights into their internal doings or greater goals.

libressl

Easy-peasy, very similar to OpenSSL. The biggest obstacle might be that the version numbering is different, so an old program that adjusts to different OpenSSL features based on version numbers (as curl did) needs some tweaking. There’s a convenient LIBRESSL_VERSION_NUMBER define to detect libressl with.
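
As a rough illustration (not curl’s actual code), version-gating with that define can look like the sketch below; the two macros are the real ones from <openssl/opensslv.h>, while the 1.0.2 cut-off is just an example value:

/* Sketch: telling libressl apart from OpenSSL at build time.
 * libressl numbers itself independently of OpenSSL, so check its
 * define before doing any OpenSSL-style version comparisons. */
#include <stdio.h>
#include <openssl/opensslv.h>

int main(void)
{
#if defined(LIBRESSL_VERSION_NUMBER)
  printf("built against libressl %#lx\n", (unsigned long) LIBRESSL_VERSION_NUMBER);
#elif OPENSSL_VERSION_NUMBER >= 0x10002000L
  printf("built against OpenSSL 1.0.2 or newer: %#lx\n", (unsigned long) OPENSSL_VERSION_NUMBER);
#else
  printf("built against an older OpenSSL: %#lx\n", (unsigned long) OPENSSL_VERSION_NUMBER);
#endif
  return 0;
}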

OpenSSL

I regularly build curl against OpenSSL from their git master to get an early head-start when they change things and break backwards compatibility. They’ve stepped up that pace since Heartbleed, and while I generally agree with their ambition to make more structs opaque instead of exposing all internals, it also hurts us over and over again when they remove things we’ve been using for years. What’s “funny” is that in almost all cases their response is “well, use this way instead”, and it has turned out that there’s an equally old API still in place that we can use instead. It also says something about their documentation situation when that is such a common pattern. It’s never been possible to grasp this from just reading the docs.

BoringSSL

BoringSSL has made great inroads in the market and is used on Android now and more. They don’t do releases(!) and have no version numbers so the only thing we can do is to build from git and there’s no install target in the makefile. There’s no docs for it, they remove APIs from OpenSSL (curl can’t support NTLM nor OCSP stapling when built with it), they’ve changed several data types in the API making it really hard to build curl without warnings. Funnily, they also introduced non-namespaced typedefs prefixed with X509_* that collide with other common headers.

How it can play out in real life

A while ago we noticed that BoringSSL had removed the DES_set_odd_parity function, which we use in curl. We changed the configure script to look for it and changed the code to survive without it. The lack of that function then also signaled that the library wasn't OpenSSL but BoringSSL.

BoringSSL moved around things that caused our configure script to no longer detect it as “OpenSSL compliant” because CRYPTO_lock could no longer be found by configure. We changed it to instead search for HMAC_Init and we were fine again.

Time passed and BoringSSL brought back DES_set_odd_parity, so our configure script no longer saw it as BoringSSL (the Android team fixed this problem in their git but never sent us the fix). We changed the configure script to properly use OPENSSL_IS_BORINGSSL instead to detect BoringSSL, which was the correct thing to do anyway, and as a bonus it can now detect and work with both new and old BoringSSL versions.
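
The detection itself amounts to a preprocessor check along these lines (a sketch, not curl’s actual configure test); OPENSSL_IS_BORINGSSL comes from BoringSSL’s own headers, and the other two macros were mentioned above:

/* Sketch: naming the OpenSSL-family backend at build time. */
#include <stdio.h>
#include <openssl/crypto.h>    /* BoringSSL's headers define OPENSSL_IS_BORINGSSL */
#include <openssl/opensslv.h>  /* OPENSSL_VERSION_NUMBER / LIBRESSL_VERSION_NUMBER */

int main(void)
{
#if defined(OPENSSL_IS_BORINGSSL)
  puts("backend: BoringSSL");
#elif defined(LIBRESSL_VERSION_NUMBER)
  puts("backend: libressl");
#else
  puts("backend: OpenSSL");
#endif
  return 0;
}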

A short time later, I again tried to build curl against the OpenSSL master branch, only to realize they’ve deprecated HMAC_Init that we just recently switched to for detection (since the configure script needs to check for a particular named function within a library to really know that it has detected, and can use, said library). Sigh. We switched the “detect function” again, to HMAC_Update. Hopefully this one exists in all three and will stick around for a while…

Right now I think we can detect and use all three. It is only a matter of time until one of them will ruin that and we will adapt again.

Syndicated 2015-09-02 07:05:26 from daniel.haxx.se

1 Sep 2015 Pizza   » (Master)

Lifting the skirts of Kodak photo printers

You have to hand it to Kodak. They have been selling their workhorse 6850 dyesub photo printer for more than ten years, and are still actively supporting it with updated drivers and firmware. It's even outlasted one of its successors (the 605), which is no longer sold, yet is still actively supported.

One of those firmware updates led to a discovery that triggered a flurry of hacking on the 6800/6850 and 605 backends, resulting in considerably improved reliability, robustness, and performance. As well as many bug fixes, both backends now support full job pipelining and vastly improved status and error handling.

So what did I learn? The Kodak 68x0 family are variations of the Shinko S1145, and the Kodak 605 is actually a Shinko S1545. Digging deeper into other Kodak models, I discovered that the 7000/7010/7015 are variations of the Shinko S1645, and that the 8810 is a Shinko S1845.

I'd done my earlier reverse-engineering work on these Kodak models before some kind folks at Shinko/Sinfonia sent me documentation on several of their printers -- so when I re-examined what I had previously figured out with those docs as a reference, I discovered that from a protocol perspective the 68x0/S1145 models are 6" variations of the 8" S1245, the 605/S1545 and 70xx/S1645 models are very close to the S2145, and the 8810/S1845 is apparently identical to the S6245.

This means that I should be able to support the 70xx and 8810 printers with only minor modifications to the existing backend code. Granted, until I can get my hands on any of these printers all of this is conjecture.

So, I'll re-iterate my call for testers for these printers:

  • Shinko CHC-S1245 (aka Sinfonia E1)
  • Shinko CHC-S6245 (aka Sinfonia CE1)
  • Kodak 8810
  • Kodak 7000/7010/7015
  • Kodak 605 (Need to ensure no regressions were introduced)

As my personal printing needs are very well met at this point and these are all fairly expensive models (especially the 8810 and 70xx series), I can't justify buying more printers just to try and make them work with Linux. Someone else is going to have to step up to help make this possible.

On that note, I should mention the S6145/CS2 (and the Ciaat Brava 21), where the situation is a bit more complex. The backend is already written and partially tested, but it currently relies on a proprietary library that is only available in binary form - and which I lack permission to redistribute.

I'm pursuing a multi-pronged approach to rectify that situation. In order of desirability:

  • Obtain source code to the library
  • Obtain algorithmic documentation so I can independently re-implement the library
  • Obtain permissions to redistribute the (binary) library, and also get it compiled for a variety of ARM targets
  • Reverse-engineer the library so I can re-implement it

Let me just say that curiosity, in and of itself, is poor motivation for enduring the combination of tedium and frustration that comes from trying to reverse-engineer an opaque blob of x86 code.

Ugh. I need to get out more.

Syndicated 2015-09-01 21:40:31 from Solomon Peachy

1 Sep 2015 bagder   » (Master)

Blog refresh

Dear reader,

If you ever visited my blog in the past and you see this now, you will have noticed a pretty significant change in appearance that happened here the other day.

When I kicked off my blog here on the site back in August 2007 and moved my blogging from advogato to self-host, I installed WordPress and I’ve been happy with it since then from a usability stand-point. I crafted a look based on an existing theme and left it at that.

Over time, WordPress has had its hefty share of security problems, and I’ve suffered from them myself a couple of times, ending up patching it manually more than once. At one point when I decided to bite the bullet and upgrade to the latest version, the upgrade no longer worked, and I postponed it for later.

Time passed, I tried again without success and then more time passed.

I finally fixed the issues I had with upgrading. After a series of manual fiddles I managed to upgrade to the latest WordPress, and when doing so my old theme was considered broken/incompatible, so I threw that out and started fresh with a new theme. This new one is based on one of the simple default ones WordPress ships for free. I’ve mostly just made it slightly wider and edited the looks somewhat. I don’t need fancy. Hopefully I’ll be able to keep up with WordPress better this time.

Additionally, I added a captcha that now forces users to solve an easy math problem before submitting anything to the blog, to help me fight spam, and perhaps even more to solve a problem I have with spambots creating new users. Yesterday I removed over 3300 users who never posted anything that was accepted.

Enjoy.  Now back to our regular programming!

Syndicated 2015-09-01 09:30:01 from daniel.haxx.se

31 Aug 2015 mjg59   » (Master)

Working with the kernel keyring

The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.
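
For completeness, here is a rough C version of the same sequence using libkeyutils (link with -lkeyutils and run under sudo so the user keyring involved is UID 0's); the named constants spell out the hex masks above, with KEY_POS_ALL | KEY_USR_ALL being the 0x3f3f0000 used in the keyctl commands. This is just a sketch of the walkthrough, not code from rkt or any other project:

/* Create a key in the session keyring (which we possess), open up its
 * permissions, then move it to UID 0's user keyring - mirroring the
 * keyctl add/setperm/link/unlink sequence above. */
#include <stdio.h>
#include <keyutils.h>

int main(void)
{
    key_serial_t key = add_key("user", "testkey", "testdata", 8,
                               KEY_SPEC_SESSION_KEYRING);
    if (key < 0) { perror("add_key"); return 1; }

    /* 0x3f3f0000: full possessor plus full user permissions. */
    if (keyctl_setperm(key, KEY_POS_ALL | KEY_USR_ALL) < 0) {
        perror("keyctl_setperm"); return 1;
    }

    /* Make it visible to root's other sessions, then drop it from the
     * session keyring so the unprivileged user can't revoke it. */
    if (keyctl_link(key, KEY_SPEC_USER_KEYRING) < 0) { perror("keyctl_link"); return 1; }
    if (keyctl_unlink(key, KEY_SPEC_SESSION_KEYRING) < 0) { perror("keyctl_unlink"); return 1; }

    printf("key %d now lives in UID 0's user keyring\n", key);
    return 0;
}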

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.


Syndicated 2015-08-31 17:18:52 from Matthew Garrett

31 Aug 2015 mikal   » (Journeyer)

Walk to the Southern Most Point

I've just realized that I didn't post any pics of my walk to the southernmost point of the ACT. The CBC had a planned walk to the southernmost point on the ACT border and I was immediately intrigued. So, I took a day off work and gave it a go. It was well worth the experience, especially as Matthew the guide had a deep knowledge of the various huts and so forth in the area. A great day.

See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150818-southernmostpoint photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches


Syndicated 2015-08-30 20:33:00 (Updated 2015-08-31 08:04:10) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

29 Aug 2015 dmarti   » (Master)

Watering the data lawn

News from California: Big month for conservation: Californians cut water use by 31% in July.

Governor Brown said to cut back by 25%, and people did 31%.

Why? We were watering and maintaining lawns because we were expected to, because everyone else was doing it. As soon as we had a good excuse to cut back, a lot of us did, even if we overshot the 25% target.

Today, advertising on the web has its own version of lawn care. Ad people have the opportunity to collect excess data. Everyone is stuck watering the data lawn and running the data mower. So the ad-supported web is getting mixed up with surveillance marketing, failing to build any new brands, and getting less and less valuable for everyone.

Clearly, the optimum amount of data to collect is not "as much as possible". If an advertiser is able to collect enough data to target an ad too specifically, that ad loses its power to communicate the advertiser's intentions in the market, and becomes just like spam or a cold call. By enabling users to confidently reduce the amount of information they share, advertisers make their own signal stronger. (Good explanation of signaling and advertising from Eaon Pritchard.)

Where's a good reason to justify a shift to higher-value advertising? Everybody wants to get out of the race to collect more and more, less and less useful, data. So what's a good excuse to start?

Could a good news frenzy do it? No IT company is better at kicking off a news frenzy than Apple, and now Apple is doing Content Blocking. Doc Searls covers Content Blocking's interaction with Apple's own ad business, and adds:

Apple also appears to be taking sides against adtech with its privacy policy, which has lately become more public and positioned clearly against the big tracking-based advertising companies (notably Google and Facebook).

It's a start, but unfortunately, Big Marketing tends to take Apple's guidance remarkably slowly. Steve Jobs wrote Thoughts on Flash in 2010, and today, more than five years later, battery-sucking Flash ads are still a thing.

So even if Apple clobbers adtech companies over the head with a "Thoughts on Tracking" piece, expect a lot of inertia. (People who can move fast are already moving out of adtech to other things.)

Bob Hoffman writes:

The era of creepy tracking, maddening pop-ups and auto-play, and horrible banners may be drawing to its rightful conclusion.

But things don't just happen on the Internet. Someone builds an alternative. It looks obvious later, but somebody had to take the first whack at it. Tracking protection is great, but someone has to build the tools, check that they don't break web sites, and spread the word to regular users.

So why just look at tracking protection and say, wow, won't it be cool if this catches on?

Individuals, sites, and brands can help make tracking protection happen.

And if you really think about it, tracking protection tools are just products that users install. If only there were some way to get the attention of a bunch of people at once to persuade them to try things.

Syndicated 2015-08-29 14:28:16 from Don Marti

27 Aug 2015 bagder   » (Master)

Content over HTTP/2

Roughly a week ago, on August 19, cdn77.com announced that they are the “first CDN to publicly offer HTTP/2 support for all customers, without ‘beta’ limitations”. They followed up just hours later with a demo site showing off how HTTP/2 might perform side by side with an HTTP/1.1 example. And yes, the big competitor CDNs are not yet offering HTTP/2 support, it seems.

Their demo site initially got criticized for not being realistic and for showing HTTP/2 in a much better light than a real-life scenario would be likely to, and it was subsequently updated fairly quickly. It is useful to compare it with the similarly designed, previously existing demo sites hosted by Akamai and the Go project.

cdn77’s offering is built on nginx’s alpha patch for HTTP/2 that was announced just two weeks ago. I believe nginx’s full release is still planned to happen by the end of this year.

I’ve talked with cdn77’s Jakub Straka about their HTTP/2 efforts, and since I suspect there are a few others in my audience who are similarly curious, I’m offering this interview-style posting here, intertwined with my own comments and thoughts. This is not meant to be just a big ad for this company, but since they’re early players in this field I figure their views and comments on this are worth reading!

I’ve been in touch with more than one person who’ve expressed their surprise and awe over the fact that they’re using this early patch for nginx to run in production. So I had to ask them about that. Below, Jakub’s comments are all prefixed with his name and shown using italics.

nginx

Jakub: “Yes, we are running the alpha patch, which is basically a slightly modified SPDY. In the past we have been in touch with the Nginx team and exchanged tips and ideas, the same way we plan to work on the alpha patch in the future.

We’re actually pretty careful when deploying new and potentially unstable packages into production. We have separate servers for http2 and we are monitoring them 24/7 for any exceptions. We also have dedicated developers who debug any issues we are facing with these machines. We would never jeopardize the stability of our current network.

I’m no expert on either server-side HTTP/2 or nginx in particular, but I think I read somewhere that the nginx HTTP/2 patch removes the SPDY support in favor of the new protocol.

Jakub: “You are right. HTTP/2 patch “rewrites” SPDY into the HTTP/2, so the SPDY is no longer supported after applying the patch. Since we have HTTP/2 running on separate servers, we still have SPDY support on the rest of the network.”

Did the team at cdn77 at all consider using something else than nginx for HTTP/2, like the promising newcomer h2o?

Jakub: “Not at all. Nginx is a clear choice for us. Its architecture and modularity is awesome. It is also very reliable and it has a pretty long history.

On scale

Can you share some of the biggest hurdles you had to overcome to deploy HTTP/2 on this scale with nginx?

Jakub: “Since nobody has tried the patch at such a scale ever before, we had to make sure it would hold up even under pressure, so we needed to create a load-heavy testing environment. We used servers from our partner company 10gbps.io and their 10G uplinks to create intensive ghost traffic. Also, it was important to make sure that supporting tools and applications are HTTP/2 ready – not all of them were. We needed to adjust the way we monitor and control servers in a few cases.

There are a few bugs in Nginx that appear mainly in association with the longer-lived connections. They cause issues with the application layer and consume more resources. To be able to accommodate HTTP/2 and still keep necessary network redundancies, we needed to upgrade our network significantly.

I read this as an indication that the nginx patch isn’t perfected just yet rather than signifying that http2 is special. Perhaps also that http2 connections might use a larger footprint in nginx than good old http1 connections do.

Jakub mentions they see average data bandwidth savings in the order of 20 to 60 percent depending on sites and contents with the switch to h2, but their traffic amounts haven’t been that large yet:

Jakub: “So far, only a fraction of the traffic is running via HTTP/2, but that is understandable since we launched the support a few days ago. On the first day, only about 0.45% of the traffic was HTTP/2 and a big part of this was our own demo site. Over the weekend, we saw an impressive adoption rate and the total HTTP/2 traffic accounts for more than 0.8% now, all that with the portion of our own traffic in this dropping dramatically. We expect to be pushing around 1.2% – 1.5% of total traffic over HTTP/2 by the end of this week.

Understandably, it is ramping up. Still, Firefox telemetry is showing at least 10% of the total traffic over HTTP/2 already.

Future of HTTPS and HTTP/2?

While I’m talking to a CDN operator, I figured I should poll their view on HTTPS going forward! Will the fact that all browsers only support h2 over HTTPS push more of your customers, and your traffic in general, over to HTTPS, you think?

Jakub: “This is not easy to predict. There is encryption overhead, but HTTP/2 comes with header compression and it is binary. So at this single point, the advantages and disadvantages zero out. Also, the use of HTTPS is rising rapidly even on the older protocol, so we don’t consider this an issue at all.

In general, from a CDN perspective and as someone who just deployed this on a fairly large scale, what’s your general perception of what http2 means going forward?

Jakub: “We believe that this is a huge step forward in how we distribute content online and as a CDN company, we are especially excited as it concerns the very core of our business. From the new features, we have great expectations about cache invalidation that is being discussed right now.

Thanks to Jakub, Honza and Tomáš of cdn77 for providing answers and info. We live in exciting times.

Syndicated 2015-08-27 20:56:25 from daniel.haxx.se

27 Aug 2015 superuser   » (Journeyer)

TIL: Keeping SSH connections alive

It's actually fairly simple. In your SSH config file, you add this:

Host *
    ServerAliveInterval 240

The * after the host can be whatever host you want it to apply to, and * simply means to apply this to all hosts. So you could also apply it this way:

Host example.com
    ServerAliveInterval 240

Also, if you aren't sure where your SSH config file is, on a UNIX based system, it's generally in ~/.ssh/config.

Syndicated 2015-08-27 14:24:59 from Jason Lotito

27 Aug 2015 joey   » (Master)

then and now

It's 2004 and I'm in Oldenburg DE, working on the Debian Installer. Colin and I pair program on partman, its new partitioner, to get it into shape. We've somewhat reluctantly decided to use it. Partman is in some ways a beautiful piece of work, a mass of semi-object-oriented, super extensible shell code that sprang fully formed from the brow of Anton. And in many ways, it's mad, full of sector alignment twiddling math implemented in tens of thousands of lines of shell script scattered among hundreds of tiny files that are impossible to keep straight. In the tiny Oldenburg Developers Meeting, full of obscure hardware and crazy intensity of ideas like porting Debian to VAXen, we hack late into the night, night after night, and crash on the floor.

sepia toned hackers round a table

It's 2015 and I'm at a Chinese bakery, then at the Berkeley pier, then in a SF food truck lot, catching half an hour here and there in my vacation to add some features to Propellor. Mostly writing down data types for things like filesystem formats and partition layouts, and then some small amount of haskell code to use them in generic ways. Putting these pieces together and reusing stuff already in Propellor (like chroot creation).

Before long I have this, which is only 2 undefined functions away from (probably) working:

let chroot d = Chroot.debootstrapped (System (Debian Unstable) "amd64") mempty d
        & Apt.installed ["openssh-server"]
        & ...
    partitions = fitChrootSize MSDOS
        [ (Just "/boot", mkPartition EXT2)
        , (Just "/", mkPartition EXT4)
        , (Nothing, const (mkPartition LinuxSwap (MegaBytes 256)))
        ]
 in Diskimage.built chroot partitions (grubBooted PC)

This is at least a replication of vmdebootstrap, generating a bootable disk image from that config and 400 lines of code, with enormous customizability of the disk image contents, using all the abilities of Propellor. But is also, effectively, a replication of everything partman is used for (aside from UI and RAID/LVM).

sailboat on the SF bay

What a difference a decade and better choices of architecture make! In many ways, this is the loosely coupled, extensible, highly configurable system partman aspired to be. Plus elegance. And I'm writing it on a lark, because I have some spare half hours in my vacation.

Past Debian Installer team lead Tollef stops by for lunch, I show him the code, and we have the conversation old d-i developers always have about partman.

I can't say that partman was a failure, because it's been used by millions to install Debian and Ubuntu and etc for a decade. Anything that deletes that many Windows partitions is a success. But it's been an unhappy success. Nobody has ever had a good time writing partman recipes; the code has grown duplication and unmaintainability.

I can't say that these extensions to Propellor will be a success; there's no plan here to replace Debian Installer (although with a few hundred more lines of code, propellor is d-i 2.0); indeed I'm just adding generic useful stuff and building further stuff out of it without any particular end goal. Perhaps that's the real difference.

Syndicated 2015-08-27 00:01:59 from see shy jo

25 Aug 2015 hypatia   » (Journeyer)

Wednesday 26 August 2015

I went to San Franciso a month ago this Friday, for the final stages of planning the Ada Initiative’s shutdown. The first morning I woke up there to the news of Nóirín’s death. I wrote “Nóirín was also one of the strongest and bravest people [I] will ever have the privilege of knowing” that same morning and that’s everything I want to say.

So, the only thing about that trip that makes sense to tell is some images.

Staying in the Mission and being in the sun in the streets full of trees and brightly painted houses finally made San Francisco make sense to me. As was probably inevitable, coming from another beautiful city full of gentrifiers.

Long weekday evenings in the dusk at Dolores Park, watching the fog from a distance. Seeing a rainbow.

Mad Max: Fury Road which I had never expected to see, much less like, even though I had heard about it from feminists more than action fans. (Or maybe they were both!)

But instead of rushing into Furiosa’s cab like everyone else I know, I developed an obsession with Pitch Perfect instead and walked up and down Valencia for hours in the middle of the night listening to its soundtrack.

Eating berries. And paté on apple slices.

My family sending me so much Lindt chocolate in San Francisco that I still have about ⅓ of it now. But I ate all the peanut butter balls before I left.

Broken choppy video chat images of V and A smiling at me.

Cutting down my SIM card from my broken phone with scissors rather than waiting another day to call home.

Walking on a hillside in the hot sun near Muir Woods, in a country where pines are supposed to be and eucalypts are pests.

The purple windows of the 787 that brought me home.

Syndicated 2015-08-25 22:26:38 from puzzling.org

25 Aug 2015 gary   » (Master)

td_ta_map_lwp2thr

To debug live processes on modern Linux GDB needs four libthread_db functions:

  • td_ta_map_lwp2thr (required for initial attach)
  • td_thr_get_info (required for initial attach)
  • td_thr_tls_get_addr (not required for initial attach, but required for “p errno” on regular executables)
  • td_thr_tlsbase (not required for initial attach, but required for “p errno” for -static -pthread executables)

To debug a corefile on modern Linux GDB needs one more libthread_db function:

  • td_ta_thr_iter

GDB makes some other libthread_db calls too, but these are bookkeeping that won’t be required with the replacement. So, the order of work will be:

  1. Implement replacements for the four core functions.
  2. Get those approved and committed in GDB, BFD and glibc (and in binutils readelf).
  3. Replace td_ta_thr_iter too, and get that committed.
  4. Implement runtime-linker interface stuff to allow GDB to follow dlmopen.

The first (non-bookkeeping) function GDB calls is td_ta_map_lwp2thr and it’s a pig. If I can do td_ta_map_lwp2thr I can do anything.

When you call it, td_ta_map_lwp2thr has four ways it can proceed:

  1. If __pthread_initialize_minimal has not gotten far enough we can’t rely on whatever’s in the thread registers. If this is the case, td_ta_map_lwp2thr checks that the LWP is the initial thread and sets th->th_unique to NULL. (Other bits of libthread_db spot this NULL and act accordingly.) td_ta_map_lwp2thr decides whether __pthread_initialize_minimal has gotten far enough by examining __stack_user.next in the inferior. If it’s NULL then __pthread_initialize_minimal has not gotten far enough.
  2. On ta_howto_const_thread_area architectures (x86_64, aarch64, arm)
    [glibc/sysdeps/*/nptl/tls.h has
      #define DB_THREAD_SELF CONST_THREAD_AREA(bits, value)
    which exports
      const uint32_t _thread_db_const_thread_area = value;
    from glibc/nptl_db/db_info.c]:
    • td_ta_map_lwp2thr will call ps_get_thread_area with value

    to set th->th_unique.

    ps_get_thread_area (in GDB) does different things for different
    architectures:

    1. on x86_64, value is a register number (FS or GS);
      ps_get_thread_area returns the contents of that register (see the sketch after this list).
    2. on arm, GDB uses PTRACE_GET_THREAD_AREA, NULL and subtracts value from the result.
    3. on aarch64, GDB uses PTRACE_GETREGSET, NT_ARM_TLS and subtracts value from the result.
  3. On ta_howto_reg architectures (ppc*, s390*)
    [glibc/sysdeps/*/nptl/tls.h has
      #define DB_THREAD_SELF REGISTER(bits, size, regofs, bias)...
    which exports
      const uint32_t _thread_db_register32[3] = {size, regofs, bias};
    and/or
      const uint32_t _thread_db_register64[3] = {size, regofs, bias};
    from glibc/nptl_db/db_info.c]:

    td_ta_map_lwp2thr will:

    • call ps_lgetregs to get the inferior’s registers
    • get the contents of the specified register (with _td_fetch_value_local)

        and

    • SUBTRACT bias from the register’s contents

    to set th->th_unique.

  4. On ta_howto_reg_thread_area architectures (i386)
    [glibc/sysdeps/*/nptl/tls.h has
      #define DB_THREAD_SELF REGISTER_THREAD_AREA(bits, size, regofs, bias)...
    which exports
      const uint32_t _thread_db_register32_thread_area[3] = {size, regofs, bias};
    and/or
      const uint32_t _thread_db_register64_thread_area[3] = {size, regofs, bias};
    from glibc/nptl_db/db_info.c]:

    td_ta_map_lwp2thr will:

    • call ps_lgetregs to get the inferior’s registers
    • get the contents of the specified register (with _td_fetch_value_local)
    • RIGHT SHIFT the register’s contents by bias

        and

    • call ps_get_thread_area with that number

    to set th->th_unique.

    ps_get_thread_area (in GDB) does different things for different
    architectures:

    1. on i386, GDB uses PTRACE_GET_THREAD_AREA, VALUE and returns the second element of the result.

Cases 2, 3, and 4 will obviously be hardwired into the specific architecture’s libpthread. But… yeah.
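
As a concrete illustration of the x86_64 branch of case 2: one way for a debugger-side ps_get_thread_area to obtain the FS base (the thread pointer) of a stopped inferior is PTRACE_GETREGS, since the x86-64 user_regs_struct carries fs_base and gs_base. This is only a sketch of the idea for a simple single-threaded target, not GDB's actual implementation:

/* Sketch: read the FS base of a stopped, ptrace-attached process. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    pid_t pid;
    struct user_regs_struct regs;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid = (pid_t) atol(argv[1]);

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0) { perror("attach"); return 1; }
    waitpid(pid, NULL, 0);                 /* wait for the stop */

    if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) < 0) { perror("getregs"); return 1; }
    printf("FS base (would become th_unique): %#llx\n",
           (unsigned long long) regs.fs_base);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return 0;
}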

Syndicated 2015-08-25 15:24:47 from gbenson.net

25 Aug 2015 gary   » (Master)

Infinity

I’m writing a replacement for libthread_db. It’s called Infinity.

Why? Because libthread_db is a pain in the ass for debuggers. GDB has to watch for inferiors loading thread libraries. It has to know that, for example, on GNU/Linux, when the inferior loads libpthread.so then GDB has to load the corresponding libthread_db.so into itself and use that to inspect libpthread’s internal structures. How does GDB find libthread_db? It doesn’t, it has to search for it. How does it know, when it finds it, that the libthread_db it found is compatible with the libpthread the inferior loaded? It doesn’t, it has to load it to see, then unload it if it didn’t work. How does GDB know that the libthread_db it found is compatible with itself? It doesn’t, it has to load it and, erm, crash if it isn’t. How does GDB manage when the inferior (and its libthread_db) has a different ABI to GDB? Well, it doesn’t.

libthread_db means you can’t debug an application in a RHEL 6 container with a GDB in a RHEL 7 container. Probably. Not safely. Not without using gdbserver, anyway–and there’s no reason you should have to use gdbserver to debug what is essentially a native process.

So. Infinity. In Infinity, inspection functions for debuggers will be shipped as bytecode in ELF notes in the same file as the code they pertain to. libpthread.so, for example, will contain a bunch of Infinity notes, each representing some bit of functionality that GDB currently gets from libthread_db. When the inferior starts or loads libraries, GDB will find the notes in the files it already loaded and register their functions. If GDB notices it has, for example, the full set of functions it requires for thread support then, boom, thread support switches on. This happens regardless of whether libpthread was dynamically or statically linked.

(If you’re using gdbserver, gdbserver gives GDB a list of Infinity functions it’s interested in. When GDB finds these functions it fires the (slightly rewritten) bytecode over to gdbserver and gdbserver takes it from there.)

Concrete things I have are: a bytecode format (but not the bytecode itself), an executable with a couple of handwritten notes (with some junk where the bytecode should be), a readelf that can decode the notes, a BFD that extracts the notes and a GDB that picks them up.

What I’m doing right now is rewriting a function I don’t understand (td_ta_map_lwp2thr) in a language I’m inventing as I go along (i8) that’ll be compiled with a compiler that barely exists (i8c) into a bytecode that’s totally undefined to be executed by an interpreter that doesn’t exist.

(The compiler’s going to be written in Python, and it’ll emit assembly language. It’s more of an assembler, really. Emitting assembler rather than going straight to bytecode simplifies things (e.g. the compiler won’t understand numbers!) at the expense of emitting some slightly crappy code (e.g. instruction sequences that add zero). I’m thinking GDB will eventually JIT the bytecode so this won’t matter. GDB will have to JIT if it’s to cope with millions of threads, but jitted Infinity should be faster than libthread_db. None of this is possible now, but it might be sooner than you think with the GDB/GCC integration work that’s happening. Besides, I can think of about five different ways to make an interpreter skip null operations in zero time.)

Future applications for Infinity notes include the run-time linker interface.

Watch this space.

Syndicated 2015-08-25 00:24:49 from gbenson.net

24 Aug 2015 eMBee   » (Journeyer)

Hiring Pike Programmers

Once in a while i have someone refuse to work with me because they don't know Pike. What they are really saying is that they are not willing to learn something new.

If you are a decent programmer, then learning a new programming language is not hard. Technology changes all the time, and every year you'll learn new frameworks and tools. That's part of your work. So why shy away from learning a new language?

If you can't bring yourself to learn a new language then i suspect you'll also have a hard time learning anything else. So actually i should thank you for refusing the job because of that.

You say: learning a new language is hard.

If you believe that, you haven't tried enough. Sure, if you pick one of the more unusual languages like Haskell, it may be hard (but i don't know, i have not tried learning Haskell yet). In general, learning your second language is probably the hardest (with the first language you learn, everything is new and you expect it to be hard, but with the second language maybe you fear it will be as difficult as the first one, and you don't want to go through that again), and learning a new syntax may take some getting used to.

But all of these hurdles are measured in days.

Pike in particular has a syntax very close to C and Java. (that is, operations that are the same in C, Java and Pike also use the same syntax, with very few exceptions). This makes the syntax also similar to Javascript, PHP, and the many other languages with a C-inspired syntax. Picking that up should not be hard.

The rest is learning the Pike libraries and figuring out what makes Pike tick. You should have that down within a few weeks.

This is the same for pretty much any other language you might start to learn.

I am talking from experience here. I'll give you a few examples:

At my first fulltime job i was hired for my Pike experience. As a junior programmer who hadn't finished university yet, i didn't really have any work history. But i did have a number of Pike modules for the Roxen web application server that i could show off.

At the same time a university graduate was hired, who had not even seen Pike before joining the team. Within a few weeks she was as productive as the rest of us, and having finished her studies she arguably knew more about programming and could explain more about Pike than i could.

At another job a few years later one of my managers who had just recently joined the company fell in love with Pike, and when he left he built his own company using Pike as the main development language. This guy was not even a programmer.

When i came to china, my first job was as a python programmer. I had learned python by then, but i had no practical experience whatsoever. I was allowed to do the programming tests in Pike (they had an automated testsuite, which of course could not handle Pike, so in my case the answers were reviewed manually. They had no problems reviewing their tests in a language they had never seen before. That's how good they were). One of the tests i did in python, and i passed and got the job. I was productive from the start.

A few years ago i hired 3 chinese students to work for me. Since this was the first time i hired anyone, i was not sure how learning a new language would go down on the first day, possibly their first experience working with a foreigner too. So the first project i gave them was in Java. It was a Java client for the sTeam server. Two of the students left after the summer holidays were over, but one stayed on, and his next project was in Pike. It was also for the sTeam server, so he could reuse his knowledge of the APIs that he learned during the Java project, but he did have to learn the language itself. He was productive within a few days.

Last year i was hired to help with a PHP project, using the Laravel framework. I had never really written PHP code before, but the framework was not so different from others (eg Django), so i was productive immediately. And i ended up fixing other people's code too.

This summer, i was working with 3 students for Google Summer Of Code. One student worked on the sTeam server, and had to learn Pike for that. He did it during the get-to-know period and started churning out code from the first day of the coding-period.

Another student picked a smalltalk project. She learned smalltalk as soon as she picked the project, joined the pharo-smalltalk community and became a recognized contributor to the pharo 4.0 release. All before her proposal for the GSOC project was even accepted.

Convinced yet?

You say: No one else uses Pike. It won't help me get a job.

That is probably true. But it is becoming less true as time goes by.

One of the problems with hiring is that, just as you believe learning a new language is hard, so do the hiring managers, and thus they search only for programmers that already know the language that they will need to use.

In the Pike community too. I was the only Pike programmer available who liked moving countries, and so i had my pick for jobs in the USA, in Germany, in New Zealand, in Latvia. Thanks to Pike i got around. Try that with a popular language.

Fortunately, this is changing. Like at my first China job, more companies are recognizing the ability to learn as more important than knowing a particular language. For them it won't matter which programming languages you have learned, as long as you can demonstrate your learning skill. In fact, learning an unknown language will let you stand out as someone serious about learning programming languages.

Learning new languages will also increase your confidence in your ability. For that PHP job i was never asked how much PHP experience i had. I did make clear that i had no experience with Laravel, which is something they could not expect from everyone, even if they had plenty of PHP experience. But i had experience with similar frameworks, and i was confident that i could pick up what i needed quickly. And i proved it.

When i am hiring programmers myself, i definitely don't care which languages they know. All i care about is that they know at least two languages. These people have at least gotten over the second-language hump, and learning a third language will be a breeze. Whether it's Pike or any other language.

Stop telling me that you can't learn a new programming language. You can! Because if you couldn't, you would not qualify as a programmer to begin with. At least, i would not hire you.

Syndicated 2015-08-24 17:47:51 from DevLog

19 Aug 2015 bagder   » (Master)

One year and 6.76 million key-presses later

I’ve been running a keylogger on my main Linux box for exactly one year now. The keylogger logs every key-press – its scan code together with a time stamp. This now allows me to do some analyses and statistics of what a year worth of using a keyboard means.

The keyboard being logged is attached to my primary work machine and is also my primary spare-time code input device. Sometimes I travel and sometimes I take time off (so-called vacation) and then I usually carry my laptop with me instead, which I don’t log and which uses a different keyboard layout anyway, so merging a log from such a different machine would probably skew the results a bit too much.

Func KB-460 keyboard

What did I learn?

A full year of use meant 6.76 million keys were pressed. I’ve used the keyboard 8.4% on weekends. I used the keyboard at least once on 298 days during the year.

When I’m active, I average on 2186 keys pressed per hour (active meaning that at least one key was pressed during that hour), but the most fierce keyboard-bashing I’ve done during a whole hour was when I recorded 8842 key-presses on June 9th 2015 between 23:00 and 24:00! That day was actually also the most active single day during the year with 63757 keys used.

In total, I was active on the keyboard 35% of the time (looking at active hours). 35% of the time equals about 59 hours per week, on average. I logged 19% keyboard active minutes, minutes in which I hit at least one key. I’m pretty amazed by that number as it equals almost 32 hours a week with really active keyboard action.

Zooming in and looking at single minutes, the most active minute occurred at 15:48 on November 10th 2014, when I managed to hit 405 keys. In an average active minute I type 65 keys.

Looking at usage distribution over week days: Tuesday is my most active day with 19.7% of all keys. Followed by Thursday (19.1%), Monday (18.7%), Wednesday (17.4%) and Friday (16.6%). I already mentioned weekends, and I use the keyboard 4.8% on Sundays and a mere 3.6% on Saturdays.

Separating the time-stamps over the hours of the day, the winning hour is, quite surprisingly, the 23-00 hour with 11.9%, followed by the more expected 09-10 (10.0%), 10-11 and 14-15. Counting the most active minutes over the day shows an even more interesting table: all of the top 10 most active minutes are between 23-00!

I’ll let the keylogger run and see what else I’ll learn over time…

Syndicated 2015-08-19 05:58:23 from daniel.haxx.se

18 Aug 2015 mjg59   » (Master)

Canonical's deliberately obfuscated IP policy

I bumped into Mark Shuttleworth today at Linuxcon and we had a brief conversation about Canonical's IP policy. The short summary:

  • Canonical assert that the act of compilation creates copyright over the binaries, and you may not redistribute those binaries unless (a) the license prevents Canonical from restricting redistribution (eg, the GPL), or (b) you follow the terms of their IP policy. This means that, no matter what Dustin's blogpost says, Canonical's position is that you must ask for permission before distributing any custom container images that contain Ubuntu binaries, even if you use no Ubuntu trademarks in the process. Doing so without their permission is an infringement of their copyright.
  • Canonical have no intention of clarifying their policy, because Canonical benefit from companies being legally uncertain as to whether they have permission to do something or not.
  • Mark justifies maintaining this uncertainty by drawing an analogy between it and the perceived uncertainties that exist around certain aspects of the GPL. I disagree with this analogy pretty strongly. One of the main reasons for the creation of GPLv3 was to deal with some more ambiguous aspects of GPLv2 (such as what actually happened after license termination and how patents interacted with the GPL). The FSF publish a large FAQ intended to provide further clarity. The major ambiguity is in what a derivative work actually is, which is something the FSF can't answer absolutely (that's going to be up to courts) but will give its opinion on when asked. The uncertainties in Canonical's IP policy aren't a result of a lack of legal clarity - they're a result of Canonical's refusal to answer questions.

The even shorter summary: Canonical won't clarify their IP policy because they believe they can make more money if they don't.

Why do I keep talking about this? Because Canonical are deliberately making it difficult to create derivative works, and that's one of the core tenets of the definition of free software. Their IP policy is fundamentally incompatible with our community norms, and that's something we should care about rather than ignoring.


Syndicated 2015-08-18 19:02:52 from Matthew Garrett

17 Aug 2015 marnanel   » (Journeyer)

Anarchism compared to vegetarianism

Something I said at a party at the vicarage last night:

People ask why I'm an anarchist. The reasons are a bit like my reasons for being a vegetarian. I believe this would be a better world if we gave up eating meat-- and that humanity can't survive unless we do. Once, perhaps, our civilisation was at a stage where eating meat was necessary, but we've shown we've got beyond that now. But now and then, in a world where most people still have to eat meat, I might agree to eat meat too for the short term-- with caution that it doesn't become the long term. It's easy for the best to be the enemy of the good.

This entry was originally posted at http://marnanel.dreamwidth.org/340563.html. Please comment there using OpenID.

Syndicated 2015-08-17 12:50:48 from Monument

13 Aug 2015 Stevey   » (Master)

Making an old android phone useful again

I've got an HTC Desire, running Android 2.2. It is old enough that installing applications such as those from my bank, etc, fails.

The process of upgrading the stock ROM/firmware seems to be:

  • Download an unsigned zip file, from a shady website/forum.
  • Boot the phone in recovery mode.
  • Wipe the phone / reset to default state.
  • Install the update, and hope it works.
  • Assume you're not running trojaned binaries.
  • Hope the thing still works.
  • Reboot into the new O/S.

All in all .. not ideal .. in any sense.

I wish there were a more "official" way to go. For the moment I guess I'll ignore the problem for another year. My nokia phone does look pretty good ..

Syndicated 2015-08-13 14:44:38 from Steve Kemp's Blog

12 Aug 2015 sye   » (Journeyer)

repost from here:
http://bin63.com/how-to-setup-a-git-repository-on-freebsd
Install devel/git from the ports tree. Optionally select GITWEB, SVN, P4 or CVS.
This did what I needed to do on a FreeBSD 9.1 VPS:
cd /usr/ports/devel/git
make install clean; rehash
cd /usr/ports/x11-servers/x11rdb
make install clean; rehash
cd /usr/ports/devel/gitg/
...

10 Aug 2015 Stevey   » (Master)

A brief look at the weed file store

Now that I've got a citizen-ID, a pair of Finnish bank accounts, and have enrolled in a Finnish language-course (due to start next month) I guess I can go back to looking at object stores, and replicated filesystems.

To recap my current favourite, despite the lack of documentation, is the Camlistore project which is written in Go.

Looking around, there are lots of interesting projects being written in Go, and my next one to look at is seaweedfs, which despite its name is not a filesystem at all, but a store which is accessed via HTTP.

Installation is simple, if you have a working go-lang environment:

go get github.com/chrislusf/seaweedfs/go/weed

Once that completes you'll find you have the executable bin/weed placed beneath your $GOPATH. This single binary is used for everything though it is worth noting that there are distinct roles:

  • A key concept in weed is "volumes". Volumes are areas to which files are written. Volumes may be replicated, and this replication is decided on a per-volume basis, rather than a per-upload one.
  • Clients talk to a master. The master notices when volumes spring into existence, or go away. For high-availability you can run multiple masters, and they elect the real master (via RAFT).

In our demo we'll have three hosts: one, the master, and two and three, which are storage nodes. First of all we start the master:

root@one:~# mkdir /node.info
root@one:~# weed master -mdir /node.info -defaultReplication=001

Then on the storage nodes we start them up:

root@two:~# mkdir /data;
root@two:~# weed volume -dir=/data -max=1  -mserver=one.our.domain:9333

Then the second storage-node:

root@three:~# mkdir /data;
root@three:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

At this point we have a master to which we'll talk (on port :9333), and a pair of storage-nodes which will accept commands over :8080. We've configured replication such that all uploads will go to both volumes. (The -max=1 configuration ensures that each volume-store will only create one volume each. This is in the interest of simplicity.)

Uploading content works in two phases:

  • First tell the master you wish to upload something, to gain an ID in response.
  • Then using the upload-ID actually upload the object.

We'll do that like so:

laptop ~ $ curl -X POST http://one.our.domain:9333/dir/assign
{"fid":"1,06c3add5c3","url":"192.168.1.100:8080","publicUrl":"192.168.1.101:8080","count":1}

client ~ $ curl -X PUT -F file=@/etc/passwd  http://192.168.1.101:8080/1,06c3add5c3
{"name":"passwd","size":2137}

In the first command we call /dir/assign, and receive a JSON response which contains the IPs/ports of the storage-nodes, along with a "file ID", or fid. In the second command we pick one of the hosts at random (which are the IPs of our storage nodes) and make the upload using the given ID.
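
For what it's worth, the same two-phase dance can be driven from a program; here is a hedged libcurl sketch of it (the master/volume URLs and the fid are copied from the example above, and a real client would of course parse the fid and url fields out of the assign response instead of hard-coding them):

/* Sketch: seaweedfs two-phase upload with libcurl.
 * Phase 1 asks the master for a fid; phase 2 PUTs the file to a volume
 * server under that fid. JSON parsing is deliberately left out. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLcode res;
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* Phase 1: POST /dir/assign on the master; the JSON reply (fid, url)
   * is written to stdout by libcurl's default write callback. */
  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "http://one.our.domain:9333/dir/assign");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "");   /* empty POST body */
  res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  if (res != CURLE_OK) return 1;

  /* Phase 2: multipart PUT of the file to the volume server, using the
   * fid the master handed out (hard-coded here for the sketch). */
  curl = curl_easy_init();
  curl_mime *form = curl_mime_init(curl);
  curl_mimepart *part = curl_mime_addpart(form);
  curl_mime_name(part, "file");
  curl_mime_filedata(part, "/etc/passwd");
  curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.1.101:8080/1,06c3add5c3");
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
  res = curl_easy_perform(curl);
  curl_mime_free(form);
  curl_easy_cleanup(curl);

  curl_global_cleanup();
  return res == CURLE_OK ? 0 : 1;
}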

If the upload succeeds it will be written to both volumes, which we can see directly by running strings on the files beneath /data on the two nodes.

The next part is retrieving a file by ID, and we can do that by asking the master server where that ID lives:

client ~ $ curl http://one.our.domain:9333/dir/lookup?volumeId=1,06c3add5c3
{"volumeId":"1","locations":[
 {"url":"192.168.1.100:8080","publicUrl":"192.168.1.100:8080"},
 {"url":"192.168.1.101:8080","publicUrl":"192.168.1.101:8080"}
]}

Or, if we prefer we could just fetch via the master - it will issue a redirect to one of the volumes that contains the file:

client ~$ curl http://one.our.domain:9333/1,06c3add5c3
<a href="http://192.168.1.100:8080/1,06c3add5c3">Moved Permanently</a>

If you follow redirections then it'll download, as you'd expect:

client ~ $ curl -L http://one.our.domain:9333/1,06c3add5c3
root:x:0:0:root:/root:/bin/bash
..

That's about all you need to know to decide if this is for you - in short uploads require two requests, one to claim an identifier, and one to use it. Downloads require that your storage-volumes be publicly accessible, and will probably require a proxy of some kind to make them visible on :80, or :443.

A single "weed volume .." process, which runs as a volume-server can support multiple volumes, which are created on-demand, but I've explicitly preferred to limit them here. I'm not 100% sure yet whether it's a good idea to allow creation of multiple volumes or not. There are space implications, and you need to read about replication before you go too far down the rabbit-hole. There is the notion of "data centres", and "racks", such that you can pretend different IPs are different locations and ensure that data is replicated across them, or only within-them, but these choices will depend on your needs.

Writing a thin middleware/shim to allow uploads to be atomic seems simple enough, and there are options to allow exporting the data from the volumes as .tar files, so I have no undue worries about data-storage.

This system seems reliable, and it seems well designed, but people keep saying "I'm not using it in production because .. nobody else is" which is an unfortunate problem to have.

Anyway, I like it. The biggest omission is really authentication. All files are public if you know their IDs, but at least they're not sequential ..

Syndicated 2015-08-10 13:29:10 from Steve Kemp's Blog

9 Aug 2015 caolan   » (Master)

guadec 2015 porting LibreOffice to gtk3 slides

I presented our "porting LibreOffice to GTK3" talk at GUADEC yesterday. Here, as a hybrid pdf, are those slides, with a rough guide to our architecture there and current wayland progress.

It was pointed out after the presentation (by jrb), that our gtk3-themed spinbuttons had the up and down buttons in +,- order instead of the correct -,+ order. So that's fixed now.

Syndicated 2015-08-09 13:14:00 (Updated 2015-08-09 13:14:11) from Caolán McNamara

9 Aug 2015 caolan   » (Master)

crash testing, 80000 documents, 0 export failures, 0 import failures

The last LibreOffice crashtesting run reports that we have hit our goal of 0 import/export crashes/asserts. This is on a refreshed, up-to-date corpus of 80,000 documents from various bugzillas and other sources.

Earlier runs were over a static collection of 76,000 documents; now that we have zeroed the dials, hopefully we can refresh the corpus far more frequently, perhaps on every run, and actively trawl for crasher documents.

Syndicated 2015-08-09 09:57:00 (Updated 2015-08-09 13:14:56) from Caolán McNamara

8 Aug 2015 mjg59   » (Master)

Difficult social problems are still difficult problems

After less than a week of complaints, the TODO group have decided to pause development of their code of conduct. This seems to have been triggered by the public response to the changes I talked about here, which TODO appear to have been completely unprepared for.

While disappointing in a bunch of ways, this is probably the correct decision. TODO stumbled into this space with a poor understanding of the problems that they were trying to solve. Nikki Murray pointed out that the initial draft lacked several of the key components that help ensure less privileged groups can feel that their concerns are taken seriously. This was mostly rectified last week, but nobody involved appeared to be willing to stand behind those changes in a convincing way. This wasn't helped by almost all of this appearing to land on Github's plate, with the rest of the TODO group largely missing in action[1]. Where were Google in this? Yahoo? Facebook? Left facing an angry mob with nobody willing to make explicit statements of support, it's unsurprising that Github would try to back away from the situation.

But that doesn't remove their blame for being in the situation in the first place. The statement claims "We are consulting with stakeholders, community leaders, and legal professionals", which is great. It's also far too late. If an industry body wrote a new kernel from scratch and deployed it without any external review, then discovered that it didn't work and only then consulted any of the existing experts in the field, we'd never take them seriously again. But when an industry body turns up with a new social policy, fucks up spectacularly and then goes back to consult experts, it's expected that we give them a pass.

Why? Because we don't perceive social problems as difficult problems, and we assume that anybody can solve them by simply sitting down and talking for a few hours. When we find out that we've screwed up we throw our hands in the air and admit that this is all more difficult than we imagined, and we give up. We ignore the lessons that people have learned in the past. We ignore the existing work that's been done in the field. We ignore the people who work full time on helping solve these problems.

We wouldn't let an industry body with no experience of engineering build a bridge. We need to accept that social problems are outside our realm of expertise and defer to the people who are experts.

[1] The repository history shows the majority of substantive changes were from Github, with the initial work appearing to be mostly from Twitter.


Syndicated 2015-08-08 20:01:52 from Matthew Garrett

7 Aug 2015 badvogato   » (Master)

The other day, my kid called 9-1-1, attempting to invoke a higher authority to shut me up while I was doing the talking and scolding, unafraid of his whining and crying, in my own kitchen. Can you believe THAT? He says I committed the crime of mental abuse. He's 13 now. He needs to learn to handle other people's talking more constructively and gracefully. The best strategy, if one doesn't like anybody's or any group's talking, is to leave the premises after dropping one's own hint of displeased sentiment. I am all ears for any better work-around ... currently browsing http://labix.org/lunatic-python

7 Aug 2015 mones   » (Journeyer)

New router, worse router

Our internet overlord TeleCable has decided to upgrade the hardware they had installed at home (the old Motorola SURFboard SB5101E) to something new supporting DOCSIS 3.0, so they can happily say all their customers enjoy 100 Mbps download rates. I haven't checked that claim yet, though.

The hardware of choice (I wonder who chose it) is a Cisco EPC3925, which also comes with Wi-Fi, so I decided to shut down my Linksys WRT54GL to save some electricity and enjoy the faster Wi-Fi (N in the Cisco vs G in the Linksys).

Unfortunately the EPC3925 doesn't like machines with static IP addresses on the network and was unable to route between two of them over the wired ports. Strangely, both machines were reachable when accessed from a third machine connected to the wireless network, also with a static address (!).

After struggling with it for a whole afternoon (I even reformatted the SD card of one of the machines, thinking it could be the problem) I gave up on static addresses and solved it the other way around:


  • Reconfigure the whole set of machines with static addresses to use DHCP instead

  • Watch the machines pop up in the router's client list as they connect to the network and write down their MAC addresses

  • Configure the router's DHCP server to give a fixed address to each machine's MAC


This way they still have a fixed address, but now the router is able to route traffic among them. While I was at it I decided to collect the MACs of the devices which had no fixed address (TV, Wii, smartphone) and give them one too, so now with a simple ping I can check whether they're on or off :-) (before, I had to nmap the whole /24 segment to be able to identify the machines).

Anyway, although it works mostly OK now, I'm still not liking it much. Clearly it's a substandard device (each time you expand the DHCP address range every other DHCP configuration is lost!), and who knows what other surprises it hides. For now it has entered the "while it works, don't touch it" category, but I don't expect it to last as long as the Linksys.

Syndicated 2015-08-07 16:28:01 from Ricardo Mones

7 Aug 2015 superuser   » (Journeyer)

Photos

So, we looked at JPEG image encoding, which is a very popular image codec. Especially since our image is going to be blurred heavily on the client, band-limiting the image data, JPEG should compress this image quite efficiently for our purposes. Unfortunately, the standard JPEG header is hundreds of bytes in size. In fact, the JPEG header alone is several times bigger than our entire 200-byte budget. However, excluding the JPEG header, the encoded data payload itself was approaching our 200 bytes. We just needed to figure out what to do about that pesky header!
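
To make the header/payload split concrete, here is a rough sketch of the idea (my own illustration, not the actual implementation): if the encoder settings and dimensions are fixed, the bytes up to the start-of-scan marker are essentially constant, so only the scan data needs to travel and a stored copy of the header can be prepended on the client. The file name is just a placeholder.

def split_jpeg(data):
    # Everything before the start-of-scan (SOS, 0xFFDA) marker is header
    # material; with fixed dimensions and quantisation tables it is roughly
    # the same for every thumbnail, so it can be kept client-side.
    sos = data.find(b"\xff\xda")
    if sos == -1:
        raise ValueError("no SOS marker; not a baseline JPEG?")
    return data[:sos], data[sos:]

with open("thumb.jpg", "rb") as fh:
    header, payload = split_jpeg(fh.read())
print(len(header), "header bytes,", len(payload), "payload bytes")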

The technology behind preview photos

Syndicated 2015-08-07 13:27:25 (Updated 2015-08-07 13:30:03) from Jason Lotito

7 Aug 2015 mikal   » (Journeyer)

The Crossroad




ISBN: 9781743519103
LibraryThing
Written by a Victoria Cross recipient, this is the true story of a messed up kid who made something of himself. Mark's dad died of cancer when he was young, and his mum was murdered. Mark then went through a period of being a burden on society, breaking windows for fun and generally being a pain in the butt. But then one day he decided to join the army...

This book is very well written, and super readable. I enjoyed it a lot, and I think it's an important lesson about how troubled teenagers are sometimes that way because of pain in their past, and can often still end up being valued contributors to society. I have been recommending this book to pretty much everyone I meet since I started reading it.

Tags for this post: book mark_donaldson combat sas army afghanistan
Related posts: Goat demoted


Comment

Syndicated 2015-08-06 22:04:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

7 Aug 2015 mikal   » (Journeyer)

Terrible pong

The kids at coding club have decided that we should write an implementation of pong in python. I took a look at some options, and decided tkinter was the way to go. Thus, I present a pong game broken up into stages which are hopefully understandable to an 11-year-old: Operation Terrible Pong.
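
As a taste of the approach, here is a minimal tkinter sketch (my own illustration, not the actual Operation Terrible Pong code): a ball bouncing around a canvas, which is roughly the kind of first stage such a staged game starts from.

import tkinter as tk

WIDTH, HEIGHT, SIZE = 640, 480, 20

root = tk.Tk()
root.title("Terrible Pong, stage one")
canvas = tk.Canvas(root, width=WIDTH, height=HEIGHT, bg="black")
canvas.pack()
ball = canvas.create_oval(0, 0, SIZE, SIZE, fill="white")
dx, dy = 4, 3

def tick():
    global dx, dy
    canvas.move(ball, dx, dy)
    x1, y1, x2, y2 = canvas.coords(ball)
    if x1 <= 0 or x2 >= WIDTH:    # bounce off the left/right walls
        dx = -dx
    if y1 <= 0 or y2 >= HEIGHT:   # bounce off the top/bottom walls
        dy = -dy
    root.after(16, tick)          # schedule the next frame (~60 fps)

tick()
root.mainloop()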

Tags for this post: coding_club python game tkinter
Related posts: More coding club; Implementing SCP with paramiko; Coding club day one: a simple number guessing game in python; Packet capture in python; mbot: new hotness in Google Talk bots; Calculating a SSH host key with paramiko

Comment

Syndicated 2015-08-06 17:21:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

6 Aug 2015 marnanel   » (Journeyer)

Boycotting the Stonewall movie

Homophobia seems to me as if the straight people are crammed into a small and dimly-lit circular compound, holding on to all the power and hating the queer people outside full of colours and sunshine. Most of us want to break the wall down, stop the hatred, let the power flood out and the colours flood in. But some say the answer is for everyone outside to run away from the sunshine and climb into the courtyard too.

For years before the Stonewall riots, queer people had held peaceful protests asking to be respected in the same way that straight people are respected. Nobody listened. Then the riot happened, queer people fought back, not assimilated and not ashamed. And the wall began to break.

But the wall-climbers haven’t gone away. We’ve often seen LGBT associations forget trans folk in their hurry to climb over the wall into respectability. And this film is selling a lie. The rioters weren’t the acceptable face of gay culture. They weren’t even trying to be.

They lived on the outside.

So do we.

This entry was originally posted at http://marnanel.dreamwidth.org/340223.html. Please comment there using OpenID.

Syndicated 2015-08-06 17:03:41 from Monument

4 Aug 2015 hypatia   » (Journeyer)

The Ada Initiative’s sunset

This morning, the Ada Initiative, which I co-founded in 2011 and have been employed by between 2011 and now, announced our shutdown.

Sunset over San Francisco, original by Nan Palmero

I’m proud of all the work we talked about in the announcement, but a few things of mine over the years in particular that I enjoyed doing a lot and that I hope will have a continuing impact:

AdaCamp. AdaCamp Melbourne was my idea, and was, for me, something of a followup to the LinuxChix/Haecksen miniconfs I founded in 2007, but, as we had done with the Ada Initiative, decoupled from the Linux community specifically, and explicitly feminist and incorporating what I’d learned from organizing earlier women’s events and meetups. It grew into much more over time, incorporating ideas from other events like quiet rooms and inclusive catering, and solving problems that plagued the events that all of the Ada Initiative staff and AdaCamp staff had been to over the years.

The guide to responding to harassment reports as an event organizer. This was based on an email I wrote to a conference organizer who was wondering what one actually does when a harassment report comes in, which, as I tend to do with my best emails, I later edited to put on the web. The wiki text has been somewhat edited and expanded of course, but is substantially similar to my initial version. It formed the basis of the enforcement manual that PyCon developed.

The AdaCamp Toolkit. I wrote more than half of this in the month between closing the AdaCamp program and launching the Toolkit, and edited the remainder from material developed internally. Not since the Geek Feminism wiki have I had so much (rather intense) fun emptying the contents of my head onto a website.

The Impostor Syndrome Training and our Impostor Syndrome Proofing article. AdaCampers had been discussing Impostor Syndrome since the event in Melbourne. I developed the version given at AdaCamps from Portland onwards, and which I will teach in Sydney shortly, built up around an exercise developed by Leigh Honeywell for AdaCamp SF, and we’re releasing it publicly after the Sydney workshop.

I also did a great deal of the behind the scenes project management and technical work (web work, systems administration, payments processing setup) throughout the life of the organization, and internally my documents are the core of our institutional knowledge. (I am hoping to edit a few of the fundraising documents for publication this month.) Valerie’s life will never be the same again now that everything goes in a spreadsheet. I am hoping I can offer my project management skills to another organization soon.

There’s a lot of smaller things that I would never have without the Ada Initiative, like quite good double-entry bookkeeping skills, passable knowledge of Javascript, and too much knowledge of US non-profit tax law.

Thank you to Valerie Aurora, my friend and co-founder, without whom the Ada Initiative could never have existed in the first place, would never have had the vision or the conviction to do 95% of what it did, and who made a very unlikely and very lucky gamble on me as a co-founder four and a half years ago. I’m in San Francisco right now, my last trip for the Ada Initiative, so that we could do this last thing together and go out leaving as much for the community to use as possible.

Thank you to the many many people who worked and volunteered for us over the last four and a half years, who came to our events, who donated, and who advocated for, amplified, and improved our work.

As for what’s up next, I’ll be at the Ada Initiative for another couple of months. During that time, if this sentence of our shutdown notice was of interest, let’s talk:

Mary will be looking for a new position based in Sydney, Australia, working in a leadership role with the right organization.

Sunrise in Sydney, original by Tom French

Image credits:

Nan Palmero, You Heading to Oakland or Space?, CC BY, cropped and colour adjusted by the author of this post.
Tom French, Harbour Sunrise, CC BY, cropped and colour adjusted by the author of this post.

Syndicated 2015-08-04 22:54:30 from puzzling.org

4 Aug 2015 mjg59   » (Master)

Reverse this

The TODO group is an industry body that appears to be trying to define community best practices or something. I don't really know what their backstory is and whether they're trying to do meaningful work or just provide a fig leaf of respectability to organisations that dislike being criticised for doing nothing to improve the state of online communities but don't want to have to actually do anything, and their initial work on codes of conduct was, perhaps, suboptimal. But they do appear to be trying to improve things - this commit added a set of inappropriate behaviours, and also clarified that reverseisms were not actionable behaviour.

At which point Reddit lost its shit, because Reddit is garbage. And now the repository is a mess of white men attempting to explain how any policy that could allow them to be criticised is the real racism.

Fuck that shit.

Being a cis white man who's a native English speaker from a fairly well-off background, I'm pretty familiar with privilege. Spending my teenage years as an atheist of Irish Catholic upbringing in a Protestant school in a region of Northern Ireland that made parts of the bible belt look socially progressive, I'm also pretty familiar with the idea that said privilege doesn't shield me from everything bad in life. Having privilege isn't a guarantee that my life will be better, in the same way that avoiding smoking doesn't mean I won't die of lung cancer. But there's an association in both cases, one that's strong enough to alter the statistical likelihood in meaningful ways.

And that inherently affects discussions about race or gender or sexuality. The probability that I've been subject to systematic discrimination because of these traits is vanishingly small. In the communities this policy is intended to cover, I'm the default. It's very difficult for any minority to exercise power over me. "You're white, you wouldn't understand" isn't fundamentally about my colour, it's about the fact that my colour means I haven't been subject to society trying to make my life more difficult at every opportunity. A community that considers saying that to be racist is a community that will never change the default, a community that will never be able to empower people who didn't grow up with that privilege. A code of conduct that makes it clear that "reverse racism" isn't grounds for complaint makes it clear that certain conversations are legitimate and helps ensure we have the framework we need to gradually change that default, and as such is better than one that doesn't.

(comments disabled because I don't trust any of you)


Syndicated 2015-08-04 21:59:57 from Matthew Garrett

4 Aug 2015 wingo   » (Master)

developing v8 with guix

a guided descent into hell

It all started off so simply. My primary development machine is a desktop computer that I never turn off. I suspend it when I leave work, and then resume it when I come back. It's always where I left it, as it should be.

I rarely update this machine because it works well enough for me, and anyway my focus isn't the machine, it's the things I do on it. Mostly I work on V8. The setup is so boring that I certainly didn't imagine myself writing an article about it today, but circumstances have forced my hand.

This machine runs Debian. It used to run the testing distribution, but somehow in the past I needed something that wasn't in testing so it runs unstable. I've been using Debian for some 16 years now, though not continuously, so although running unstable can be risky, usually it isn't, and I've unborked it enough times that I felt pretty comfortable.

Perhaps you see where this is going!

I went to install something, I can't even remember what it was now, and the downloads failed because I hadn't updated in a while. So I update, install the thing, and all is well. Except my instant messaging isn't working any more because there are a few moving parts (empathy / telepathy / mission control / gabble / dbus / whatwhat), and the install must have pulled in something that broke one of them. No biggie, this happens. Might as well go ahead and update the rest of the system while I'm at it and get a reboot to make sure I'm not running old software.

Most Debian users know that you probably shouldn't do a dist-upgrade from an old system -- you upgrade and then you dist-upgrade. Or perhaps this isn't even true, it's tribal lore to avoid getting eaten by the wild beasts of bork that roam around the village walls at night. Anyway that's what I did -- an upgrade, let it chunk for a while, then a dist-upgrade, check the list to make sure it didn't decide to remove one of my kidneys to satisfy the priorities of the bearded demon that lives inside apt-get, OK, let it go, all is well, reboot. Swell.

Or not! The computer restarts to a blank screen. Ha ha ha you have been bitten by a bork-beast! Switch to a terminal and try to see what's going on with GDM. It's gone! Ha ha ha! Your organs are being masticated as we speak! How does that feel! Try to figure out which package is causing it, happily with another computer that actually works. Surely this will be fixed in some update coming soon. Oh it's something that's going to take a few weeks!!!! Ninth level, end of the line, all passengers off!

my gods

I know how we got here, I love Debian, but it is just unacceptable and revolting that software development in 2015 is exposed to an upgrade process which (1) can break your system (2) by default and (3) can't be rolled back. The last one is the killer: who would design software this way? If you make a system like this in 2015 I'd say you're committing malpractice.

Well yesterday I resolved that this would be the last time this happens to me. Of course I could just develop in a virtual machine, and save and restore around upgrades, but that's kinda trash. Or I could use btrfs and be able to rewind changes to the file system, but then it would rewind everything, not just the system state.

Fortunately there is a better option in the form of functional package managers, like Nix and Guix. Instead of upgrading your system by mutating /usr, Nix and Guix store all files in a content-addressed store (/nix/store and /gnu/store, respectively). A user accesses the store via a "profile", which is a forest of symlinks into the store.

For example, on my machine with a NixOS system installation, I have:

$ which ls
/run/current-system/sw/bin/ls

$ ls -l /run/current-system/sw/bin/ls
lrwxrwxrwx 1 root nixbld 65 Jan  1  1970
  /run/current-system/sw/bin/ls ->
    /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls

$ ldd /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls
  linux-vdso.so.1 (0x00007fff5d3c4000)
  libacl.so.1 => /nix/store/c2p56z920h4mxw12pjw053sqfhhh0l0y-acl-2.2.52/lib/libacl.so.1 (0x00007fce99d5d000)
  libc.so.6 => /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/libc.so.6 (0x00007fce999c0000)
  libattr.so.1 => /nix/store/jd3gggw5bs3a6sbjnwhjapcqr8g78f5c-attr-2.4.47/lib/libattr.so.1 (0x00007fce997bc000)
  /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/ld-linux-x86-64.so.2 (0x00007fce99f65000)

Content-addressed linkage means that files in the store are never mutated: they will never be overwritten by a software upgrade. Never. Never will I again gaze in horror at the frozen beardcicles of a Debian system in the throes of "oops I just deleted all your programs, like that time a few months ago, wasn't that cool, it's really cold down here, how do you like my frozen facial tresses and also the horns".

At the same time, I don't have to give up upgrades. Paradoxically, immutable software facilitates change and gives me the freedom to upgrade my system without anxiety and lost work.

nix and guix

So, there's Nix and there's Guix. Both are great. I'll get to comparing them, but first a digression on the ways they can be installed.

Both Nix and Guix can be installed either as the operating system of your computer, or just as a user-space package manager. I would actually recommend to people to start with the latter way of working, and move on to the OS if you feel comfortable. The fundamental observation here is that because /nix/store doesn't depend on or conflict with /usr, you can run Nix or Guix as a user on a (e.g.) Debian system with no problems. You can have a forest of symlinks in ~/.guix-profile/bin that links to nifty things you've installed in the store and that's cool, you don't have to tell Debian.

and now look at me

In my case I wanted to also have the system managed by Nix or Guix. GuixSD, the name of the Guix OS install, isn't appropriate for me yet because it doesn't do GNOME. I am used to GNOME and don't care to change, so I installed NixOS instead. It works fine. There have been some irritations -- for example it just took me 30 minutes to figure out how to install dict, with a local wordnet dictionary server -- but mostly it has the packages I need. Again, I don't recommend starting with the OS install though.

GuixSD, the OS installation of Guix, is a bit harder even than NixOS. It has fewer packages, though what it does have tends to be more up-to-date than Nix. There are two big things about GuixSD though. One is that it aims to be fully free, including avoiding non-free firmware. Because they build deterministic build products from source, Nix and Guix can offer completely reproducible builds, which is swell for software reliability. Many reliability people also care a lot about software freedom and although Nix does support software freedom very well, it also includes options to turn on the Flash plugin, for example, and of course includes the Linux kernel with all of the firmware. Well GuixSD eschews non-free firmware, and uses the Linux-Libre kernel. For myself I have a local build on another machine that uses the stock Linux kernel with firmware for my Intel wireless device, and I was really discouraged from even sharing the existence of this hack. I guess it makes sense, it takes a world to make software freedom, but that particular part is not my fight.

The other thing about Guix is that it's really GNU-focused. This is great but also affects the product in some negative ways. They use "dmd" as an init system, for example, which is kinda like systemd but not. One consequence of this is that GuixSD doesn't have an implementation of the org.freedesktop.login1 seat management interface, which these days is implemented by part of systemd, which in turn precludes a bunch of other things GNOME-related. At one point I started working on a fork of systemd that pulled logind out to a separate project, which makes sense to me for distros that want seat management but not systemd, but TBH I have no horse in the systemd race and in fact systemd works well for me. But, a system with elogind would also work well for me. Anyway, the upshot is that unless you care a lot about the distro itself or are willing to adapt to e.g. Xfce or Xmonad or something, NixOS is a more pragmatic choice.

i'm on a horse

I actually like Guix's tools better than Nix's, and not just because they are written in Guile. Guix also has all the tools I need for software development, so I prefer it and ended up installing it as a user-space package manager on this NixOS system. Sounds bizarre but it actually works pretty well.

So, the point of this article is to be a little guide of how to build V8 with Guix. Here we go!

up and running with guix

First, check the manual. It's great and well-written and answers many questions and in fact includes all of this.

Now, I assume you're on an x86-64 Linux system, so we're going to use the awesome binary installation mechanism. Check it out: because everything in /gnu/store is linked directly to each other, all you have to do is to copy a reified /gnu/store onto a working system, then copy a sqlite thing into /var, and you've installed Guix. Sweet, eh? And actually you can take a running system and clone it onto other systems in that way, and Guix even provides a tool to generate such a tarball for you. Neat stuff.

cd /tmp
wget ftp://alpha.gnu.org/gnu/guix/guix-binary-0.8.3.x86_64-linux.tar.xz
tar xf guix-binary-0.8.3.x86_64-linux.tar.xz
mv var/guix /var/ && mv gnu /

This Guix installation has a built-in profile for the root user, so let's go ahead and add a link from ~root to the store.

ln -sf /var/guix/profiles/per-user/root/guix-profile \
       ~root/.guix-profile

Since we're root, we can add the bin/ part of the Guix profile to our environment.

export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"

Perhaps we add that line to our ~root/.bash_profile. Anyway, now we have Guix. Or rather, we almost have Guix -- we need to start the daemon that actually manages the store. Create some users:

groupadd --system guixbuild

for i in `seq -w 1 10`; do
  useradd -g guixbuild -G guixbuild           \
          -d /var/empty -s `which nologin`    \
          -c "Guix build user $i" --system    \
          guixbuilder$i;
done

And now run the daemon:

guix-daemon --build-users-group=guixbuild

If your host distro uses systemd, there's a unit that you can drop into the systemd folder. See the manual.

A few more things. One, usually when you go to install something, you'll want to fetch a pre-built copy of that software if it's available. Although Guix is fundamentally a build-from-source distro, Guix also runs a continuous builder service to make sure that binaries are available, if you trust the machine building the binaries of course. To do that, we tell the daemon to trust hydra.gnu.org:

guix archive --authorize < ~root/.guix-profile/share/guix/hydra.gnu.org.pub

as a user

OK now we have Guix installed. Running Guix commands will install things into the store as needed, and populate the forest of symlinks in the current user's $HOME/.guix-profile. So probably what you want to do is to run, as your user:

/var/guix/profiles/per-user/root/guix-profile/bin/guix \
  package --install guix

This will make Guix available in your own user's profile. From here you can begin to install software; for example, if you run

guix package --install emacs

You'll then have an emacs in ~/.guix-profile/bin/emacs which you can run. Pretty cool stuff.

back on the horse

So what does it mean for software development? Well, when I develop software, I usually want to know exactly what the inputs are, and to not have inputs to the build process that I don't control, and not have my build depend on unrelated software upgrades on my system. That's what Guix provides for me. For example, when I develop V8, I just need a few things. In fact I need these things:

;; Save as ~/src/profiles/v8.scm
(use-package-modules gcc llvm base python version-control less ccache)

(packages->manifest
 (list clang
       coreutils
       diffutils
       findutils
       tar
       patch
       sed
       grep
       binutils
       glibc
       glibc-locales
       which
       gnu-make
       python-2
       git
       less
       libstdc++-4.9
       gcc-4.9
       (list gcc-4.9 "lib")
       ccache))

This set of Guix packages is what it took for me to set up a V8 development environment. I can make a development environment containing only these packages and no others by saving the above file as v8.scm and then sourcing this script:

~/.guix-profile/bin/guix package -p ~/src/profiles/v8 -m ~/src/profiles/v8.scm
eval `~/.guix-profile/bin/guix package -p ~/src/profiles/v8 --search-paths`
export GYP_DEFINES='linux_use_bundled_gold=0 linux_use_gold_flags=0 linux_use_bundled_binutils=0'
export CXX='ccache clang++'
export CC='ccache clang'
export LD_LIBRARY_PATH=$HOME/src/profiles/v8/lib

Let's take this one line at a time. The first line takes my manifest -- the set of packages that collectively form my build environment -- and arranges to populate a symlink forest at ~/src/profiles/v8.

$ ls -l ~/src/profiles/v8/
total 44
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 bin
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 etc
dr-xr-xr-x  4 root guixbuild  4096 Jan  1  1970 include
dr-xr-xr-x  2 root guixbuild 12288 Jan  1  1970 lib
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 libexec
-r--r--r--  2 root guixbuild  4138 Jan  1  1970 manifest
lrwxrwxrwx 12 root guixbuild    59 Jan  1  1970 sbin -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/sbin
dr-xr-xr-x  6 root guixbuild  4096 Jan  1  1970 share
lrwxrwxrwx 12 root guixbuild    58 Jan  1  1970 var -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/var
lrwxrwxrwx 12 root guixbuild    82 Jan  1  1970 x86_64-unknown-linux-gnu -> /gnu/store/wq6q6ahqs9rr0chp97h461yj8w9ympvm-binutils-2.25/x86_64-unknown-linux-gnu

So that's totally scrolling off the right for you, that's the thing about Nix and Guix names. What it means is that I have a tree of software, and most directories contain a union of links from various packages. It so happens that sbin though just has links from glibc, so it links directly into the store. Anyway. The next line in my v8.sh arranges to point my shell into that environment.

$ guix package -p ~/src/profiles/v8 --search-paths
export PATH="/home/wingo/src/profiles/v8/bin:/home/wingo/src/profiles/v8/sbin"
export CPATH="/home/wingo/src/profiles/v8/include"
export LIBRARY_PATH="/home/wingo/src/profiles/v8/lib"
export LOCPATH="/home/wingo/src/profiles/v8/lib/locale"
export PYTHONPATH="/home/wingo/src/profiles/v8/lib/python2.7/site-packages"

Having sourced this into my environment, my shell's ls for example now points into my new profile:

$ which ls
/home/wingo/src/profiles/v8/bin/ls

Neat. Next we have some V8 defines. On x86_64 on Linux, v8 wants to use some binutils things that it bundles itself, but oddly enough, for months under Debian I had been seeing spurious intermittent segfaults while linking with their embedded gold linker. I don't want to use their idea of what a linker is anyway, so I make the defines to make v8's build tool not do that. (Incidentally, figuring out what those defines were took spelunking through makefiles, to gyp files, to the source of gyp itself, to the source of the standard shlex Python module to figure out what delimiters shlex.split actually splits on... yaaarrggh!)

Then some defines to use ccache, then a strange thing: what's up with that LD_LIBRARY_PATH?

Well. I'm not sure. However, the normal thing for dynamic linking under Linux is that you end up with binaries that are just linked against e.g. libc.so.6, wherever the system will find libc.so.6. That's not what we want in Guix -- we want to link against a specific version of every dependency, not just any old version. Guix's builders normally do this when building software for Guix, but somehow in this case I haven't managed to make that happen, so the binaries that are built as part of the build process can end up not specifying the path of the libraries they are linked to. I don't know whether this is an issue with v8's build system, that it doesn't want to work well with Nix / Guix, or if it's something else. Anyway I hack around it by assuming that whatever's in my artisanally assembled symlink forest ("profile") is the right thing, so I set it as the search path for the dynamic linker. Suggestions welcome here.

And from here... well it just works! I've gained the ability to precisely specify a reproducible build environment for the software I am working on, which is entirely separated from the set of software that I have installed on my system, which I can reproduce precisely with a script, and yet which is still part of my system -- I'm not isolated from it by container or VM boundaries (though I can be; see NixOps for more in that direction).

OK I lied a little bit. I had to apply this patch to V8:

$ git diff
diff --git a/build/standalone.gypi b/build/standalone.gypi
index 2bdd39d..941b9d7 100644
--- a/build/standalone.gypi
+++ b/build/standalone.gypi
@@ -98,7 +98,7 @@
         ['OS=="win"', {
           'gomadir': 'c:\\goma\\goma-win',
         }, {
-          'gomadir': '<!(/bin/echo -n ${HOME}/goma)',
+          'gomadir': '<!(/usr/bin/env echo -n ${HOME}/goma)',
         }],
         ['host_arch!="ppc" and host_arch!="ppc64" and host_arch!="ppc64le"', {
           'host_clang%': '1',

See? Because my system is NixOS, there is no /bin/echo. It does helpfully install a /usr/bin/env though, which other shell invocations in this build script use, so I use that instead. I mention this as an example of what works and what workarounds there are.

dpkg --purgatory

So now I have NixOS as my OS, and I mostly use Guix for software development. This is a new setup and we'll see how it works in practice.

Installing NixOS on top of Debian was a bit irritating. I ended up making a bootable USB installation image, then installing over to my Debian partition, happy in the idea that it wouldn't conflict with my system. But in that I forgot about /etc and /var and all that. So I copied /etc to /etc-debian, just as a backup, and NixOS appeared to install fine. However it wouldn't boot, and that's because some systemd state from my old /etc which was still in place conflicted with... something? In the end I redid the install, moving my old /usr, /etc and such directories to backup names and letting NixOS have control. That worked fine.

I have GuixSD on a laptop but I really don't recommend it right now -- not unless you have time and are willing to hack on it. But that's OK, install NixOS and you'll be happy on the system side, and if you want Guix you can install it as a user.

Comments and corrections welcome, and happy hacking!

Syndicated 2015-08-04 16:23:19 from wingolog

4 Aug 2015 superuser   » (Journeyer)

Performance

  1. Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus.

  2. Given what I know of this person’s performance, I would always want him or her on my team.

  3. This person is at risk for low performance.

  4. This person is ready for promotion today.

From the Washington Post

Syndicated 2015-08-04 13:39:08 (Updated 2015-08-04 13:39:09) from Jason Lotito

4 Aug 2015 bagder   » (Master)

curl me if you can

I got this neat t-shirt in the mail yesterday. 3scale is running a sort of marketing campaign right now and giving away this shirt to those who participate, and they were kind enough to send one to me!

Curl me if you can

Syndicated 2015-08-04 08:57:53 from daniel.haxx.se

4 Aug 2015 amits   » (Journeyer)

Going to KVM Forum 2015


In its 8th edition, the KVM Forum is moving back to North America this year, co-located with LinuxCon NA in Seattle. It starts with a KVM + Xen hackathon and the (invite-only) QEMU Summit on 18 August, followed by talks and BOFs on the 19th, 20th, and 21st. The schedule is here. To add all talks to your calendar, use this ics. The KVM Forum wiki page will have information on live streaming of talks, videos, slides, etc.

If you’re going to be around, please come up and say hi!

Syndicated 2015-08-04 07:00:58 from Think. Debate. Innovate.

4 Aug 2015 mikal   » (Journeyer)

Searching for open bugs in a launchpad project

The launchpad API docs are OMG terrible, and it took me way too long to work out how to do this, so I thought I'd document it for later. Here's how you list all the open bugs in a launchpad project using the API:
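
A sketch of the kind of query the post describes, using launchpadlib (the project name and the exact status list here are my own illustrative assumptions):

from launchpadlib.launchpad import Launchpad

# Anonymous, read-only login against the production Launchpad instance.
lp = Launchpad.login_anonymously("bug-lister", "production", version="devel")

project = lp.projects["nova"]   # placeholder project name
open_statuses = ["New", "Incomplete", "Confirmed", "Triaged", "In Progress"]

for task in project.searchTasks(status=open_statuses):
    print(task.bug.id, task.status, task.bug.title)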

2 Aug 2015 benad   » (Apprentice)

10s Everywhere

So recently I installed Windows 10 on my MacBook Pro alongside Mac OS X Yosemite 10.10. If you're keeping count, that's four 10s.

Upgrading Windows 8.1 to 10 was a strange experience. First, the Windows 10 notification icon never showed up. Looking at the Event Logs of GWX.exe ("Get Windows 10", I guess), it kept crashing with "data is invalid" for the past few months. Yet, the same logs showed clearly that my license was valid and ready to be upgraded to 10. Luckily, Microsoft now offers the Windows 10 ISO download, and the software used to download and "burn" a USB key also allowed for an in-place upgrade with no need for a USB key or DVD.

Yet, after the upgrade, I noticed that all network connections were disabled. Yes, the Boot Camp drivers were installed correctly, and Windows insisted the drivers were correctly working, but it's as if the entire TCP stack was removed. I tried everything for a few hours, getting lost in regedit, so I gave up and used the option to revert back to Windows 8.1. Once back, it was now worse, with even all keyboards disabled.

Before reverting back to 8.1, I had attempted to remove all third-party software that could have an impact on the network, including an old copy of the Cisco VPN client and the Avast anti-virus. The Cisco VPN client refused to be uninstalled for some reason. Back on 8.1, I could easily remove the VPN client (using the on-screen keyboard), but it's as if 8.1 kept a trace of the Avast install even though Avast was no longer there. Luckily, I found the download link to the full offline Avast 2015 installer in the user forums. Once that was done, both the keyboard and the network were enabled.

Having learned that VPN and anti-virus software can break things in Windows 10, I uninstalled all of it, and then upgraded to 10 again. I had to reinstall the Boot Camp drivers for my model of MacBook Pro, and this time everything worked fine. I could easily restore Avast, but the old Cisco VPN driver clearly couldn't work anymore. This isn't a big issue, since I keep a Windows 7 virtual machine for that.

What about using Boot Camp in a virtual machine? Well, there are two workarounds I had to do to make it work with Parallels Desktop. First, Article ID 122808 describes how to patch the file C:\Windows\inf\volume.inf so that Parallels can detect the Windows 10 partition. It just so happens that I already had my copy of Paragon NTFS for Mac, so changing the file when booted in the Mac partition was easy. Then, from Article ID 116582, since I'm using a 64-bit EFI installation of Windows, I had to run their strange bootcamp-enable-efi.sh script. It needs administrator privileges, so I temporarily enabled that on my user account to run it. After all of this, Windows got a bit confused about the product activation, but after a few reboots between native and virtual machine modes, it somehow picked up the activation.

So, what about Windows 10 itself? For me, it worked fine. It isn't a huge upgrade compared to Windows 8.1, but it's more usable in a desktop environment. For Windows 7 users, I would definitely recommend it, maybe after a few months once they fix the remaining bugs. As usual, backing up your files is highly recommended (even if you don't upgrade).

Syndicated 2015-08-02 19:57:31 from Benad's Blog

2 Aug 2015 mikal   » (Journeyer)

The End of All Things




ISBN: 1447290496
LibraryThing
I don't read as much as I should these days, but one author I always make time for is John Scalzi. This is the next book in the Old Man's War universe, and it continues from where The Human Division ended on a cliffhanger. So, let's get that out of the way -- ending a book on a cliffhanger is a dick move and John is a bad bad man. Then again I really enjoyed The Human Division, so I will probably forgive him.

I don't think this book is as good as The Human Division, but it's a solid book. I enjoyed reading it and it wasn't a chore like some books this far into a universe can be (I'm looking at you, Asimov sharecropped books). The conclusion to the story arc is sensible, and not something I would have predicted, so overall I'm going to put this book on my mental list of the very many non-terrible Scalzi books.

Tags for this post: book john_scalzi combat aliens engineered_human old_mans_war age colonization human_backup cranial_computer personal_ai
Related posts: The Last Colony ; The Human Division; Old Man's War ; The Ghost Brigades ; Old Man's War (2); The Ghost Brigades (2)


Comment

Syndicated 2015-08-02 00:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

2 Aug 2015 mako   » (Master)

Understanding Hydroplane Races for the New Seattleite

It’s Seafair weekend in Seattle. As always, the centerpiece is the H1 Unlimited hydroplane races on Lake Washington.

In my social circle, I'm nearly the only person I know who grew up in the area. None of the newcomers I know had heard of hydroplane racing before moving to Seattle. Even after I explain it to them — i.e., boats with 3,000+ horsepower airplane engines that fly just above the water at more than 320kph (200mph), leaving 10m+ (30ft) wakes behind them! — most people seem more puzzled than interested.

I grew up near the shore of Lake Washington and could see (and hear!) the races from my house. I don’t follow hydroplane racing throughout the year but I do enjoy watching the races at Seafair. Here’s my attempt to explain and make the case for the races to new Seattleites.

Before Microsoft, Amazon, Starbucks, etc., there were basically three major Seattle industries: (1) logging and lumber based industries like paper manufacturing; (2) maritime industries like fishing, shipbuilding, shipping, and the navy; (3) aerospace (i.e., Boeing). Vintage hydroplane racing represented the Seattle trifecta: Wooden boats with airplane engines!

The wooden U-60 Miss Thriftway circa 1955 (Thriftway is a Washington-based supermarket that nobody outside the state has heard of), pictured below, is old-Seattle awesomeness. Modern hydroplanes are now made of fiberglass, but two out of three isn't bad.

Although the boats are racing this year in events in Indiana, San Diego, and Detroit in addition to the two races in Washington, hydroplane racing retains deep ties to the region. Most of the drivers are from the Seattle area. Many or most of the teams and boats are based in Washington throughout the year. Many of the sponsors are unknown outside of the state. This parochialism itself cultivates a certain kind of appeal among locals.

In addition to the old-Seattle/new-Seattle cultural divide, there's a class divide that I think is also worth challenging. Although the demographics of hydro-racing fans are surprisingly broad, it can seem like Formula One or NASCAR on the water. It seems safe to suggest that many of the demographic groups moving to Seattle for jobs in the tech industry are not big into motorsports. Although I'm no follower of motorsports in general, I've written before about cultivated disinterest in professional sports, and it remains something that I believe is worth taking on.

It’s not all great. In particular, the close relationship between Seafair and the military makes me very uneasy. That said, even with the military-heavy airshow, I enjoy the way that Seafair weekend provides a little pocket of old-Seattle that remains effectively unchanged from when I was a kid. I’d encourage others to enjoy it as well!

Syndicated 2015-08-02 02:45:05 (Updated 2015-08-02 03:22:12) from copyrighteous

1 Aug 2015 mcr   » (Journeyer)

This is try two

Had to tweak some things in muse mode to generate things into the right directory, still think it is probably broken.

Syndicated 2015-08-01 19:43:00 from Michael's musings

1 Aug 2015 sye   » (Journeyer)

Sunday Aug. 23, 2014\5 brought to my attention this posting:

Senior Software Developer Job

Apply Now
Location: Princeton, NJ, USA
Company: Bloomberg
Job Requisition Number:45898

The Role:

The People engineering team at Bloomberg is responsible for building a social network of the world's most influential finance professionals. We source terabytes of structured, semi-structured data, run machine learning, and crowdsourcing algorithms among others to provide timely & relevant content.

We are reimagining the way we currently architect our systems and are looking for an experienced software engineer to join this effort. Our aim is to minimize the time required to get existing data as well as new types of data (in any form or structure), into Bloomberg from the wider world. This acquisition will be low-latency with a goal to make everything immediately and easily discoverable to all of our users.

Our new platform is a mix of dynamic languages, OOP and functional programming. We are using a number of large distributed computing systems and different styles of data persistence.
We expect you to have the correct balance between innovation, experimentation and getting things done. You are proactive, creative, open-minded and resourceful. We work on small teams and as a company pride ourselves on being entrepreneurial, agile and responsive. We adapt to changes and the problems we face are constantly evolving.

The work environment is fast-paced. Expect to make immediate contributions to our production code.

Qualifications:

- Knowledge of Python & JavaScript (3 years experience).
- Experience creating compelling GUIs for external customers.
- You have expertise in data-stores & can code in a highly concurrent environment.
- You are comfortable with distributed systems, RESTful architectures & have built scalable, low-latency systems that provide high availability.
- It's a hands-on development position; you are comfortable at design/architecture as well as hands-on implementation of a large software system.
- BS/MS in Computer Science/Computer Engineering, Science, Math or equivalent experience.
- You are comfortable with Linux, Unix environment & tools for debugging, profiling and scripting.
- Knowledge of search engines like Apache Lucene is a plus.
- Knowledge of algorithms, data structures and graph algorithms is expected.

The Company:

Bloomberg, the global business and financial information and news leader, gives influential decision makers a critical edge by connecting them to a dynamic network of information, people and ideas. The company's strength - delivering data, news and analytics through innovative technology, quickly and accurately - is at the core of the Bloomberg Professional service, which provides real time financial information to more than 310,000 subscribers globally. Bloomberg's enterprise solutions build on the company's core strength, leveraging technology to allow customers to access, integrate, distribute and manage data and information across organizations more efficiently and effectively. Through Bloomberg Law, Bloomberg Government, Bloomberg New Energy Finance and Bloomberg BNA, the company provides data, news and analytics to decision makers in industries beyond finance. And Bloomberg News, delivered through the Bloomberg Professional service, television, radio, mobile, the Internet and two magazines, Bloomberg Businessweek and Bloomberg Markets, covers the world with more than 2,300 news and multimedia professionals at 146 bureaus in 72 countries. Headquartered in New York, Bloomberg employs more than 15,000 people in 192 locations around the world.

Nearest Major Market: New Jersey
Job Segment: Developer, Engineer, Computer Science, Unix, Linux, Technology, Engineering


tar cvzf - Pictures/ |ssh sye@gogoclub "dd of=From1015PM.Picture.tar.gz"


783469+1 records in
783469+1 records out
401136581 bytes transferred in 148.714045 secs (2697369 bytes/sec)


1 Aug 2015 marnanel   » (Journeyer)

Privation of good

I've always heard that the idea of "privation of good" was something Augustine came up with. (Summary: evil is not a thing in itself, but only the absence of good-- like how darkness is the absence of light.) But 300 years earlier, Epictetus was saying:

"As a mark is not set up for the sake of missing the aim, so neither does the nature of evil exist in the world." (Enchiridion, 27)

Isn't that the same idea?

This entry was originally posted at http://marnanel.dreamwidth.org/338987.html. Please comment there using OpenID.

Syndicated 2015-08-01 09:47:02 from Monument

31 Jul 2015 dyork   » (Master)

Firewalls Now Looking At Intercepting SSH Traffic Via A MITM Attack


