Recent blog entries

25 Mar 2017 mones   » (Journeyer)

Nice surprise

Absorbed as I was in the task of finishing the university project (see previous post), I did not even realize that a little patch I had made to fix an untranslated string in dblatex was finally accepted some months ago!

Since the accepted parameter name is different from my proposal (and admittedly a better name), I will have to make some changes to keep the project generating a proper document, something I wasn't expecting to do after having presented it :-).

Syndicated 2017-03-25 13:08:32 from Ricardo Mones

25 Mar 2017 mentifex   » (Master)

Ghost Perl AI uses the AudListen() mind-module to detect keyboard input.

Yesterday we may have finally learned how to let the Ghost Perl AI think indefinitely without stopping to wait for a human user to press "Enter" after typing a message to the AI Mind. We want the Perlmind only to pause periodically in case the human attendant wishes to communicate with the AI. Even if a human types a message and fails to press the Enter-key, we want the Perl AI to register a CR (carriage-return) by default and to follow chains of thought internally, with or without outside influence from a human user.

Accordingly today we create the AudListen() module in between the auditory memory modules and the AudInput() module. We move the new input code from AudInput() into AudListen(), but the code does not accept any input, so we remove the current code and store it in an archival test-file. Then we insert some obsolete but working code into AudListen(). We start getting primitive input like we did yesterday in the ghost181.pl program. Then we start moving in required functionality from the MindForth AI, such as the ability to press the "Escape" key to stop the program.

Eventually we obtain the proper recognition and storage of input words in auditory memory, but the ghost182.pl AI is not switching over to thinking. Instead, it is trying to process more input. Probably no escape is being made from the AudInput() loop that calls the AudListen() module. We implement an escape from the AudInput() module.

The ghost182.pl program is now able to take in a sentence of input and generate a sentence of output, so we will upload it to the Web. We still need to port from MindForth the code that only pauses to accept human input and then goes back to the thinking of the AI.
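For the curious, here is a minimal sketch of the kind of non-blocking keyboard polling described above, written in Perl with the Term::ReadKey module. It is not the actual AudListen() code; the input buffer and the "keep thinking" placeholder are invented for illustration.

use strict;
use warnings;
use Term::ReadKey;

ReadMode('cbreak');                  # deliver single keypresses without waiting for Enter
my $buffer = '';                     # hypothetical input buffer

while (1) {
    my $key = ReadKey(-1);           # -1 means non-blocking: undef if no key is pending
    if (defined $key) {
        my $code = ord($key);
        last if $code == 27;         # Escape key stops the program
        if ($code == 13 || $code == 10) {   # Enter (or a default CR) ends the message
            print "\nheard: $buffer\n";
            $buffer = '';
        } else {
            $buffer .= $key;
            print $key;              # echo the keystroke
        }
    } else {
        # No input pending: here the AI would be free to keep thinking.
        select(undef, undef, undef, 0.1);   # sleep for a tenth of a second
    }
}
ReadMode('restore');                 # put the terminal back to normal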

25 Mar 2017 LaForge   » (Master)

Upcoming v3 of Open Hardware miniPCIe WWAN modem USB breakout board

Back in October 2016 I designed a small open hardware breakout board for WWAN modems in mPCIe form-factor. I was thinking some other people might be interested in this, and indeed, the first manufacturing batch has already sold out by now.

Instead of ordering more of the old (v2) design, I decided to do some improvements in the next version:

  • add mounting holes so the PCB can be mounted via M3 screws
  • add U.FL and SMA sockets, so the modems are connected via a short U.FL to U.FL cable, and external antennas or other RF components can be attached via SMA. This provides strain relief for the external antenna or cabling and avoids tearing off any of the current loose U.FL to SMA pigtails
  • flip the SIM slot to the top side of the PCB, so it can be accessed even after mounting the board to some base plate or enclosure via the mounting holes
  • more meaningful labeling of the silk screen, including the purpose of the jumpers and the input voltage.

A software rendering of the resulting v3 PCB design files that I just sent for production looks like this:

/images/mpcie-breakout-v3-pcb-rendering.png

Like before, the design of the board (including schematics and PCB layout design files) is available as open hardware under CC-BY-SA license terms. For more information see http://osmocom.org/projects/mpcie-breakout/wiki

It is expected to take about three weeks until I see the first assembled boards.

Syndicated 2017-03-23 23:00:00 from LaForge's home page

22 Mar 2017 marnanel   » (Journeyer)

Tower of London

[ghosts, death; parody of "Streets of London" by Ralph McTell]

Have you seen the old girl
Who walks the Tower of London
Face full of grace with a queenly charm?
She's no breath for talking,
she just keeps right on walking
Carrying her head
Right underneath her arm.

So how can you tell me you're ghostly
And say your life has run out of time?
Let me take you by the hand
And lead you round the Tower of London
I'll show you something
That'll make you change your mind.

And in the topmost turret
You'll meet Sir Walter Raleigh
Cursing at his fall like an angry tar
Looking at the world
With a chip on his shoulder,
Each and every midnight
He smokes a mild cigar.

So how can you tell me you're ghostly
And say your life has run out of time?
Let me take you by the hand
And lead you round the Tower of London
I'll show you something
That'll make you change your mind.

And have you seen the playroom
Of a pair of ghostly princes?
Such endless games in a place like theirs!
Careful where you sit if you
Accept their invitation:
They don't have ghostly cushions
On all their ghostly chairs

So how can you tell me you're ghostly
And say your life has run out of time?
Let me take you by the hand
And lead you round the Tower of London
I'll show you something
That'll make you change your mind.

This entry was originally posted at http://marnanel.dreamwidth.org/386202.html. Please comment there using OpenID.

Syndicated 2017-03-22 13:01:57 from Monument

21 Mar 2017 mentifex   » (Master)

Machine Translation by Artificial Intelligence

As an independent scholar in polyglot artificial intelligence, I have just today on March 21, 2017, stumbled upon a possible algorithm for implementing machine translation (MT) in my bilingual Perlmind and MindForth programs. My Ghost Perl AI thinks heretofore in either English or Russian, but not in both languages interchangeably. Likewise my Forth AI MindForth thinks in English, while its Teutonic version Wotan thinks in German.

Today, like Archimedes crying "Eureka" in the bathtub, I realized while showering (but not displacing any bath-water) that I could add an associative tag mtx to the flag-panel of each conceptual memory engram to link and cross-identify any concept in one language with the same concept in another language. The mtx variable stands for "machine-translation xfer (transfer)". The AI software will use the spreading-activation SpreadAct module to transfer activation from a concept in English to the same concept in Russian or German.

Assuming that an AI Mind can think fluently in two languages, with a large vocabulary in both languages, the nub of machine translation will be the simultaneous activation of semantically the same set of concepts in both languages. Thus the consideration of an idea expressed in English will transfer the conceptual activation to a target language such as Russian. The generation modules will then generate a translation of the English idea into a Russian idea.

Inflectional endings will not pass from the source language directly to the target language, because the mtx tag identifies only the basic psi concept in both languages. The generation modules of the target language will assign the proper inflections as required by the linguistic parameters governing each sentence being translated.
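A minimal sketch of how such an mtx cross-reference might look in Perl follows. The concept numbers, the transliterated Russian words and the spread_act_mtx() helper are all invented for illustration; this is not the actual Ghost Perl AI code.

use strict;
use warnings;

# Each conceptual engram carries a flag-panel; mtx points at the same
# concept in the other language.
my %concept = (
    101 => { word => 'WATER', lang => 'en', act => 0, mtx => 201 },
    201 => { word => 'VODA',  lang => 'ru', act => 0, mtx => 101 },   # transliterated
    102 => { word => 'MONEY', lang => 'en', act => 0, mtx => 202 },
    202 => { word => 'DENGI', lang => 'ru', act => 0, mtx => 102 },   # transliterated
);

# SpreadAct-style transfer: activating a concept in the source language
# also activates its counterpart across the mtx link.
sub spread_act_mtx {
    my ($psi, $amount) = @_;
    $concept{$psi}{act} += $amount;
    my $twin = $concept{$psi}{mtx};
    $concept{$twin}{act} += $amount if defined $twin;
}

spread_act_mtx(101, 8);    # thinking of WATER in English also primes the Russian concept
for my $psi (sort { $a <=> $b } keys %concept) {
    printf "%-5s (%s) act=%d\n",
        $concept{$psi}{word}, $concept{$psi}{lang}, $concept{$psi}{act};
}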

21 Mar 2017 LaForge   » (Master)

Osmocom - personal thoughts

As I just wrote in my post about TelcoSecDay, I sometimes worry about the choices I made with Osmocom, particularly when I see all the great stuff people are doing in fields that I was previously working in, such as applied IT security as well as Linux kernel development.

History

When people like Dieter, Holger and I started to play with what later became OpenBSC, it was just for fun. A challenge to master. A closed world to break open and to attack with the tools, the mindset and the values that we brought with us.

Later, Holger and I started to do freelance development for commercial users of Osmocom (initially basically only OpenBSC, but then OsmoSGSN, OsmoBSC, OsmoBTS, OsmoPCU and all the other bits on the infrastructure side). This led to the creation of sysmocom in 2011, and ever since we have been trying to use revenue from hardware sales as well as development contracts to subsidize and grow the Osmocom projects. We're investing most of our earnings directly into more staff that in turn works on Osmocom related projects.

NOTE

It's important to draw the distinction between the Osmocom cellular infrastructure projects, which are mostly driven by commercial users and sysmocom these days, and the many other pure just-for-fun community projects under the Osmocom umbrella, like OsmocomTETRA, OsmocomGMR, rtl-sdr, etc. I'm focussing only on the cellular infrastructure projects, as they have been at the center of my life during the past 6+ years.

In order to do this, I basically gave up my previous career[s] in IT security and Linux kernel development (as well as put things like gpl-violations.org on hold). This is a big price to pay for creating more FOSS in the mobile communications world, and sometimes I'm a bit melancholic about the "old days" before.

Financial wealth is clearly not my primary motivation, but let me be honest: I could have easily earned a shitload of money continuing to do freelance Linux kernel development, IT security or related consulting. There's a lot of demand for related skills, particularly with some experience and reputation attached. But I decided against it, and worked several years without a salary (or almost none) on Osmocom related stuff [as did Holger].

But then, even with all the sacrifices made and the revenue we can direct from sysmocom into Osmocom development, the amount of funding and resources is always only a fraction of what the complexity of cellular infrastructure would normally require for a proper implementation. So it's a constant resource shortage, combined with lots of unpaid work on those areas that are not on the immediate short-term feature list of customers and that nobody else in the community feels like working on. And that can be a bit frustrating at times.

Is it worth it?

So after 7 years of OpenBSC, OsmocomBB and all the related projects, I'm sometimes asking myself whether it has been worth the effort, and whether it was the right choice.

It was right in the sense that cellular technology is still an area that's obscure and unknown to many, and one that has very little FOSS (though improving!). At the same time, cellular networks are becoming more and more essential to many users and applications. So on an abstract level, I think that every step in the direction of FOSS for cellular is as urgently needed as before, and we have had quite some success in implementing many different protocols and network elements. Unfortunately, in most cases incompletely, as the amount of funding and/or resources was always extremely limited.

Satisfaction/Happiness

On the other hand, when it comes to metrics such as personal satisfaction or professional pride, I'm not very happy or satisfied. The community remains small, the commercial interest remains limited, and as opposed to the Linux world, most players have a complete lack of understanding that FOSS is not a one-way road, but that it is important for all stakeholders to contribute to the development in terms of development resources.

Project success?

I think a collaborative development project (which to me is what FOSS is about) is only truly successful if its success does not depend on a single individual, a single small group of individuals or a single entity (company). And no matter how much I would like the above to be the case, it is not true for the Osmocom cellular infrastructure projects. Take away Holger and me, or take away sysmocom, and I think it would be pretty much dead. And I don't think I'm exaggerating here. This makes me sad, and after all these years, and after knowing quite a number of commercial players using our software, I would have hoped that the project would rest on many more shoulders by now.

This is not to belittle the efforts of all the people contributing to it, whether it's the team of developers at sysmocom, those in the community that still work on it 'just for fun', or those commercial users that contract sysmocom for some of the work we do. Also, there are known and unknown donors/funders, like the NLnet foundation for some parts of the work. Thanks to all of you; clearly we wouldn't be where we are now without all of that!

But I feel it's not sufficient for the overall scope, and it's not [yet] sustainable at this point. We need more support from all sides, particularly those not currently contributing. From vendors of BTSs and related equipment that use Osmocom components. From operators that use it. From individuals. From academia.

Yes, we're making progress. I'm happy about new developments like the Iu and Iuh support, the OsmoHLR/VLR split and 2G/3G authentication that Neels just blogged about. And there's progress on the SIMtrace2 firmware with card emulation and MITM, just as well as there's progress on libosmo-sigtran (with a more complete SUA, M3UA and connection-oriented SCCP stack), etc.

But there are too few people working on this, and those people are mostly coming from one particular corner, while most of the [commercial] users do not contribute the way you would expect them to contribute in collaborative FOSS projects. You can argue that most people in the Linux world also don't contribute, but then the large commercial beneficiaries (like the chipset and hardware makers) mostly do, as do the large commercial users.

All in all, I have the feeling that Osmocom is as important as it ever was, but it's not grown up yet to really walk on its own feet. It may be able to crawl, though ;)

So for now, don't panic. I'm not suffering from burn-out or a mid-life crisis, and I don't plan on any big changes of where I put my energy: it will continue to be Osmocom. But I also think we have to have a more open discussion with everyone on how to move beyond the current situation. There's no point in staying quiet about it, or in claiming that everything is fine the way it is. We need more commitment. Not from the people already actively involved, but from those who are not [yet].

If that doesn't happen in the next let's say 1-2 years, I think it's fair that I might seriously re-consider in which field and in which way I'd like to dedicate my [I would think considerable] productive energy and focus.

Syndicated 2017-03-21 18:00:00 from LaForge's home page

21 Mar 2017 mjg59   » (Master)

Announcing the Shim review process

Shim has been hugely successful, to the point of being used by the majority of significant Linux distributions and many other third party products (even, apparently, Solaris). The aim was to ensure that it would remain possible to install free operating systems on UEFI Secure Boot platforms while still allowing machine owners to replace their bootloaders and kernels, and it's achieved this goal.

However, a legitimate criticism has been that there's very little transparency in Microsoft's signing process. Some people have waited for significant periods of time before receiving a response. A large part of this is simply that demand has been greater than expected, and Microsoft aren't in the best position to review code that they didn't write in the first place.

To that end, we're adopting a new model. A mailing list has been created at shim-review@lists.freedesktop.org, and members of this list will review submissions and provide a recommendation to Microsoft on whether these should be signed or not. The current set of expectations around binaries to be signed is documented here, and the current process here. It is expected that this will evolve slightly as we get used to the process, and we'll provide a more formal set of documentation once things have settled down.

This is a new initiative and one that will probably take a little while to get working smoothly, but we hope it'll make it much easier to get signed releases of Shim out without compromising security in the process.

Syndicated 2017-03-21 20:29:30 from Matthew Garrett

20 Mar 2017 mjg59   » (Master)

Buying a Utah teapot

The Utah teapot was one of the early 3D reference objects. It's canonically a Melitta but hasn't been part of their range in a long time, so I'd been watching Ebay in the hope of one turning up. Until last week, when I discovered that a company called Friesland had apparently bought a chunk of Melitta's range some years ago and sell the original teapot[1]. I've just ordered one, and am utterly unreasonably excited about this.

[1] They have them in 0.35, 0.85 and 1.4 litre sizes. I believe (based on the measurements here) that the 1.4 litre one matches the Utah teapot.

Syndicated 2017-03-20 20:45:42 from Matthew Garrett

20 Mar 2017 badvogato   » (Master)

NJ Law Journal reports

20 Mar 2017 fozbaca   » (Apprentice)

Anthony Bourdain Does Not Want to Owe Anybody Even a Single Dollar | Wealthsimple

Before he was the guy from Parts Unknown, he was 44, never had a savings account, hadn't filed taxes in 10 years, and was AWOL on his AmEx bill. That turned out…

Syndicated 2017-03-20 17:25:08 from fozbaca.org

19 Mar 2017 fozbaca   » (Apprentice)

I saw you posted something about Slam Bradley, who...

Youknow, I’ve heard a few comics histories call the pre-superhero era of comicsthe “Platinum Age.” Here’s something interesting: for a decade after Zero Hour,…

Syndicated 2017-03-19 17:13:43 from fozbaca.org

19 Mar 2017 benad   » (Apprentice)

My Ubuntu Tips

Now that I’ve used an Ubuntu Linux laptop for a couple of months, I’ve realized that in day-to-day use I don’t have to use many, if any, arcane commands on the command-line. And while I can edit the source code to fix bugs myself (and I did for the duplicity backup tool) or install software by compiling its source code, I rarely needed to do so. In fact, most of my customizations and installed software were straightforward.

XPS 13 Specific Tips

For some unknown reason (licensing with Microsoft?), the “super” key (the one that still has the Windows logo) cannot be used to show the Ubuntu launcher by tapping the key. As mentioned on the Ubuntu forums, to restore that functionality, simply remove the dell-super-key package.

When I use an external monitor, I typically use the internal screen at half its resolution with a global UI scaling factor of 1, and without the external monitor I set it back to its native resolution with a UI scaling factor of 2. To do so on the command-line, without having to log out, I use these commands. The external display is DP1 and the internal one is eDP1. Note that the touch screen virtual mouse might be a bit confused until you log out. Also, you may have to restart some programs for them to pick up the scaling changes.

Half-res with scale of 1:

  xrandr --output eDP1 --mode 1600x900 && \
gsettings set org.gnome.desktop.interface scaling-factor 1 && \
gsettings set com.ubuntu.user-interface scale-factor "{'DP1': 8, 'eDP1':8}"

Full-res with scale of 2:

  xrandr --output eDP1 --mode 3200x1800 && \
gsettings set org.gnome.desktop.interface scaling-factor 2 && \
gsettings set com.ubuntu.user-interface scale-factor "{'DP1': 8, 'eDP1':16}"

I’m using the Plugable USB-C Hub, and for some reason the drivers tend to get confused with the hub’s network and audio ports if I just unplug the USB-C cable. To do a “clean eject” of the hub’s two internal USB hubs, I run the following. You should use lsusb -t before and after connecting the hub to find the bus IDs in your setup.

  sudo bash -c 'for i in 3 4 ; do echo 1 > /sys/bus/usb/devices/$i-1/remove ; done'

GUI Themes

While there are countless GUI themes on Linux, the one I currently use is Adapta with the Noto fonts. It mimics the latest Android look, and while I’m not a huge fan of the latest “Material” look, at least it avoids the ugly “orange everywhere” Ubuntu look.

To set it up follow this guide. After that, install the packages unity-tweak-tool, fonts-noto, fonts-noto-mono, fonts-noto-hinted and fonts-noto-cjk, and finally use the Unity Tweak Tool to select Adapta as the theme and the various Noto fonts for the system fonts.

In Ubuntu parlance, the menu bar items on the top-right are called “indicators”. Any program can add indicators when launched, but “pure” indicator programs only need to be launched once and from then on they will automatically start each time you log in.

The indicator-multiload displays graphs of CPU load, I/O, network and so on.

The indicator-cpuprefs displays the current CPU frequency. Also, on some CPUs, you can select the specific frequency you want, or select the CPU “governor”. For example, on my Kaby Lake processor I can set it to “ondemand”, “conservative”, “performance” or “powersave”.

The my-weather-indicator simply displays the current weather, and lets you view the weather forecast.

The psensor program tracks your system’s various temperature sensors and when running adds a useful indicator menu with the current temperature.

Great Apps

My current favourite editor, Visual Studio Code from Microsoft, is based on Electron (like Atom, but simpler and more lightweight), and is available as a Debian and Ubuntu package.

A beta version of Skype for Linux, again from Microsoft (who knew?), is now available with both microphone and camera support.

NixNote is an open-source client for Evernote. It is generally more convenient to use than the Evernote web client, especially for offline use.

There is a desktop application for the DevDocs web site. It is a convenient way to search and read documentation for various programming languages and tools, even in offline mode. I found it to be a compelling and often superior replacement to Dash.

Ubuntu has a built-in screenshot tool, but for recording the screen as a video file or a GIF there is Peek. It is a breeze to set up and use.

If you use Twitter, Corebird is a nice twitter client, and can be installed on Ubuntu by following the instructions here.

Other Tips

To open files or URLs from the command-line, you can use xdg-open. To view a man page in a GUI window, you can run xdg-open man:crontab.5.

The built-in VNC server is quite limited and doesn’t support the better image compression techniques. Instead, you can use x11vnc by following these instructions for Ubuntu.

Syndicated 2017-03-19 00:24:21 from Benad's Blog

17 Mar 2017 aicra   » (Journeyer)

I went to SCaLE and here's what I learned:


1. I was right about these people pushing projects - the devs are doing it for experience, community, whatever non commercial rewards are out there... being used by these evil empirical asses using us as their freaking resource and walking away with a large payday. Fuckers!

2. We are screwed as the "open source" lawyer I met - who actually had a degree (supposedly) in comp sci - didn't know the difference between open source and free software. A couple of us tried to explain it to him. I think he got it but he didn't understand - he didn't seem to respect it since he called me "Judgement proof" -- you know... since I've dedicated my life to the community and have 0 financial stature. That was nice. He's making hand over fist money so, I guess that makes him better than me.

No really, I mean, I can't just - you know lie...


For example, I would never be able to say on my linkedin that I interpret open and free software licenses and you... know not even know the difference... for 6 years.

WTF.

his name is Andrew Hall.

RUNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

if you see this guy coming.
Since I'm judgement proof

RUNNNNNNNNNNNNNNN
if you see this douche coming!

17 Mar 2017 mentifex   » (Master)

Perl Mind Programming Journal 2017-03-16
2017-03-15: Porting AudRecog and AudMem from Forth into Perl

We start today by taking the 336,435 bytes of ghost176.pl from 2017-03-14 and renaming it as ghost177.pl in a text editor. Then in the Windows XP MS-DOS prompt we run the agi00045.F MindForth program of 166,584 bytes from 2016-09-18 in order to see a Win32Forth window with diagnostic messages and a display of "you see dogs" as input and "I SEE NOTHING" as a default output. From a NeoCities upload directory we put the agi00045.F source code up on the screen in a text editor so that we may use the Forth code to guide us in debugging the Perl Strong AI code.

Although in our previous PMPJ entry from yesterday we recorded our steps in trying to get the Perl AudRecog mind-module to work as flawlessly as the Forth AudRecog, today we will abandon the old Perl AudRecog by changing its name and we will create a new Perl AudRecog from scratch just as we did with the Forth AudRecog in 2016 when we were unable to tweak the old Forth AudRecog into a properly working version. So we stub in a new Perl AudRecog() and we comment out the old version by dint of renaming it "OldAudRecog()". Then we run "perl ghost177.pl" and the AI still runs but it treats every word of both input and output as a new concept, because the new AudRecog is not yet recognizing any English words.

Next we start porting the actual Forth AudRecog into Perl, but we must hit three of our Perl reference books to learn how to translate the Forth code testing ASCII values into Perl. We learn about the Perl "chr" function and its inverse "ord", which let us test input characters against ASCII values such as CR-13 or SPACE-32.
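As a tiny illustration (not the Ghost AI code itself): ord() turns a character into its numeric code, and chr() turns a code back into a character, so either direction can be used for the CR-13 and SPACE-32 tests.

my $char = ' ';
print "space\n"           if ord($char) == 32;    # SPACE is ASCII 32
print "carriage return\n" if $char eq chr(13);    # CR is ASCII 13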

Now we have faithfully ported the MindForth AudRecog into Perl, but words longer than one character are not being recognized. Let us comment out AudMem() by naming it OldAudMem() and let us start a new AudMem() from scratch as a port from MindForth.

We port the AudMem code from Forth into Perl, but we may not be getting the storage of SPACE or CR carriage-return.

2017-03-16: Uploading Ghost Perl Webserver Strong AI

Now into our third day in search of stable Perlmind code, we take the 344,365 bytes of ghost177.pl from 2017-03-15 and we save a new file as the ghost178.pl AI. We will try to track passage of characters from AudInput to AudMem to AudRec.

Through diagnostic messages in AudRecog, we discovered that a line of code meant to "disallow audrec until last letter of word" was zeroing out $audrec before the transfer from the end of AudRecog to AudMem.

In a departure from MindForth, we are having the Perl AudRecog mind-module fetch only the most recent recognition of a word. In keeping with MindForth, we implement the auditory storing of a $nxt new concept in the AudInput module, where we also increment the value of $nxt instead of in the NewConcept module.

15 Mar 2017 wingo   » (Master)

guile 2.2 omg!!!

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from working with them in the future.

Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

Syndicated 2017-03-15 22:56:33 from wingolog

15 Mar 2017 mentifex   » (Master)

Perlmind Programming Journal (PMPJ)
Updating the Ghost Perl AI in conformance with MindForth AI.

Today we return to Perl AI coding after updating the MindForth code in July and August of 2016. In Forth we re-organized the calling of the subordinate mind-modules beneath the MainLoop module so as no longer to call the Think module directly, but rather to call the FreeWill module first so that eventually the FreeWill or Volition module will call Emotion and Think and Motorium.

We have discovered, however, that input which encounters a bug in the Perl code is handled properly by the MindForth code, so we must first debug the Perl code. When we enter "you see dogs", MindForth properly answers "I SEE NOTHING", which is the default output for anything involving VisRecog, since we have no robot camera eye attached to the Mind program. The old Perl Mind, however, incorrectly recognizes the input of "DOGS" as if it were a form of the #830 "DO" verb, and so we must correct the Perl code by making it as good as the Forth code. So we take the 335,790 bytes of ghost175.pl from 2016-08-07 and we rename it as ghost176.pl for fresh coding.

We start debugging the Perl AudRecog module by inserting a diagnostic message to reveal the "$audpsi" value at the end of AudRecog. We learn that "DOGS" is misrecognized as "DO" when the input length reaches two characters. We know that MindForth does not misrecognize "DOGS", so we must determine where the Perl AudRecog algorithm diverges from the Forth algorithm. We are fortunate to be coding the AI in both Forth and Perl, so that in Perl we may implement what already works in Forth.
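As an aside, here is a minimal, invented sketch of why the misrecognition happens and how a "wait for the last letter of the word" rule avoids it: a prefix-matching recognizer that only confirms a concept once the whole incoming word has been matched will no longer report DO when it hears DOGS. The %auditory table and the concept numbers below are made up; this is not the actual AudRecog() code.

use strict;
use warnings;

my %auditory = ( DO => 830, DOG => 540, DOGS => 540, SEE => 835 );   # made-up concept numbers

sub recognize_word {
    my ($input) = @_;                 # the complete incoming word, e.g. "DOGS"
    my $audrec = 0;
    for my $len (1 .. length $input) {
        my $prefix = substr($input, 0, $len);
        next unless exists $auditory{$prefix};
        # Only accept the recognition when the matched engram spans the
        # whole input word; otherwise "DO" fires after only two letters.
        $audrec = $auditory{$prefix} if $len == length $input;
    }
    return $audrec;
}

printf "DOGS -> %d\n", recognize_word('DOGS');    # 540, not the verb DO (830)
printf "DO   -> %d\n", recognize_word('DO');      # 830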

In Perl we try commenting out some AudRecog code that checks for a $monopsi. The AI still misrecognizes "DOGS" as the verb "DO". Next we try commenting out some Perl code that declares a $psibase when incoming word-length is only two. The AI still misrecognizes. Next we try commenting out a declaration of $subpsi. We still get misrecognition. We try commenting out another $psibase. Still misrecognition. We even try commenting out a major $audrec declaration, and we still get misrecognition. When we try commenting out a $prc declaration, AudRecog stops recognizing the verb "SEE". Then from MindForth we bring in a provisional $audrec, but the verb "SEE" is not being recognized.

Although at the MS-DOS CLI prompt we can evidently not run MindForth and the Perlmind simultaneously, today we learn that we can run MindForth and leave the Win32Forth window open, then go back to running the Perl AI. Thus we can compare the diagnostic messages in both Forth and Perl so as to further debug the Perl AI. We notice that the Forth AudMem module sends a diagnostic message even for the blank-space ASCII 32 after "SEE", which the Perl AI does not do.

14 Mar 2017 johnw   » (Master)

A case of reflection

A while back, Edward Kmett wrote a library called reflection, based on a 2004 paper by Oleg Kiselyov and Chung-chieh Shan that describes a neat trick for reifying data into types (here the word “reify” can be understood as turning a value into something that can be referenced at the type level). There was also an article written by Austin Seipp on how to use the library, and some great answers on reddit and stackoverflow that go into detail about how it works.

And yet, in all these years, though I’ve been on the lookout for a way to make use of this library, I wasn’t able to fit it into my workflow – until today! So let’s look at my real world use for reflection, which solves a problem that maybe others have encountered as well.

As you may know, the QuickCheck library provides a facility for generating arbitrary data sets. The property testing features of QuickCheck make use of this generation to search for test data that might violate a set of properties.

However, the generation facility can also be used on its own, separate from the testing components, to randomly generate data for any purpose. The library for producing this random data offers lots of combinators, and is based around instances for a type class called Arbitrary. Here’s a basic example:

module Main where

import Test.QuickCheck.Arbitrary
import Test.QuickCheck.Gen

data Foo = Foo [Int] [String]
    deriving Show

instance Arbitrary Foo where
    arbitrary = do
        xs  <- listOf chooseAny
        len <- choose (1, 100)
        ys  <- vectorOf len (shuffle "Hello, world")
        return $ Foo xs ys

main :: IO ()
main = print =<< generate (arbitrary :: Gen Foo)

This creates a specifically shaped set of random data, where the list of integers may be of any length, and any value, but the list of strings will always be from 1 to 100 elements long, and the strings will only consist of random arrangements of the characters found in "Hello, world".

Now, what if you wanted to guide the generation process for Foo using external information? Such as picking the length of the list of strings from a value provided by the user? Since Arbitrary does not allow the use of Reader, how do we get that user-supplied value into the arbitrary function above? And without using global IORefs or unsafePerformIO?

The reflection library allows us to reify a runtime value into a type (whose name we’ll never know, requiring us to reference it through a type variable), and then communicate that type via a constraint, such that we can reflect the value back out as needed. If this sounds a bit confusing, maybe an example can make it clearer:

{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE UndecidableInstances #-}

module Main where

import Data.Proxy
import Data.Reflection
import Test.QuickCheck.Arbitrary
import Test.QuickCheck.Gen
import System.Environment

data Foo s = Foo [Int] [String]
    deriving Show

instance Reifies s Int => Arbitrary (Foo s) where
    arbitrary = do
        xs  <- listOf chooseAny
        len <- choose (1, reflect (Proxy :: Proxy s))
        ys  <- vectorOf len (shuffle "Hello, world")
        return $ Foo xs ys

main :: IO ()
main = do
    [len] <- getArgs
    reify (read len :: Int) $ \(Proxy :: Proxy s) ->
        print =<< generate (arbitrary :: Gen (Foo s))

There are a few additional things to note here:

  1. A phantom type variable has been added to Foo. This type variable associates the reified data to our type, so it can be reflected back out in the instance for this type.

  2. The Arbitrary instance for Foo s has incurred a new constraint, stating that the type represented by s somehow reifies an Int. How this happens is the magic of the reflection library, and uses a clever GHC trick representing Edward’s unique twist on Oleg and Chung-chieh’s work. This instance requires the UndecidableInstances extension.

  3. We now call reify with the data we want to pass along. This function takes a lambda whose first argument is a Proxy s, giving us a way to know which type variable to use in the type of the call to arbitrary. This requires the ScopedTypeVariables extension.

That’s it: reflection gives us a way to plumb extra data into instances at runtime, at the cost of adding a single phantom type.

If the phantom type seems excessive for one use case, or if adding the phantom would affect a large family of types, then an alternative is to enable the FlexibleInstances extension, and use Edward’s tagged library to carry the phantom instead:

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE UndecidableInstances #-}

module Main where

import Data.Proxy
import Data.Tagged
import Data.Reflection
import Test.QuickCheck.Arbitrary
import Test.QuickCheck.Gen
import System.Environment

data Foo = Foo [Int] [String]
    deriving Show

instance Reifies s Int => Arbitrary (Tagged s Foo) where
    arbitrary = fmap Tagged $ do
        xs  <- listOf chooseAny
        len <- choose (1, reflect (Proxy :: Proxy s))
        ys  <- vectorOf len (shuffle "Hello, world")
        return $ Foo xs ys

main :: IO ()
main = do
    [len] <- getArgs
    reify (read len :: Int) $ \(Proxy :: Proxy s) ->
        print . unTagged =<< generate (arbitrary :: Gen (Tagged s Foo))

This way we leave the original type alone – which may be the only option if you’re generating arbitrary data for types from libraries. You’ll just have to wrap and unwrap the Tagged newtype wrapper as necessary.

Another benefit of using Tagged is that, because it can be wrapped and unwrapped as necessary, it becomes possible to change the reified information in cases where nested types are involved. In this last example, the user is allowed to specify the value that should be supplied to the Bar constructor during data generation.

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE UndecidableInstances #-}

module Main where

import Data.Proxy
import Data.Tagged
import Data.Reflection
import Test.QuickCheck.Arbitrary
import Test.QuickCheck.Gen
import System.Environment

newtype Bar = Bar Int
    deriving Show

data Foo = Foo [Bar] [String]
    deriving Show

instance Reifies s Int => Arbitrary (Tagged s Bar) where
    arbitrary = return $ Tagged $ Bar $ reflect (Proxy :: Proxy s)

instance Reifies s (Int, Int) => Arbitrary (Tagged s Foo) where
    arbitrary = fmap Tagged $ do
        let (len, bar) = reflect (Proxy :: Proxy s)
        xs <- listOf (reify bar $ \(Proxy :: Proxy r) ->
                          unTagged <$> (arbitrary :: Gen (Tagged r Bar)))
        l  <- choose (1, len)
        ys <- vectorOf l (shuffle "Hello, world")
        return $ Foo xs ys

main :: IO ()
main = do
    [len, barValue] <- getArgs
    reify (read len :: Int, read barValue :: Int) $ \(Proxy :: Proxy s) ->
        print . unTagged =<< generate (arbitrary :: Gen (Tagged s Foo))

Syndicated 2017-02-23 00:00:00 from Lost in Technopolis

13 Mar 2017 fozbaca   » (Apprentice)

STEM: Still No Shortage – I. M. H. O. – Medium

On a myth that just won’t die

Syndicated 2017-03-13 20:21:59 from fozbaca.org

12 Mar 2017 badvogato   » (Master)

Verlaine, "Clair de lune" (Chinese translation by Liang Zongdai)

Votre âme est un paysage choisi
你的魂是片迷幻的风景
Que vont charmant masques et bergamasques
斑衣的俳优在那里游行,
Jouant du luth et dansant et quasi
他们弹琴而且跳舞——终竟
Tristes sous leurs déguisements fantasques.
彩装下掩不住欲颦的心。

Tout en chantant sur le mode mineur
他们虽也曼声低唱,歌颂
L'amour vainqueur et la vie opportune
那胜利的爱和美满的生,
Ils n'ont pas l'air de croire à leur bonheur
终不敢自信他们的好梦,
Et leur chanson se mêle au clair de lune,
他们的歌声却散入月明——

Au calme clair de lune triste et beau,
散入微茫,凄美的月明里,
Qui fait rêver les oiseaux dans les arbres
去萦绕树上小鸟的梦魂,
Et sangloter d'extase les jets d'eau,
又使喷泉在白石丛深处
Les grands jets d'eau sveltes parmi les marbres.
喷出丝丝的欢乐的咽声。
https://www.douban.com/note/489092495/

11 Mar 2017 fozbaca   » (Apprentice)

Do What You Can't: The Transcript • r/caseyneistat

To the haters. The doubters. My seventh-grade vice-principal. To everyone who’s ever told anyone with a dream: “they can’t.” This video’s for you. Keep your…

Syndicated 2017-03-11 20:16:35 from fozbaca.org

11 Mar 2017 fozbaca   » (Apprentice)

Peter Jackson’s ‘The Hobbit’ Trilogy, Fan-Edited Down To Two Hours

← Main About Contact Table of Contents . (updated November 2015) Hi! I’m Fiona van Dahl/FekketCantenel. I grew up watching and re-watching The Hobbit, the…

Syndicated 2017-03-11 20:16:23 from fozbaca.org

11 Mar 2017 fozbaca   » (Apprentice)

That fainting life

Isabella Rotman drew a comic for The Nib about her life as a hemophobe (someone who faints at the sight of blood). Once at a former deli job, I passed out onto…

Syndicated 2017-03-11 20:16:01 from fozbaca.org

10 Mar 2017 MikeGTN   » (Journeyer)

Going to the Dogs: An Island Apart

I've always been a little nervous about guided walks. From the awkward, rather typically British issue of trying to identify your fellow walkers at the outset - ideally without actually asking anyone - to the tricky etiquette of dispersing at the walk's end, they're a minefield. I once thought I'd like to lead walks - the idea of ambling around places I love with a respectful and engaged bunch of people both asking questions and adding their knowledge was attractive, if unlikely. Of course the reality is often different: bored tourists "doing" the sights, loudmouthed know-alls trying to upstage the...

Syndicated 2017-03-04 23:03:00 from Lost::MikeGTN

9 Mar 2017 mentifex   » (Master)

Strong AI Theory of Mind Considerations

We may need to add a tru tag to the conceptual flag-panel in the various AI Minds, such as in Forth and in Perl. Only the first word in the thought-engram will need a tru tag. We may want to have the following tags in the panel.

tru psi act hlc pos jux pre iob tkb seq num mfn dba rv

Active code will probably assign a numeric "true" value, so that only the most current thoughts will have an assumption of truth and believability. Preterite be-verb assertions like "Kilroy is here" shall have decayed down to a low tru value so that they will not be taken at face value by the thinking Mind. On the other hand, non-be-verb knowledge about the ontology of the world will need to be regarded as true.

As the thinking AI associates from thought to thought, sentence-engrams with a low truth-value should not come into play. Various criteria may cause some engrams to go to a mid-range truth-value and other engrams to a minimal truth-value, so that reliable knowledge may come into play.
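A minimal sketch of how such truth-value decay and filtering might look in Perl is shown below; the engrams, the numeric tru values and the decay rule are all invented for illustration and are not actual AI Mind code.

use strict;
use warnings;

# Each thought-engram carries a flag-panel; only its first word needs the tru value.
my @engram = (
    { idea => 'KILROY IS HERE',   tru => 64 },   # preterite be-verb assertion
    { idea => 'BOATS COST MONEY', tru => 64 },   # ontological knowledge
);

# Let the truth-value of be-verb assertions decay over time,
# while general knowledge about the world keeps its value.
sub decay_tru {
    for my $e (@engram) {
        $e->{tru} = int($e->{tru} / 2) if $e->{idea} =~ /\bIS\b/;
    }
}

decay_tru() for 1 .. 3;               # three decay cycles

# Associative thinking should only pass through engrams that are still believable.
for my $e (@engram) {
    next if $e->{tru} < 16;           # skip low truth-value engrams
    print "usable: $e->{idea} (tru=$e->{tru})\n";
}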

The tru-tag will permit rather elaborate ideas to emerge back into consciousness with emphasis on special considerations such as the inclusion of a prepositional phrase in the idea, as in a sentence like, "A man with a boat needs money". The 3D AI will therefore need not only modules for thinking with prepositional phrases, but also modules for conjunctions to be used in sentences like, "I know that time is money" or "I think that boats cost money." The routines for comprehension will need to be modernized or adjusted to allow parts of a long input sentence to be comprehended upon selection of a likely subject.

To some extent, we are aiming for a conscious AI Mind that realizes that it lives inside a computer and that it has only limited interaction with the outside world. It may need the ASCII bell-function as a way of deliberately summoning the attention of a human user.

8 Mar 2017 sye   » (Journeyer)

wechat machine translation

📰哈群新闻编辑部📖
2017年2月27日,星期一,祝大家愉快! (本日主编 Judy)
 
1.  上周末是哈佛本科2018 届家长日。家长们聚会校园,听讲座看比赛参观博物馆,和子女共进午餐其乐融融。超过百位华人家长出席活动。(哈群编辑部)
2.  一印度工程师在美遭枪击身亡,另有两名受伤。开枪者系美国退伍老兵,在射击前曾大喊"滚出我们的国家"。(FOX)
3.  新奥尔良狂欢节游行人群遭一卡车冲撞,至少28人受伤。警方已拘留一名疑似醉酒的肇事男子。(CNN)
4.  前美国劳动部部长Tom Perez 以235:200投票结果胜出,当选美国民主党主席。(CBS)
5.  川普继续与媒体开战,CNN、纽约时报、洛杉矶时报等被拒媒体发布会。川普推文宣布将不参加今年的白宫记者协会晚宴。(综合)
6.  巴菲特有望赢得与对冲基金Protege Partners的10年100万美元的赌局。截至2015年末回报率:巴菲特66%,Protege 22%。(WSJ)
7.  谷歌团队攻破网络加密基石SHA-1算法。(WSJ)
8.  周六中国拥有全部自主知识产权的"子弹头"列车开跑京广线。(BBC)
9.  沙特超过俄罗斯再度成为中国的第一大原油供应国。(WSJ)
10. 中国力争2020年末黄金年产量550吨,以增强中国的金融和经济基础以及抵御风险的能力。(WSJ)
11. 英国的香港半年报告书指出,过去半年的连串事态发展,令港人及国际社会对一国两制的落实表示关注。中国外交部要求英国谨言慎行,停止干预香港事务。(BBC/中国日报)
12. 马来西亚卫生部长周日说,金正男遭受大剂量VX神经毒剂攻击,身体主要器官在15-20分钟内衰竭致死亡。(BBC)
13. 中国国内首条"环中国海"邮轮航线开通。(中新网)
14. 第89届奥斯卡奖揭晓:《月光男孩》获最佳影片奖。哈佛校友达米恩·查泽雷获《爱乐之城》最佳导演奖,另一位校友贾斯汀·赫维玆获最佳原创音乐和最佳原创歌曲奖。详情: http://oscar.go.com/winners
 
【特刊】"妈妈常常说'人生就如同一盒巧克力,你永远无法知道下一粒是什么味道'。"      --《阿甘正传》 (Forrest Gump) 1994年奥斯卡最佳影片
 
(欢迎转载,请保留全文 — ©️哈佛家长群) 📰 haqun Newsroom  on Monday, February 27, 2017, I wish you all happy!  (Our editor Judy) 1. Last weekend, a Harvard undergraduate 2018 parents day. Parents party campus, lecture the game visit the Museum, lunch with children happy. More than hundreds of Chinese parents attended the event.  (Group of editors) 2. An Indian engineer was shot and killed in the United States, and another two were injured. Fire Department of Veterans of the United States, prior to the shooting had shouted "get out of our country." (FOX) 3. New Orleans Mardi Gras marchers by a truck collision, at least 28 people were injured. Police have detained a suspected drunken men of the incident. (CNN) 4. the former US Labor Secretary Tom Perez won with 235:200 votes, was elected Chairman of the Democratic Party of the United States. (CBS) 5. Trump continued to battle with the media, such as CNN, the New York Times, the Los Angeles Times refused a press release. Trump tweets announced that it would not participate in this year's White House Correspondents ' Association dinner.  (Consolidation) 6. Buffett is expected to win with the hedge fund Protege Partners of the 10-year, $1 million gamble. By the end of 2015 return: he 66%,Protege 22%.  (WSJ) 7. Google Team Foundation SHA-1 against network data encryption algorithm.  (WSJ) 8. Saturday China has full independent intellectual property rights of "bullet" trains running the Beijing-Guangzhou line.  (BBC) 9. Saudi Arabia over Russia again to become China's largest crude oil supplier. (WSJ) 10. strive to the end of 2020, China gold output of 550 tons, to enhance China's financial and economic, as well as the ability to resist risks. (WSJ) 11. British Hong Kong six months report points out that a series of developments over the past six months, Hong Kong people and the international community expressed concern about the implementation of the one country, two systems. China's Foreign Ministry asked the British speak, stop interfering in Hong Kong's Affairs. (China daily BBC/) 12. Malaysia's Health Minister said Sunday that Kim Jong Nam suffered from large doses of VX nerve agent attack, major organ failure in 15-20 minutes of the body will die. (BBC) 13. China's first "China Sea" cruise line opened. (Talmadge) 14. The 89th annual Academy Awards: the Moonlight boy won the best film award. Harvard Alumni Damien·chazelei Philharmonic, city of of the best Director prize, another alumnus Justin d Zi won the best original music award and best original song. Details: http://Oscar.go.com/winners "Special Edition," "Mama always said life was like a box of chocolates, you never know when the next one is going to get. "--Gump (Forrest Gump) 1994 Academy Award for best picture (welcome to reprint, please retain the full text- ️ Harvard parents group)


syndicated from nuniabiz.blogspot.com

Syndicated 2017-02-27 17:40:00 (Updated 2017-03-08 16:02:08) from badvogato

21 Mar 2017 LaForge   » (Master)

Returning from TelcoSecDay 2017 / General Musings

I'm just on my way back from the Telecom Security Day 2017 <https://www.troopers.de/troopers17/telco-sec-day/>, which is an invitation-only event about telecom security issues hosted by ERNW back-to-back with their Troopers 2017 <https://www.troopers.de/troopers17/> conference.

I've been presenting at TelcoSecDay in previous years and hence was again invited to join (as attendee). The event has really gained quite some traction. Where early on you could find lots of IT security / hacker crowds, the number of participants from the operator (and to a smaller extent also equipment maker) industry has been growing.

The quality of talks was great, and I enjoyed meeting various familiar faces. It's just a pity that it's only a single day - plus I had to head back to Berlin still today so I had to skip the dinner + social event.

When attending events like this, and seeing the interesting hacks that people are working on, it pains me a bit that I haven't really been doing much security work in recent years. netfilter/iptables was at least somewhat security related. My work on OpenPCD / librfid was clearly RFID security oriented, as was the work on airprobe, OsmocomTETRA, or even the EasyCard payment system hack.

I have the same feeling when attending Linux kernel development related events. I have very fond memories of working in both fields, and it was a lot of fun. Also, to be honest, I believe that the work in Linux kernel land and the general IT security research was/is appreciated much more than the endless months and years I'm now spending my time with improving and extending the Osmocom cellular infrastructure stack.

Beyond the appreciation, it's also the fact that both the IT security and the Linux kernel communities are much larger. There are more people to learn from and learn with, to engage in discussions and ping-pong ideas. In Osmocom, the community is too small (and I have the feeling, it's actually shrinking), and in many areas it rather seems like I am the "ultimate resource" to ask, whether about 3GPP specs or about Osmocom code structure. What I'm missing is the feeling of being part of a bigger community. So in essence, my current role in the "Open Source Cellular" corner can be a very lonely one.

But hey, I don't want to sound more depressed than I am, this was supposed to be a post about TelcoSecDay. It just happens that attending IT Security and/or Linux Kernel events makes me somewhat gloomy for the above-mentioned reasons.

Meanwhile, if you have some interesting projects/ideas at the border between cellular protocols/systems and security, I'd of course love to hear if there's some way to get my hands dirty in that area again :)

Syndicated 2017-03-21 17:00:00 from LaForge's home page

8 Mar 2017 mjg59   » (Master)

The Internet of Microphones

So the CIA has tools to snoop on you via your TV and your Echo is testifying in a murder case and yet people are still buying connected devices with microphones in and why are they doing that the world is on fire surely this is terrible?

You're right that the world is terrible, but this isn't really a contributing factor to it. There's a few reasons why. The first is that there's really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn't know what would happen to their exploit over firmware updates and a "fake off" mode that left a lit LED which wouldn't be there if the TV were actually off, so there's a potential for failed updates and people noticing that there's something wrong. It's certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.

But let's say it did get to the stage of being deployable - there's still not a great deal to worry about. No remote infection mechanism is described, so they'd need to do it locally. If someone is in a position to reflash your TV without you noticing, they're also in a position to, uh, just leave an internet connected microphone of their own. So how would they infect you remotely? TVs don't actually consume a huge amount of untrusted content from arbitrary sources[2], so that's much harder than it sounds and probably not worth it because:

YOU ARE CARRYING AN INTERNET CONNECTED MICROPHONE THAT CONSUMES VAST QUANTITIES OF UNTRUSTED CONTENT FROM ARBITRARY SOURCES

Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it. But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of io pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying and some people want more inputs than outputs and some people vice versa, so it's not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.

Anyone who's ever plugged a microphone into a speaker jack probably knows where I'm going with this. An attacker can "turn off" your TV, reconfigure the internal speaker output as an input and listen to you on your "microphoneless" TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you're telling them to disconnect the internal speakers as well.

If you're in a situation where you have to worry about an intelligence agency monitoring you, your TV is the least of your concerns - any device with speakers is just as bad. So what about Alexa? The summary here is, again, it's probably easier and more practical to just break your phone - it's probably near you whenever you're using an Echo anyway, and they also get to record you the rest of the time. The Echo platform is very restricted in terms of where it gets data[3], so it'd be incredibly hard to compromise without Amazon's cooperation. Amazon's not going to give their cooperation unless someone turns up with a warrant, and then we're back to you already being screwed enough that you should have got rid of all your electronics way earlier in this process. There are reasons to be worried about always listening devices, but intelligence agencies monitoring you shouldn't generally be one of them.

tl;dr: The CIA probably isn't listening to you through your TV, and if they are then you're almost certainly going to have a bad time anyway.

[1] Which I have obviously not read
[2] I look forward to the first person demonstrating code execution through malformed MPEG over terrestrial broadcast TV
[3] You'd need a vulnerability in its compressed audio codecs, and you'd need to convince the target to install a skill that played content from your servers

Syndicated 2017-03-08 01:30:19 from Matthew Garrett

7 Mar 2017 zeenix   » (Journeyer)

GDP meets GSoC

Are you a student? Passionate about Open Source? Want your code to run on the next generation of automobiles? You're in luck! The Genivi Development Platform will be participating in Google Summer of Code this summer and you are welcome to participate. We have collected a bunch of ideas for what would make a good three-month project for a student, but you're more than welcome to suggest your own project. The ideas page also has instructions on how to get started with GDP.

We look forward to your participation!

Syndicated 2017-03-07 18:11:00 (Updated 2017-03-07 18:11:57) from zeenix
