Recent blog entries

23 May 2017 LaForge   » (Master)

Power-cycling a USB port should be simple, right?

Every so often I happen to be involved in designing electronics equipment that's supposed to run reliably in remote, inaccessible locations, without any ability for "remote hands" to perform things like power-cycling or the like. I'm talking about really remote locations, possibly with no or only limited back-haul, and a very high cost of ever sending somebody there for maintenance.

Given that a lot of computer peripherals (chips, modules, ...) use USB these days, this is often some kind of an embedded ARM (rarely x86) SoM or SBC, which is hooked up to a custom board that contains a USB hub chip as well as a line of peripherals.

One of the most important lessons I've learned from experience is: Never trust reset signals / lines, always include power-switching capability. There are many chips and electronics modules on the market that either have no RESET at all, or that claim to have a hardware RESET line which you later (painfully) discover to be just a GPIO polled by software, which can get stuck - and hence there is no way to really hard-reset the given component.

In the case of a USB-attached device (even though the USB might only exist on a circuit board between two ICs), this is typically rather easy: a USB hub is generally capable of switching the power of its downstream ports. Many cheap USB hubs don't implement this at all, or implement only ganged switching, but if you carefully select your USB hub (or, on a custom PCB, the hub chip), you can make sure that it supports individual port power switching.

Now the next step is how to actually use this from your (embedded) Linux system. It turns out to be harder than expected. After all, we're talking about a standard feature that has been present in the USB specifications since USB 1.x in the late 1990s. So the expectation is that it should be straightforward to do with any decent operating system.

I don't know how it is on other operating systems, but on Linux I couldn't really find a proper way to do this cleanly. For more details, please read my post to the linux-usb mailing list.
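For reference, the operation itself is a standard hub-class control request that clears the PORT_POWER feature on the downstream port. A rough sketch of issuing it from userspace - assuming a hub that really implements per-port switching, and using the third-party uhubctl tool (tool choice, hub location and port number are illustrative assumptions, not something from the mailing list post):

# what goes over the wire (USB 2.0 spec, chapter 11):
#   bmRequestType=0x23, bRequest=CLEAR_FEATURE (1),
#   wValue=PORT_POWER (8), wIndex=<port number>
$ uhubctl -l 1-1 -p 2 -a off    # power down port 2 of the hub at location 1-1
$ sleep 2
$ uhubctl -l 1-1 -p 2 -a on     # and power it back up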

Why am I running into this now? Is it such a strange idea? I mean, power-cycling a device should be the most simple and straight-forward thing to do in order to recover from any kind of "stuck state" or other related issue. Logical enabling/disabling of the port, resetting the USB device via USB protocol, etc. are all just "soft" forms of a reset which at best help with USB related issues, but not with any other part of a USB device.

And in the case of e.g. a USB-attached cellular modem, we're actually talking about a multi-processor system with multiple built-in micro-controllers, at least one DSP, an ARM core that might run another Linux itself (to implement the USB gadget), ... - certainly complex enough software that you would want to be able to power-cycle it...

I'm curious what the response of the Linux USB gurus is.

Syndicated 2017-05-23 22:00:00 from LaForge's home page

22 May 2017 MikeGTN   » (Journeyer)

London's Other Orbitals: Walking the A110

I realised as I sat on the near-empty Northern Line train, shuddering noisily into the light somewhere in Finchley, that my previous visit to this part of London had been a very long time ago. In fact my last traversal of this part of the Underground network was before I kept records of such things. In the mid-1990s I'd embarked on a project to cover as much of the Tube as I could - despite a crippling phobia regarding escalators - but this involved nothing more fastidious than marking the lines on a map. Now I was clanking into...

Syndicated 2017-05-06 22:05:00 from Lost::MikeGTN

18 May 2017 mikal   » (Journeyer)

The Collapsing Empire

ISBN: 076538888X
LibraryThing
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation



Syndicated 2017-05-17 21:46:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 May 2017 lloydwood   » (Journeyer)

Keyboard logging on Hewlett-Packard laptops

Versions of the Conexant audio driver on HP laptops can log every keystroke to disk, writing to a visible file in C:\Users\Public. Your passwords, everything.

So HP issued a driver update. But that driver update is reported to still have the logging capability, turned off. Logging can be reactivated with a simple registry hack.

My future plans do not include buying devices from Hewlett-Packard, or investing in Conexant stock.

Update: HP's official security bulletin on the issue. Meanwhile, the logger can be reused and exploited.

13 May 2017 mones   » (Journeyer)

Disabling "flat-volumes" in pulseaudio

Today I've just faced another of those happy ideas some people implement in software, which can be useful in some cases, but can also be bad as default behaviour.

The problems caused were fortunately already posted to the Debian mailing lists, as well as their solution, which in a default Debian configuration basically means:

# note: `sudo echo ... >> file` would fail - the redirection runs in the unprivileged shell
$ echo "flat-volumes = no" | sudo tee -a /etc/pulse/daemon.conf
$ pulseaudio -k && pulseaudio

And I think the default for Stretch should be set as above: raising the volume to 100% just because of a system notification, while useful for some, is not what common users expect.

Syndicated 2017-05-13 15:12:48 from Ricardo Mones

12 May 2017 mikal   » (Journeyer)

Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:
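The post breaks off here in this feed. For completeness, a minimal sketch of the stock python3 route (Debian-style package name and paths are my assumptions, not necessarily what the original post used):

$ sudo apt-get install python3-venv     # package name assumed (Debian/Ubuntu)
$ python3 -m venv ~/virtualenvs/demo    # path is illustrative
$ . ~/virtualenvs/demo/bin/activate
(demo) $ pip install requests           # packages now land inside the venv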

10 May 2017 MikeGTN   » (Journeyer)

London's Other Orbitals

I set out to write a fairly ordinary report on a long, satisfying walk I recently completed - and as the introduction grew, I knew I was in fact writing something else. A justification, a manifesto, or just a project plan perhaps - in any case an explanation of what led me to decide to walk along an uncelebrated North London A-road from somewhere to somewhere else. I didn't feel the need to justify this to anyone except perhaps myself - but I felt the uneasy stirrings of a project forming - and that's always dangerous. So, the ramblings below...

Syndicated 2017-05-06 21:05:00 from Lost::MikeGTN

10 May 2017 mones   » (Journeyer)

Building on an RPi without disc

Nothing like a broken motherboard to experiment with alternative ways of building software. This time I've tried to use a Raspberry Pi and, to avoid wearing out the SD card too much, an NFS mount on a Synology NAS. It happens that both items were generously donated by two different Claws Mail users some years ago, thanks to them! ;-)
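For illustration, such a mount is a one-liner (NAS host name and export path invented for the example; the mount point matches the logs below):

$ sudo mount -t nfs nas.local:/volume1/claws /home/mones/nfs/claws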

So, after installing all build dependencies and a build helper, how long did it take?

configure-claws: Tue May 9 13:28:55 UTC 2017
cd b-claws && env PKG_CONFIG_PATH=/opt/claws/lib/pkgconfig ./configure --enable-maintainer-mode --prefix=/opt/claws > /home/mones/nfs/claws/log-configure-claws.txt 2>&1 && cd ..
configure-claws: Tue May 9 13:34:09 UTC 2017
compile-claws: Tue May 9 13:34:09 UTC 2017
cd b-claws && make -j2 > /home/mones/nfs/claws/log-compile-claws.txt 2>&1 && cd ..
compile-claws: Tue May 9 15:44:28 UTC 2017

Yep, that is more than 5 minutes for configuring and more than 130 minutes for compiling. Not for those in a hurry, but I've built kernels which took longer, some decades ago :-)

And if you want to know how to break a motherboard...


One day you're converting some raw photos to JPEG with RawTherapee and the computer shuts down. Then you try again and notice the temperature of the CPU is too high, and it shuts down again, of course. You boot into the BIOS and then realize that the thermal protection shutdown was enabled (and you thank your past self for having enabled it!). The next day is Sunday and you try to clean inside the case, but there's not much dirt to clean. Dismounting the CPU cooler reveals that the thermal compound is nearly gone, though.

The following day you try to buy some thermal grease, but the corner store only has a "P.R.C."-labelled syringe and some thermal pads from CoolBox. The thermal pad seems to work fine on first boot, until you try RawTherapee and it shuts down again. Crying doesn't help as you watch the temperature monitor increase one degree per second while you're staring at the BIOS (and it shuts down again).

Another day passes and you go to another local store, a bit further away than the first, firmly determined to get a real thermal compound. Nevertheless the store only has two options: an expensive one and a cheap one. The store employee says the cheap one works fine, and its label indeed shows better specs than the expensive one. So, not without some hesitation, you buy the cheaper one, which is made by (you guessed it) CoolBox.

Back at home you remove the thermal pad, clean the cooler and the processor, and try to apply not too much compound. Somewhere around here is where things go wrong. Maybe while trying to put the cooler in place, maybe while applying compound a second time. The fact is that now there's no video output anymore, and no power is being delivered to the USB ports. No video, no keyboard and no idea what's next.

Anyway, there aren't many alternatives; the problem is knowing which part is damaged: CPU, motherboard or both. Ideas welcome ;-)

Syndicated 2017-05-09 23:32:56 from Ricardo Mones

9 May 2017 mjg59   » (Master)

Intel AMT on wireless networks

More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:

  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course


Syndicated 2017-05-09 20:18:21 from Matthew Garrett

8 May 2017 zeenix   » (Journeyer)

Rust Memory Management

In the light of my latest fascination with the Rust programming language, I've started giving small presentations about Rust at my office, since I'm not the only one at our company who is interested in Rust. My first presentation in Feb was a very general introduction to the language, but at that time I had not yet really used the language for anything real, so I was a complete novice and didn't have a very good idea of how memory management really works. While working on my gps-share project in my limited spare time, I came across quite a few issues related to memory management, but I overcame all of them with help from the kind folks at the #rust-beginners IRC channel and the small but awesome Rust-GNOME community.

Having learnt some essentials of memory management, I thought I'd share my knowledge/experience with folks at the office. The talk was not well attended due to conflicts with other meetings at the office, but the few folks who did attend were very interested and asked some interesting and difficult questions (i.e. the perfect audience). One of the questions was whether I could put this up as a blog post, so here I am. :)

Basics


Let's start with some basics: In Rust,

  1. Stack allocation is preferred over heap allocation, and that's where everything is allocated by default.
  2. There are strict ownership semantics involved, so each value can have one and only one owner at any particular time.
  3. When you pass a value to a function, you move the ownership of that value to the function argument; similarly, when you return a value from a function, you pass the ownership of the return value to the caller.

Now these rules make Rust very secure, but at the same time, if you had no way to allocate on the heap, or to share data between different parts of your code and/or threads, you couldn't get very far with Rust. So we're provided with mechanisms to (kind of) work around these very strict rules, without compromising the safety these rules provide. Let's start with simple code that would work fine in many other languages:

fn add_first_element(v1: Vec<i32>, v2: Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(v1, v2);

    // ERROR: `v1` and `v2` were moved into add_first_element() above
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This gives us an error from rustc:

error[E0382]: use of moved value: `v1`
--> sample1.rs:13:30
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v1` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

error[E0382]: use of moved value: `v2`
--> sample1.rs:13:37
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v2` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

What's happening is that we passed 'v1' and 'v2' to add_first_element(), and hence we passed their ownership to add_first_element() as well, so we can't use them afterwards. If Vec were a Copy type (like all primitive types), we wouldn't get this error, because Rust would copy the values and pass those copies to add_first_element(). In this particular case the solution is easy:

Borrowing


fn add_first_element(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(&v1, &v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This one compiles and runs as expected. What we did was convert the arguments into reference types. References are Rust's way of borrowing ownership. So while add_first_element() is running, it borrows 'v1' and 'v2', but only until it returns. Hence this code works.

While borrowing is very nice and very helpful, in the end it's temporary. The following code won't build:

struct Heli {
    reg: String
}

impl Heli {
    fn new(reg: String) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(reg);

    println!("Registration {}", reg);
    heli.hover();
}

rustc says:

error[E0382]: use of moved value: `reg`
--> sample3.rs:20:33
|
18 | let heli = Heli::new(reg);
| --- value moved here
19 |
20 | println!("Registration {}", reg);
| ^^^ value used here after move
|
= note: move occurs because `reg` has type `std::string::String`, which does not implement the `Copy` trait

If String had the Copy trait implemented for it, this code would have compiled. But if efficiency is a concern for you at all (it is for Rust), you wouldn't want most values to be copied around all the time. We can't use a reference here, as Heli::new() above needs to keep the passed 'reg'. Also note that the issue here is not that 'reg' was passed to Heli::new() and later used by Heli::hover(), but the fact that we tried to use 'reg' after we had given its ownership to the Heli instance through Heli::new().

I realize that the above code doesn't make use of borrowing, but if we were to make use of it, we'd have to declare lifetimes for the 'reg' field, and the code still wouldn't work because we want to keep 'reg' in our Heli struct. There is a better solution here:

Rc


use std::rc::Rc;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    println!("Registration {}", reg);
    heli.hover();
}

This code builds and runs successfully. Rc stands for "Reference Counted", so putting data into this generic container adds reference counting to the data in question. Note that while you have to explicitly call the clone() method of Rc to increment its refcount, you don't need to do anything to decrease it. Each time an Rc reference goes out of scope, the refcount is decremented automatically, and when it reaches 0, the containing Rc and its contained data are freed.
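To make the counting visible, here is a tiny sketch of my own (not from the talk) using Rc::strong_count:

use std::rc::Rc;

fn main() {
    let a = Rc::new("G-HONI".to_string());
    println!("{}", Rc::strong_count(&a)); // 1

    {
        let b = a.clone();                    // refcount goes up...
        println!("{}", Rc::strong_count(&b)); // 2
    }                                         // ...and back down when `b` drops

    println!("{}", Rc::strong_count(&a)); // 1
}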

Cool, Rc is super easy to use, so can we just use it in all situations where we need shared ownership? Not quite! You can't use Rc to share data between threads. So this code won't compile:

use std::rc::Rc;
use std::thread;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

It results in:

error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
--> sample5.rs:22:13
|
22 | let t = thread::spawn(move || {
| ^^^^^^^^^^^^^ within `[closure@sample5.rs:22:27: 24:6 heli:Heli]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
|
= note: `std::rc::Rc<std::string::String>` cannot be sent between threads safely
= note: required because it appears within the type `Heli`
= note: required because it appears within the type `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
= note: required by `std::thread::spawn`

The issue here is that to be able to share data between threads, the data must be of a type that implements the Send trait. However, not only would implementing Send for all types be a very impractical solution, there are also performance penalties associated with implementing it (which is why Rc doesn't implement Send).

Introducing Arc


Arc stands for Atomic Reference Counting and it's the thread-safe sibling of Rc.

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

This one works, and the only difference is that we used Arc instead of Rc. Cool, so now we have a very efficient but thread-unsafe way to share data between different parts of the code, as well as a thread-safe mechanism. We're done then? Not quite! This code won't work:

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<String>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        self.status.clear();
        self.status.push_str("hovering");
        println!("{} is {}", self.reg, self.status);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new("".to_string());
    let mut heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("main: {} is {}", reg, status);

    t.join().unwrap();
}

This gives us two errors:

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:16:9
|
16 | self.status.clear();
| ^^^^^^^^^^^ cannot borrow as mutable

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:17:9
|
17 | self.status.push_str("hovering");
| ^^^^^^^^^^^ cannot borrow as mutable

The issue is that Arc is unable to handle mutation of data from different threads, and hence doesn't give you a mutable reference to the contained data.

Mutex


For sharing mutable data between threads, you need another type in combination with Arc: Mutex. Let's make the above code work:

use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<Mutex<String>>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<Mutex<String>>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        let mut status = self.status.lock().unwrap();
        status.clear();
        status.push_str("hovering");
        println!("thread: {} is {}", self.reg, status.as_str());
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new(Mutex::new("".to_string()));
    let heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });

    println!("main: {} is {}", reg, status.lock().unwrap().as_str());

    t.join().unwrap();
}

This code works. Notice how you don't have to explicitly unlock the mutex after using it. Rust is all about scopes: when the locked value goes out of scope, the mutex is automatically unlocked.

Other container types


Mutexes are rather expensive, and sometimes you have data shared between threads where not all threads are mutating it (all the time); that's where RwLock becomes useful. I won't go into details here, but it's almost identical to Mutex, except that threads can take read-only locks, and since it's possible to safely share non-mutable state between threads, it's a lot more efficient than threads locking each other out every time they access the data.
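As a minimal sketch of my own (not from the talk), one writer followed by several concurrent readers:

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let status = Arc::new(RwLock::new("parked".to_string()));

    let writer = {
        let status = status.clone();
        thread::spawn(move || {
            let mut s = status.write().unwrap(); // exclusive write lock
            s.clear();
            s.push_str("hovering");
        })
    };
    writer.join().unwrap();

    // any number of threads may hold read locks at the same time
    let readers: Vec<_> = (0..3).map(|i| {
        let status = status.clone();
        thread::spawn(move || {
            println!("reader {}: {}", i, status.read().unwrap().as_str());
        })
    }).collect();

    for r in readers {
        r.join().unwrap();
    }
}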

Another container type I didn't mention above is Box. The basic use of Box is as a very generic and simple way of allocating data on the heap. It's typically used to turn an unsized type into a sized type. The module documentation has a simple example of that.
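For example (again a sketch of my own), a trait object is unsized, and Box turns it into a sized, heap-allocated value you can return:

use std::fmt::Display;

fn boxed_label(reg: &str) -> Box<Display> {
    // a bare trait type is unsized; Box<Display> is a sized value
    // (newer Rust spells this Box<dyn Display>)
    Box::new(format!("registration {}", reg))
}

fn main() {
    let label = boxed_label("G-HONI");
    println!("{}", label);
}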

What about lifetimes


One of my colleagues who had had some experience with Rust was surprised that I didn't cover lifetimes in my talk. Firstly, I think they deserve a separate talk of their own. Secondly, if you make clever use of the container types available to you and described above, most often you don't have to deal with lifetimes. Thirdly, lifetimes in Rust are something I still struggle with each time I have to deal with them, so I feel a bit unqualified to teach others how they work.

The end


I hope you find some of the information above useful. If you are looking for other resources on learning Rust, the Rust book is currently your best bet. I am still a newbie at Rust, so if you see any mistakes in this post, please do let me know in the comments section.

Happy safe hacking!

Syndicated 2017-05-08 07:00:00 (Updated 2017-05-08 20:49:43) from zeenix

8 May 2017 mikal   » (Journeyer)

Things I read today: the best description I've seen of metadata routing in neutron

I happened upon a thread about OVN's proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I'm just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Tags for this post: openstack nova neutron metadata ovn
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Nova vendordata deployment, an excessively detailed guide; One week of Nova Kilo specifications; Specs for Kilo; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler


Syndicated 2017-05-07 17:52:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

6 May 2017 eMBee   » (Journeyer)

Everlasting Life Through Cyberspace

The idea has been brought up a few times, that we would upload our minds into a computer and in this way would be able to live forever.

What would that look like?

At its base, every uploaded mind would be represented by a program with access to data storage and computing resources. These programs could interact with each other, not unlike programs on the internet today - for example in a virtual reality game world (Second Life, etc.).

In uploaded form we would manipulate our environment through other programs that we create ourselves or that others create for us. There might be a market to trade these programs and their products.

Now, anything that is programmable will have bugs and be exploitable, therefore it will be necessary to protect against such attacks. Much like we use encryption today to keep things private, encryption and self-protection will play a large role in such a cyberworld.

Unlike today, where we can rely on physical protection, healthcare, etc. to support us, in cyberspace all protection comes in the form of programs. That means that instead of relying on others to act on our behalf in order to protect us, every one of us will be able to get a copy of a protection program that we can then control ourselves. It's all bits and bytes, so there is no reason to assume that we would not be able to have full control over our environment.

We could build cyberspace without these protections, but it is inconceivable that anyone would accept a world where their own well-being is not ensured, either from the outside or from the inside. But if everyone is uploaded, then there is no more outside, and therefore all protection must come from the inside, through programs. And since even now most people are not skilled programmers, and that is unlikely to change much in the future, it is hard to imagine that people would willingly enter a world where their life depends on their programming skills. No, people must be able to trust that no harm will come to them in this new world, where they otherwise would not have the skill to protect themselves.

The reason we feel safe in this world is that we have agreed to a set of laws which we enforce, and for the most part, crimes are visible and can be prosecuted. People who live in places where this is not the case don't feel safe, and no one would willingly leave their safe home to move to such an area.

In a cyberworld, such safety can only be achieved by making crime impossible to begin with, because given enough resources, a computer program can do damage without leaving a trace.

This has a few severe implications.

If real crime is impossible and we further have full control over our own protection, controlling what data we receive that could possibly offend us, then effectively we can no longer be hurt. There is no physical pain anyway, and any virtual pain we could just switch off.

If we can not be hurt, the corollary is that we can not really hurt anyone. We can not do anything that has any negative consequences on anyone else.

We can not even steal someone's computing resources - or rather, we probably could, but there would be no point, because even if computing resources are unevenly distributed, it would not matter.

There is no sense of time, since any sense of time is simulated and can be controlled. So if we were trying to build something that takes lots of time, we could adjust our sense of time so that we would not have to feel the wait for the computation to complete. With that in mind, stealing resources to make computation faster would become meaningless.

And if we could take all resources from someone, then that would effectively kill them as their program could no longer run. It would be frozen in storage. Allowing this could start a war of attrition that would end with one person controlling everyone else and everyone fearing for their life. It just doesn't make sense to allow that.

In other words we no longer have freedom to do evil. Or more drastically, we no longer have complete free will. Free will implies the ability to chose evil. Without that choice free will is limited.

In summary life in cyberspace has the following attributes:

  • We will all live there eternally (well, as long as the computer keeps running).
  • There is no sense of time.
  • We will keep our sense of identity.
  • We will be able to interact with every human ever uploaded.
  • We will continue to advance and develop.
  • There is no power to do evil.
  • We will be able to affect the physical world, and the physical world will affect us.

Here is what cyberspace looks like from the outside:

  • When a person is uploaded, their physical body ceases to function and decays.
  • Everyone can be uploaded; there are no specific requirements or conditions that would prevent anyone from being uploaded.
  • We are assuming that we will be able to communicate with those in cyberspace, but imagine how it would be if we could not communicate with an uploaded person. We would then actually not be able to tell whether they had successfully uploaded or not. We would in fact not even be able to tell whether cyberspace exists at all, and we would have to take a leap of faith that it is real.

These attributes are all attributes of life after death as described at least by the Baha'i faith, and possibly other religions.

So maybe, cyberspace already exists, and death is just the upload process? Maybe we are simply not yet advanced enough to perceive or understand our life beyond the point of upload from the outside?

Maybe we just need to evolve further before we are able to communicate with those who are uploaded?

Syndicated 2017-05-06 04:06:33 (Updated 2017-05-06 06:05:15) from DevLog

4 May 2017 johnw   » (Master)

Monads are monoid objects


Lately I’ve been working again on my Category Theory formalization in Coq, and just now proved, in a completely general setting, the following statement:

Monads are monoid (objects) in the (monoidal) category of endofunctors (which is monoidal with respect to functor composition).

The proof, using no axioms, is here.

Now, just how much category theory was needed to establish this fact?

Categories

We start with the concept of a category, which has objects of some Type, and arrows between objects of some other Type. In this way, objects and arrows can be almost anything, except they must provide: identity arrows on every object, and composition of arrows, with composition being associative and identity arrows having no effect on composition.

All the arrows between two objects form a set of arrows, called a "hom-set". In my library, these are actually constructive hom-setoids, allowing a category-specific definition of what it means for two members of a hom-setoid to be "equivalent". The fact that it is constructive means that the witness to this equivalence must be available to later functions and proofs, and not only the fact that a witness had been found.

Functors

Given two categories, which may have different objects, arrows and hom equivalences, it is sometimes possible to map objects to objects, arrows to arrows, and equivalences to equivalences, so long as identity arrows, composition, and the related laws are preserved. In this case we call such a mapping a "functor".

Natural transformations

While functors map between categories, natural transformations map between functors, along with a “naturality” condition that performing the transformation before or after utilizing the related functors has no effect on the result.

Isomorphisms

Two objects in a category are said to be isomorphic if there are arrows from one to the other and back, and the composition of these two arrows is equivalent to identity in both directions.

Note that since the type of objects and arrows is unknown in the general case, the “meaning” of isomorphism can vary from category to category, as we will see below in the case of Cat, the category of all categories.

Cartesian categories

Although objects are just abstract symbols, sometimes it’s possible to reveal additional structure about a category through the identification of arrows that give us details about the internal structure of some object.

One such structure is “cartesian products”. This identifies a product object in the category, in terms of introduction and elimination arrows, along with a universal property stating that all product-like objects in the category must be mappable (in terms of their being a product) to the object identified by the cartesian structure.

For example, I could pick tuples (a, b) in Haskell as a product, or some custom data type Tuple, or even a larger data structure (a, b, c), and all of these would be products for a and b. However, only tuples and Tuple are universal, in the sense that every other product has a mapping to them, but not vice versa. Further, the mapping between tuple and Tuple must be an isomorphism. This leaves me free to choose either as the product object for the Haskell category.

Product categories

Whereas cartesian categories tell us more about the internal structure of some product object in a category, product categories are a construction on top of some category, without adding anything to our knowledge of its internals. In particular, a product category is a category whose objects are pairs of objects from some other category, and whose arrows are pairs of the corresponding arrows between those two objects. Arrow equivalence, identity and composition follow similarly. Thus, every object in a product category is a product, and arrows must always "operate on products".

Bifunctors

If a functor maps from a product category to some other category (which could also be another product category, but doesn’t have to be), we call it a bifunctor. Another way to think of it is as a “functor of two arguments”.

Endofunctors

A functor that maps a category to itself (though it may map objects to different objects, etc) is called an endofunctor on that category.

The category of endofunctors

The category of endofunctors on some category has as objects every endofunctor, and as arrows natural transformations between these endofunctors. Here identity is the identity transformation, and composition is composition between natural transformations. We can designate the category of endofunctors using the name [C, C], for some category C.

Monoidal categories

A monoidal category reveals the structure of a tensor operation in the category, plus a special object, the unit of the tensor operation. Along with these come laws expressed in terms of isomorphisms between the results of the tensor:

tensor : C × C ⟶ C where "x ⨂ y" := (tensor (x, y));
I : C;

unit_left  {X} : I ⨂ X ≅ X;
unit_right {X} : X ⨂ I ≅ X;

tensor_assoc {X Y Z} : (X ⨂ Y) ⨂ Z ≅ X ⨂ (Y ⨂ Z)

Note that the same category may be monoidal in multiple different ways. Also, we needed product categories, since the tensor is a bifunctor from the product of some category C to itself.

We could also have specified the tensor in curried form, as a functor from C to the category of endofunctors on C:

tensor : C ⟶ [C, C]

However, this adds no information (the two forms are isomorphic), and just made some of the later proofs a bit more complicated.

Monoidal composition

The category of endofunctors on C is a monoidal category, taking the identity endofunctor as unit, and endofunctor composition as the tensor. It is monoidal in other ways too, but this is the structure of interest concerning monads.

Monoid objects

A monoid object in a monoidal category is an object in the category, plus a pair of arrows. Let's call the arrows mappend and mempty. These map from a tensor product of the monoid object with itself to the monoid object, and from the monoidal unit to the monoid object, along with preservation of the monoid laws in terms of arrow equivalences. In Coq it looks like this:

Context `{C : Category}.
Context `{@Monoidal C}.

(* Here [mon] is the monoid object. *)
Class Monoid (mon : C) := {
  mappend : mon ⨂ mon ~> mon;
  mempty : I ~> mon;

  mempty_left : (* I ⨂ mon ≈ mon *)
    mappend ∘ bimap mempty id ≈ to (@unit_left C _ mon);
  mempty_right : (* mon ⨂ I ≈ mon *)
    mappend ∘ bimap id mempty ≈ to (@unit_right C _ mon);

  (* (mon ⨂ mon) ⨂ mon ≈ mon ⨂ (mon ⨂ mon) *)
  mappend_assoc :
    mappend ∘ bimap mappend id
      ≈ mappend ∘ bimap id mappend ∘ to tensor_assoc
}.

Monads are monoid objects

Given all of the above, we can now state that every monad is a monoid object in the monoidal category of endofunctors, taking composition as the tensor product. return is the mempty natural transformation of that object, and join, the mappend natural transformation:

Context `{C : Category}.
Context `{M : C ⟶ C}.

Definition Endofunctors `(C : Category) := ([C, C]).

Program Definition Monoid_Monad
        (m : @Monoid (Endofunctors C) Composition_Monoidal M) : 
  Monad := {|
  ret  := transform[mempty[m]];
  join := transform[mappend[m]]
|}.

This makes no assumptions about the structure of the category C, other than what has been stated above, and no other aspects of category theory are needed. The proof, again, is here.

Note that there is another way to arrive at monads, from the adjunction of two functors, which I also have a proof for, but this can wait until another post.

Footnotes: [1] We say small here to avoid the paradox of Cat not containing itself.

Syndicated 2017-05-04 00:00:00 from Lost in Technopolis

3 May 2017 LaForge   » (Master)

Overhyped Docker missing the most basic features

I've always been extremely skeptical of suddenly emerging over-hyped technologies, particularly if they advertise to solve problems by adding yet another layer to systems that are already sufficiently complex themselves.

There are of course many issues with containers, ranging from replicated system libraries to the basic underlying statement that you're giving up on the system package manager to properly deal with dependencies.

I'm also highly skeptical of FOSS projects that are primarily driven by one (VC-funded?) company. Especially if their offering includes a so-called cloud service which they can stop operating at any given point in time, or (more realistically) first get everybody to use and then start charging for.

But well, despite all the bad things I had read about it over the years, one day in May 2017 I finally thought let's give it a try. My problem to solve as a test balloon is fairly simple.

My basic use case

The plan is to start OsmoSTP, the m3ua-testtool and the sua-testtool, which both connect to OsmoSTP. By running this setup inside containers on an internal network, we could then execute the entire testsuite e.g. during jenkins tests without IP address or port number conflicts. It could even run multiple times in parallel on one buildhost, verifying different patches as part of the continuous integration setup.

This application is not so complex. All it needs is three containers, an internal network and some connections in between. Should be a piece of cake, right?

But enter the world of buzzword-fueled web-4000.0 software-defined, virtualised and orchestrated container NFV + SDN voodoo: it turns out to be impossible, at least with the preferred tools they advertise.

Dockerfiles

The part that worked relatively easily was writing a few Dockerfiles to build the actual containers. All based on debian:jessie from the library.

As m3ua-testsuite is written in guile, and needs to build some guile plugin/extension, I had to actually include guile-2.0-dev and other packages in the container, making it a bit bloated.

I couldn't immediately find a nice example Dockerfile recipe that would allow me to build stuff from source outside of the container, and then install the resulting binaries into the container. This seems to be a somewhat weak spot, where more support/infrastructure would be helpful. I guess the idea is that you simply install applications via package feeds and apt-get. But I digress.
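For what it's worth, the kind of recipe I was looking for would be roughly the following (paths, package and binary names invented for the sketch; the binary is assumed to have been built on the host beforehand):

FROM debian:jessie
# binary built outside the container, copied in at image build time
COPY build/osmo-stp /usr/local/bin/osmo-stp
RUN apt-get update && apt-get install -y libsctp1 \
    && rm -rf /var/lib/apt/lists/*
CMD ["/usr/local/bin/osmo-stp"]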

So after some tinkering, I ended up with three docker containers:

  • one running OsmoSTP
  • one running m3ua-testtool
  • one running sua-testtool

I also managed to create an internal bridged network between the containers, so the containers could talk to one another.
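Creating such a bridge network is a single command; a sketch matching the addresses used below (the subnet is chosen so that it contains 172.18.0.200):

$ docker network create --driver bridge --subnet 172.18.0.0/16 sigtran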

However, I have to manually start each of the containers with ugly long command line arguments, such as docker run --network sigtran --ip 172.18.0.200 -it osmo-stp-master. This is of course sub-optimal, and what Docker Services + Stacks should resolve.

Services + Stacks

The idea seems good: A service defines how a given container is run, and a stack defines multiple containers and their relation to each other. So it should be simple to define a stack with three services, right?

Well, it turns out that it is not. Docker documents that you can configure a static ipv4_address [1] for each service/container, but it seems related configuration statements are simply silently ignored/discarded [2], [3], [4].
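For illustration, this is the kind of (syntactically valid) compose file that should express the setup - service and network names follow the example above; whether the address is actually honoured in stack mode is exactly what the linked issues dispute:

version: "3"
services:
  osmo-stp:
    image: osmo-stp-master
    networks:
      sigtran:
        ipv4_address: 172.18.0.200
networks:
  sigtran:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16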

This seems to be related to the fact that, for some strange reason, stacks can (at least in later versions of docker) only use overlay type networks, rather than the much simpler bridge networks. And while bridge networks appear to support static IP address allocation, overlay apparently doesn't.

I still have a hard time grasping that something that considers itself a serious product for production use (by a company with an estimated value of over a billion USD, not by a few hobbyists) has no support for running containers on static IP addresses. How many applications out there have I seen that require static IP address configuration? How much simpler do setups get if you don't have to rely on things like dynamic DNS updates (or DNS availability at all)?

So I'm stuck with having to manually configure the network between my containers, and manually starting them by clumsy shell scripts, rather than having a proper abstraction for all of that. Well done :/

Exposing Ports

Unrelated to all of the above: If you run some software inside containers, you will pretty soon want to expose some network services from containers. This should also be the most basic task on the planet.

However, it seems that the creators of docker live in the early 1980s, when only the TCP and UDP transport protocols existed. They seem to have missed that by the late 1990s to early 2000s, protocols like SCTP or DCCP had been invented.

But yet, in 2017, Docker chooses to support only TCP and UDP when exposing ports.

Now some of the readers may think 'who uses SCTP anyway'. I will give you a straight answer: Everyone who has a mobile phone uses SCTP. This is due to the fact that pretty much all the connections inside cellular networks (at least for 3G/4G networks, and in reality also for many 2G networks) are using SCTP as underlying transport protocol, from the radio access network into the core network. So every time you switch your phone on, or do anything with it, you are using SCTP. Not on your phone itself, but by all the systems that form the network that you're using. And with the drive to C-RAN, NFV, SDN and all the other buzzwords also appearing in the Cellular Telecom field, people should actually worry about it, if they want to be a part of the software stack that is used in future cellular telecom systems.

Summary

After spending the better part of a day to do something that seemed like the most basic use case for running three networked containers using Docker, I'm back to step one: most likely inventing some custom scripts based on unshare to run my three test programs in a separate network namespace for isolated test suite execution as part of a Jenkins CI setup :/
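Such a script can be sketched in a few lines of ip-netns(8) (names invented; the interface plumbing between namespaces is omitted):

# create a private network namespace and run the test setup inside it
$ ip netns add sigtran-test
$ ip netns exec sigtran-test ip link set lo up
$ ip netns exec sigtran-test osmo-stp &
$ ip netns exec sigtran-test m3ua-testtool
$ ip netns del sigtran-test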

It's also clear that Docker apparently doesn't care much about playing a role in the Cellular Telecom world, which is increasingly moving away from proprietary and hardware-based systems (like STPs) to virtualised, software-based systems.

[1]https://docs.docker.com/compose/compose-file/#ipv4address-ipv6address
[2]https://forums.docker.com/t/docker-swarm-1-13-static-ips-for-containers/28060
[3]https://github.com/moby/moby/issues/31860
[4]https://github.com/moby/moby/issues/24170

Syndicated 2017-05-02 22:00:00 from LaForge's home page

1 May 2017 mjg59   » (Master)

Intel's remote AMT vulnerability

Intel just announced a vulnerability in their Active Management Technology stack. Here's what we know so far.

Background

Intel chipsets for some years have included a Management Engine, a small microprocessor that runs independently of the main CPU and operating system. Various pieces of software run on the ME, ranging from code to handle media DRM to an implementation of a TPM. AMT is another piece of software running on the ME, albeit one that takes advantage of a wide range of ME features.

Active Management Technology

AMT is intended to provide IT departments with a means to manage client systems. When AMT is enabled, any packets sent to the machine's wired network port on port 16992 will be redirected to the ME and passed on to AMT - the OS never sees these packets. AMT provides a web UI that allows you to do things like reboot a machine, provide remote install media or even (if the OS is configured appropriately) get a remote console. Access to AMT requires a password - the implication of this vulnerability is that that password can be bypassed.

Remote management

AMT has two types of remote console: emulated serial and full graphical. The emulated serial console requires only that the operating system run a console on that serial port, while the graphical environment requires drivers on the OS side. However, an attacker who enables emulated serial support may be able to use that to configure grub to enable serial console. Remote graphical console seems to be problematic under Linux but some people claim to have it working, so an attacker would be able to interact with your graphical console as if you were physically present. Yes, this is terrifying.

Remote media

AMT supports providing an ISO remotely. In older versions of AMT (before 11.0) this was in the form of an emulated IDE controller. In 11.0 and later, this takes the form of an emulated USB device. The nice thing about the latter is that any image provided that way will probably be automounted if there's a logged in user, which probably means it's possible to use a malformed filesystem to get arbitrary code execution in the kernel. Fun!

The other part of the remote media is that systems will happily boot off it. An attacker can reboot a system into their own OS and examine drive contents at their leisure. This doesn't let them bypass disk encryption in a straightforward way[1], so you should probably enable that.

How bad is this

That depends. Unless you've explicitly enabled AMT at any point, you're probably fine. The drivers that allow local users to provision the system would require administrative rights to install, so as long as you don't have them installed then the only local users who can do anything are the ones who are admins anyway. If you do have it enabled, though…

How do I know if I have it enabled?

Yeah this is way more annoying than it should be. First of all, does your system even support AMT? AMT requires a few things:

1) A supported CPU
2) A supported chipset
3) Supported network hardware
4) The ME firmware to contain the AMT firmware

Merely having a "vPro" CPU and chipset isn't sufficient - your system vendor also needs to have licensed the AMT code. Under Linux, if lspci doesn't show a communication controller with "MEI" in the description, AMT isn't running and you're safe. If it does show an MEI controller, that still doesn't mean you're vulnerable - AMT may still not be provisioned. If you reboot you should see a brief firmware splash mentioning the ME. Hitting ctrl+p at this point should get you into a menu which should let you disable AMT.
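In shell terms the check boils down to something like this (the grep pattern is a guess at the device description; no output means no MEI device, i.e. AMT isn't running):

$ lspci | grep -i mei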

What do we not know?

We have zero information about the vulnerability, other than that it allows unauthenticated access to AMT. One big thing that's not clear at the moment is whether this affects all AMT setups, setups that are in Small Business Mode, or setups that are in Enterprise Mode. If the latter, the impact on individual end-users will be basically zero - Enterprise Mode involves a bunch of effort to configure and nobody's doing that for their home systems. If it affects all systems, or just systems in Small Business Mode, things are likely to be worse.

What should I do?

Make sure AMT is disabled. If it's your own computer, you should then have nothing else to worry about. If you're a Windows admin with untrusted users, you should also disable or uninstall LMS by following these instructions.

Does this mean every Intel system built since 2008 can be taken over by hackers?

No. Most Intel systems don't ship with AMT. Most Intel systems with AMT don't have it turned on.

Does this allow persistent compromise of the system?

Not in any novel way. An attacker could disable Secure Boot and install a backdoored bootloader, just as they could with physical access.

But isn't the ME a giant backdoor with arbitrary access to RAM?

Yes, but there's no indication that this vulnerability allows execution of arbitrary code on the ME - it looks like it's just (ha ha) an authentication bypass for AMT.

Is this a big deal anyway?

Yes. Fixing this requires a system firmware update in order to provide new ME firmware (including an updated copy of the AMT code). Many of the affected machines are no longer receiving firmware updates from their manufacturers, and so will probably never get a fix. Anyone who ever enables AMT on one of these devices will be vulnerable. That's ignoring the fact that firmware updates are rarely flagged as security critical (they don't generally come via Windows update), so even when updates are made available, users probably won't know about them or install them.

Avoiding this kind of thing in future

Users ought to have full control over what's running on their systems, including the ME. If a vendor is no longer providing updates then it should at least be possible for a sufficiently desperate user to pay someone else to do a firmware build with the appropriate fixes. Leaving firmware updates at the whims of hardware manufacturers who will only support systems for a fraction of their useful lifespan is inevitably going to end badly.

How certain are you about any of this?

Not hugely - the quality of public documentation on AMT isn't wonderful, and while I've spent some time playing with it (and related technologies) I'm not an expert. If anything above seems inaccurate, let me know and I'll fix it.

[1] Eh well. They could reboot into their own OS, modify your initramfs (because that's not signed even if you're using UEFI Secure Boot) such that it writes a copy of your disk passphrase to /boot before unlocking it, wait for you to type in your passphrase, reboot again and gain access. Sealing the encryption key to the TPM would avoid this.


Syndicated 2017-05-01 22:52:01 from Matthew Garrett

1 May 2017 pabs3   » (Master)

FLOSS Activities April 2017

Changes

Issues

Review

Administration

  • Debian systems: quiet a logrotate warning, investigate issue with DNSSEC and alioth, deploy fix on our first stretch buildd, restore alioth git repo after history rewrite, investigate iptables segfaults on buildd and investigate time issues on a NAS
  • Debian derivatives census: delete patches over 5 MiB, re-enable the service
  • Debian wiki: investigate some 403 errors, fix alioth KGB config, deploy theme changes, close a bogus bug report, ping 1 user with bouncing email, whitelist 9 email addresses and whitelist 2 domains
  • Debian QA: deploy my changes
  • Debian mentors: security upgrades and service restarts
  • Openmoko: debug mailing list issue, security upgrades and reboots

Communication

  • Invite Wazo to the Debian derivatives census
  • Welcome ubilinux, Wazo and Roopa Prabhu (of Cumulus Linux) to the Debian derivatives census
  • Discuss HP/ProLiant wiki page with HPE folks
  • Inform git history rewriter about the git mailmap feature

Sponsors

The libconfig-crontab-perl backports and pyvmomi issue were sponsored by my employer. All other work was done on a volunteer basis.

Syndicated 2017-04-30 22:56:02 from Advogato

30 Apr 2017 mjg59   » (Master)

Looking at the Netgear Arlo home IP camera

Another in the series of looking at the security of IoT type objects. This time I've gone for the Arlo network connected cameras produced by Netgear, specifically the stock Arlo base system with a single camera. The base station is based on a Broadcom 5358 SoC with an 802.11n radio, along with a single Broadcom gigabit ethernet interface. Other than it only having a single ethernet port, this looks pretty much like a standard Netgear router. There's a convenient unpopulated header on the board that turns out to be a serial console, so getting a shell is only a few minutes work.

Normal setup is straightforward. You plug the base station into a router, wait for all the lights to come on and then you visit arlo.netgear.com and follow the setup instructions - by this point the base station has connected to Netgear's cloud service and you're just associating it to your account. Security here is simple: you need to be coming from the same IP address as the Arlo. For most home users with NAT this works fine. I sat frustrated as it repeatedly failed to find any devices, before finally moving everything behind a backup router (my main network isn't NATted) for initial setup. Once you and the Arlo are on the same IP address, the site shows you the base station's serial number for confirmation and then you attach it to your account. Next step is adding cameras. Each base station is broadcasting an 802.11 network on the 2.4GHz spectrum. You connect a camera by pressing the sync button on the base station and then the sync button on the camera. The camera associates with the base station via WDS and now you're up and running.

This is the point where I get bored and stop following instructions, but if you're using a desktop browser (rather than using the mobile app) you appear to need Flash in order to actually see any of the camera footage. Bleah.

But back to the device itself. The first thing I traced was the initial device association. What I found was that once the device is associated with an account, it can't be attached to another account. This is good - I can't simply request that devices be rebound to my account from someone else's. Further, while the serial number is displayed to the user to disambiguate between devices, it doesn't seem to be what's used internally. Tracing the logon traffic from the base station shows it sending a long random device ID along with an authentication token. If you perform a factory reset, these values are regenerated. The device to account mapping seems to be based on this random device ID, which means that once the device is reset and bound to another account there's no way for the initial account owner to regain access (other than resetting it again and binding it back to their account). This is far better than many devices I've looked at.

Performing a factory reset also changes the WPA PSK for the camera network. Newsky Security discovered that doing so originally reset it to 12345678, which is, uh, suboptimal? That's been fixed in newer firmware, along with their discovery that the original random password choice was not terribly random.

All communication from the base station to the cloud seems to be over SSL, and everything validates certificates properly. This also seems to be true for client communication with the cloud service - camera footage is streamed back over port 443 as well.

Most of the functionality of the base station is provided by two daemons, xagent and vzdaemon. xagent appears to be responsible for registering the device with the cloud service, while vzdaemon handles the camera side of things (including motion detection). All of this is running as root, so in the event of any kind of vulnerability the entire platform is owned. For such a single purpose device this isn't really a big deal (the only sensitive data it has is the camera feed - if someone has access to that then root doesn't really buy them anything else). They're statically linked and stripped so I couldn't be bothered spending any significant amount of time digging into them. In any case, they don't expose any remotely accessible ports and only connect to services with verified SSL certificates. They're probably not a big risk.

Other than the dependence on Flash, there's nothing immediately concerning here. What is a little worrying is a family of daemons running on the device and listening to various high-numbered UDP ports. These appear to be provided by Broadcom and are a standard part of all their router platforms - they're intended for handling various bits of wireless authentication. It's not clear why they're listening on 0.0.0.0 rather than 127.0.0.1, and it's not obvious whether they're vulnerable (they mostly appear to receive packets from the driver itself, process them and then stick packets back into the kernel so who knows what's actually going on), but since you can't set one of these devices up in the first place without it being behind a NAT gateway it's unlikely to be of real concern to most users. On the other hand, the same daemons seem to be present on several Broadcom-based router platforms where they may end up being visible to the outside world. That's probably investigation for another day, though.

Overall: pretty solid, frustrating to set up if your network doesn't match their expectations, wouldn't have grave concerns over having it on an appropriately firewalled network.


Syndicated 2017-04-30 05:09:46 from Matthew Garrett

30 Apr 2017 mentifex   » (Master)

Mentifex on predictive textlike brain mechanism

The predictive-text-like brain mechanism mentioned in the article works, perhaps, because each word, as a concept, carries many associative tags to other concept-words frequently used in connection with the triggering word. A similar neural network of associative tags is at work in the Mind.Forth Strong AI, which has been ported to Strawberry Perl 5 and which you may download free of charge in order to study the Theory of Mind depicted in the diagram below, also available as an animated brain-mind GIF:
./^^^^^^^\..SEMANTIC MEMORY../^^^^^^^\
| Visual. | .. syntax ..... |Auditory |
| Memory..| .. /..\---------|-------\ |
| Channel.| . ( .. )function|Memory | |
| . . . . | .. \__/---/ . \ | . . . | |
| . /-----|---\ |flush\___/ | . . . | |
| . | . . | . | |vector | . | .word | |
| ._|_ .. | . v_v_____. | . | .stem | |
| / . \---|--/ . . . .\-|---|--/ .\ | |
| \___/ . | .\________/ | . | .\__/ | |
| percept | . concepts _V . | .. | .| |
| . . . . | . . . . . / . \-|----' .| |
| . . . . | . . . . .( . . )| ending| |
| . . . . | inflection\___/-|->/..\_| |
| . . . . | . . . . . . . . | .\__/.. |
Syntax generates thought from concepts.
AI Mind Maintainer jobs will be like working in a nuclear power plant control room.

29 Apr 2017 badvogato   » (Master)

Preface

Xiao Bei

Mr. Hu Lancheng said of this book that it was originally begun in Japanese and later rewritten in Chinese; no Japanese edition was ever seen. I take it to be a continuation of his Japanese-language 《自然学》 (The Study of Nature); partial drafts later circulated under the title 《革命要诗与学问》 (Revolution Requires Poetry and Learning), after which Mr. Hu accepted an invitation to Taiwan.

This was the first work he published in Taiwan. At the suggestion of Mr. Zhang Qiyun it was titled 《华学科学与哲学》 (Chinese Learning, Science and Philosophy); a decade or so later, when Zhu Tianwen was planning the Complete Works of Hu Lancheng, an edition appeared under the restored title 《革命要诗与学问》, supplemented with the two chapters 《机论》 and 《建国立极》. In the same period Mr. Hu also wrote two letters, 《致邓小平书》 (To Deng Xiaoping) and 《上蒋经国书》 (To Chiang Ching-kuo), addressed to the leaders on either side of the Strait - state-founding pronouncements of the same order.

Long ago Zigong said: "Take the palace wall as a figure. My wall reaches only to the shoulder, so one may look over it and see whatever is fine within the house. The Master's wall is many fathoms high: unless one finds the gate and enters, one sees nothing of the beauty of the ancestral temple or the splendor of its hundred officers. Those who find the gate are few indeed. Is it not fitting, what the Master said?" Mr. Hu of course cannot be compared with the Master, but by the same figure his heights and depths are not something so shallow a person as I can fully survey. As Mr. Chen Danqing put it, Hu Lancheng and Mu Xin were both first-rate masters of the Republican era.

As someone has said, the questions of the human world have no right or wrong, only differences of attainment. What Mr. Hu presents to us is an attainment. Many of his constructive opinions, and his flashes of insight in scholarship, are dismissed by some of today's mainstream scholars as fallacy or wild talk; but that is only because today's mainstream scholars live on another plane altogether. A good game of chess requires a worthy opponent. Mr. Hu is like a spread of river-and-lake water: though it may never all find its way to the sea, it can still moisten the land, like a spring breeze retouching the hills and streams.

What Mr. Hu speaks of cannot be argued out with knowledge; it can only be verified with one's life, and that demands deep reserves of lived experience and abundant worldly seasoning, not a mere accumulation of learning.

Of course Mr. Hu is not beyond criticism; in my view there is a great deal to criticize. But the critic must have the critic's grounding and the critic's spirit. The benevolent see benevolence and the wise see wisdom; if you have that grounding and that spirit, you must first yourself be benevolent and wise. The same holds, conversely, for those who praise him. For a man like Mr. Hu, simple praise or blame will not do. One elder seems to have hit the mark: once we saw the mountain as a mountain; now we see the mountain as not a mountain; in time we shall see the mountain as a mountain again.

Consider this book: it sums up Chinese learning as huaxue and sets it beside science and philosophy for discussion - what boldness, in an age so thoroughly Westernized. It gathers the essence of Mr. Hu's thought and scholarship in his last years.

He speaks of the orthodox line of world civilization and clarifies the distinction between East and West within it, composing a sequel to 《山河岁月》; through this reflection on civilization he sets out the ideas on modern politics and industrial institutions that had pressed upon his heart for decades.

In his later years, starting from the Yijing, Mr. Hu proposed five basic laws of great nature, reaching through to nature at its utmost. He was a close friend of the great mathematician Oka Kiyoshi and the great physicist Yukawa Hideki, and so could move from the one field to the other and back again. Ahead of all the intellectuals of the modern age, he broke the myth of science and the myth of democracy, and broke the vanity of religion as well.

To the end of his life Mr. Hu pursued an idealized politics. "Only the king founds the state, and carries it down the generations" - to rectify the name of politics was Confucius's lifelong ambition. The rites collapsed, the music decayed, and the rightness of heaven and earth was lost: such was the sickness of the Spring and Autumn age. Two thousand years on, a Spring and Autumn age has risen again. Mr. Hu wished single-handedly to pull the human world back from its errors, setting himself, it seemed, against the tide of the times, heedless that everyone under heaven would call him a fool. No wonder Eileen Chang, in those earlier days, ached for his parched mouth and dry lips.

Today only peddlers and porters hold much warm affection for him, precisely because theirs is the truthfulness of the common people.

Mr. Hu's life was grounded in politics, the so-called way of kingly governance of all under heaven, so writing was for him a minor path; rather, writing must glorify the state before the words carry substance. As I see it, he wrote political essays the way one writes poetry, and did scholarship the way one writes fiction: beside the point, and yet on the point. I read 《华学科学与哲学》 both as poetry and as fiction.

But the word "politics" hardly carries today what it carried then. All we speak of now is power and struggle; amid that murk, politics becomes a matter of victors crowned kings and losers branded bandits. The victor may swagger for a season, but can hardly set history's name right. How many affairs, ancient and modern, have turned to ash in an instant; what later generations remember, over and over, is the essence of our humanity, the classics of literature. Mr. Hu was grounded in politics yet stood above it, and so there is in him something for us to remember.

Beijing, June 2013

Excerpted from 《华学科学与哲学》 by Hu Lancheng.
View it in the Douban Reads store: https://read.douban.com/ebook/3311264/

26 Apr 2017 badvogato   » (Master)

Beg your pardon; transplanted from this source:
https://book.douban.com/reading/20063312/

The Bridge of Heaven (《天桥》), sample reading: Preface (Hsiung Shih-I)

A man who has earned his keep overseas for thirty years by selling English writing, once back in Hong Kong, where nine residents in ten are his compatriots, naturally cannot help wanting to take up the writing brush again and write something in Chinese. Thinking back thirty or forty years to when I made my living in China by selling Chinese writing, I had then no ambition or plan to claim a place in English or American letters. After I reached London, I happened to follow the advice of a friend at the University of London, Professor Allardyce Nicoll, and wrote the play Lady Precious Stream (《王宝川》) in English, and from then on everything went beyond expectation. At first the authorities of the stage all said its literary flavor ran too high ever to win a broad audience - in other words, it was not good business. They advised me that a man who could write such an English play should write novels instead, which the publishers would certainly welcome. Later the play was published by Methuen in London to excellent reviews, whereupon the People's National Theatre put it on the stage, where it won, against all expectation, great acclaim from its audiences and ran for three years without a break. Thus encouraged, I grew eager to try my hand, and prepared to write the novel The Bridge of Heaven.

While I was still composing it in my head, a good friend urged me strenuously that a man must make himself a "master" of some one line. If you write only plays, he said, you naturally count as a dramatist - whereupon I wrote The Professor from Peking (《大学教授》), The God of Wealth (《财神》), 《孟母三迁》 (Mencius' Mother Moves House), The Romance of the Western Chamber (《西厢记》) and other plays - but write novels as well and, far from becoming a "master", you turn into a band of irregulars, a cure-all purple pill, something like the mouse-dropping remedies of the Tongrentang pharmacy. And so many more years slipped by. In the end, partly because I could not withstand the encouragement - call it the enticement - of a publisher friend, and partly because I did after all want to open new paths for myself, reflecting that many great writers excel at poetry, drama and fiction alike and there was no reason I should not taste novel-writing for myself, I resolutely shut my door and set about building The Bridge of Heaven.

That The Bridge of Heaven has now turned from an English novel into a Chinese novel published in Hong Kong is likewise due to this old temperament of mine, the love of striking out on new roads. In former days I lived by my pen in China, selling writing for a dozen years and more - some classical, some vernacular. After I went to England I hardly ever picked up the writing brush, living entirely by writing in English, and twenty-odd years passed before I knew it. Now that I am in Hong Kong and the chance has come, I have naturally, almost without noticing, played Feng Fu stepping down from his cart once more - the man in Mencius who could not resist going back to wrestling tigers.

I first translated my own adaptation Lady Precious Stream into a Chinese stage play and published it; then, following my English version of The Romance of the Western Chamber, I prepared a revised Chinese text and published it in Hong Kong. Both plays were broadcast by Radio Hong Kong, and both were staged, meeting Hong Kong audiences at the Arts Festival. Last year I also wrote and published a social satire, 《梁上佳人》 (The Lady on the Beam), poking generous fun at Hong Kong's assorted public figures. On the stage, on television and on Radio Hong Kong it met with the greatest encouragement from Hong Kong's viewers and listeners, and later masters of the local film world adapted and rewrote it into Hong Kong's most popular cinematic form and put it on the screen - all of which persuaded me that I had not become an overseas Chinese wholly ignorant of Chinese. This year I have turned this English novel of mine into a Chinese one. Everyone knows that selling one's writing in Hong Kong will scarcely keep one fed. My three books Lady Precious Stream, The Romance of the Western Chamber and 《梁上佳人》 were published in Hong Kong without bringing me half a cent in royalties; yet I press on, bringing out book after book, in the hope that one day, if we all work together, people will see that killing the goose that lays the golden eggs is no shortcut to wealth, and literature will be set back upon its proper track.

When The Bridge of Heaven came out in England, the literary world was united in its praise. But what I hold the greatest honor is the regard of three men: first, the dedicatory poem by John Masefield, then Poet Laureate of England; second, the comments on The Bridge of Heaven that the great H. G. Wells set down in his own writing; and third, the poems sent me, after reading it, by Chen Yinke, professor of history at Tsinghua University. Masefield's poem does not lend itself to translation; Wells's comment runs as follows:

"I find Hsiung Shih-I's The Bridge of Heaven a novel more illuminating than any treatise-like report on present tendencies in China. Earlier he wrote Lady Precious Stream, to the delight of all London; but this book is drama of an altogether different kind - a complete, stirring, vivid picture of a great nation in the course of revolution." (See Wells, "A Contemporary Memoir", p. 84.)

Mr. Chen's poems include two quatrains. The first:

Overseas, Lin and Hsiung each command the stage;
who stands before Lu and who behind Wang is hard to weigh.
The old ways of the northern capital were never mine to know,
yet I love to hear The Bridge of Heaven talk of home.

The second:

Named among the immortals, though the eyes have lost their light;
the bond with this soil must wait upon a life to come.
Take up this scroll of yours, then, and let it carry you home -
boundless, at the sky's edge, the love of one's ancestral land.

And there is also a poem in seven-character regulated verse:

Deep drips the night clock; the world's clamor is stilled;
hearing the foreign script read out, a hundred feelings crowd.
Can the old country still dream the dream of Huaxu?
The Wangs and Xies of former days have long since lost their homes.
Writings from across the seas amuse sickness and decline;
news of the Divine Land contends with drum and pipe of war.
Across ten thousand miles of heaven and earth, lost between going and staying,
the poet weeps forever at the edge of the sky.

In these poems he writes "hear" in one place and "hear read" in another, never "read", because Mr. Chen had by then lost the sight of both eyes. "Overseas, Lin and Hsiung" refers to Lin Yutang, author of the English novel Moment in Peking (《京华烟云》). "The Wangs and Xies of former days have long since lost their homes" is there because The Bridge of Heaven touches on the coup of 1898: Mr. Chen's grandfather Chen Baozhen, governor of Hunan, and his father Chen Sanli, a secretary in the Board of Civil Appointments, were both stripped of office in that coup and barred from ever serving again - small wonder the old gentleman was overcome with feeling.

After The Bridge of Heaven appeared in England and America, translations promptly followed in French, German, Spanish, Swedish, Czech, Dutch and other languages. They circulated widely, but how well they were done I lack the breadth of languages to judge. What I truly never imagined was that in the end I should have to translate the book into Chinese myself. I had supposed that once I had translated the whole of it I would easily see which version, the English or the Chinese, was the less unsatisfactory. Comparing the two today, I discover instead that works of literature cannot be compared. Looked at one way, the English version has its inevitable flaws; looked at another, the Chinese version has faults enough of its own. I sincerely beg instruction from readers who command both literatures, above all from scholars deeply versed in the literary writing of these two languages.

While this novel was appearing day by day in 《星晚》 from New Year's Day, readers who cared for my work often wrote or telephoned the paper meaning to correct its errors. Most were not errors of mine: the readers had forgotten that the setting is the late Qing, when many places and official titles, even some ordinary terms, differed utterly from those of the early Republic. Even so I am heartily grateful to them, for it shows how seriously they take my work - as if to say that on white jade it is best there be no blemish at all. I count this the first thing that does me great honor.

Later many readers, and friends too, kept asking me why this Li Ta-tung appears nowhere in history. Some asked whether Li Ta-tung is Kang Youwei, or perhaps Tan Sitong; some even asked whether Li Ta-tung is Hsiung Shih-I himself!

I can only say here that Kang Youwei is Kang Youwei, Tan Sitong is Tan Sitong, Li Ta-tung is Li Ta-tung, and Hsiung Shih-I is Hsiung Shih-I. Li Ta-tung is the protagonist of the book; Kang Youwei and Tan Sitong are both mentioned in it more than once; Hsiung Shih-I is its author, and is never mentioned in it - his name appears only on the cover. Kang Youwei was born on the fifth day of the second month of the eighth year of Xianfeng (19 March 1858) and died on 31 March 1927 (the twenty-eighth day of the second month of the dingmao year); Tan Sitong was born on the thirteenth day of the second month of the fourth year of Tongzhi (10 March 1865) and died on the thirteenth day of the eighth month of the twenty-fourth year of Guangxu (28 September 1898); Li Ta-tung was born at the gengchen hour of the gengchen day of the gengchen month of the gengchen year - that is, the thirteenth day of the third month of the sixth year of Guangxu (21 April 1880) - making him twenty-two years younger than Kang Youwei and fifteen years younger than Tan Sitong. How the days fly! I still remember Ta-tung's birth perfectly clearly - for it I must bear full responsibility - and in the blink of an eye he is an old man past eighty!

Since readers care about the historical record and keep asking, let me give one general answer here. The Bridge of Heaven, as I wrote it, is a social satire set against an historical background. It is not formal history, nor does it set out to supply what the histories record too sketchily or leave out altogether. History prizes fact; fiction lives on imagination. A history that strays even slightly from fact loses its value; a novel that wants imagination is no good novel. Yet many readers study my novel as though it were history. That too is taking my work seriously, and it is hardly for me to argue with them. This is the second thing that does me great honor.

When I first set out to write this novel, I felt that Westerners neither knew nor understood the currents of China's last few decades, its modern history, or the thought and daily life of its people; so I meant to set the book against genuine history, and to work into it as many historical figures as I could, above all those foreigners know - Yuan Shikai, the Empress Dowager Cixi, the Guangxu Emperor, and the English missionary Timothy Richard. Who would have thought that after writing well over half of it I should discover by accident that I had gone badly wrong: all that labor was lost, and I had to set up a new stove and begin again almost from the start.

I used to feel that whatever the West published about China came from just two kinds of writers. One kind is the foreigner who has spent anywhere from a week or two in China to forty or fifty years, or a whole lifetime - merchants, retired officials, missionaries - known collectively as the old China hands. The other is the Chinese who can put a little something into English: the latter few and far between, the former everywhere. Their common purpose is to make China out a land of weird marvels and the Chinese a preposterous race, the better to coax money from foreign readers. So such books either carry plenty of illustrations of beheadings, bound feet, opium smoking and street beggars, or else talk of little but such things. Lately, too, a woman writer of the old brand, writing under the pen name of a rival in her own trade, has produced an autobiography in English which not only opens with a beheading but tells us her father kept six wives and that she herself was a concubine's daughter.

I cannot deny that what they relate rests on fact: they have their photographs for proof, and that writer has her own person for proof. But when I lecture in England and America I always tell my audiences that most Chinese nowadays do not smoke opium, do not bind feet, do not wear queues, do not keep concubines and do not behead people - and what use is it? In Holland I once saw with my own eyes, in a little street, a woman from Qingtian sitting with a square of cloth over her feet; a passer-by who gave her a little money might lift the cloth for a look at her bare three-inch "golden lotuses". On an Italian ship I met a German professor who had bought an opium pipe specially in Hong Kong to show people back home; and opium lamps remain to this day a best-selling Hong Kong souvenir. As for that woman writer, she too lectures far and wide, so that people may admire the bearing of a concubine's daughter!

So I resolved to write a novel that set store by historical fact and social background, and to present the Chinese as beings of sense and reason through and through - some wise and some foolish, some worthy and some not, exactly like the people of every other nation. For that I had to find two Westerners to put into it. I owned a volume in the Commercial Press's university series by Professor Chen Gonglu - the series committee numbered fifty-five of China's notable scholars, Cai Yuanpei, Jiang Menglin, Zhang Boling, Ma Yinchu, Feng Youlan, Zheng Zhenduo, Wang Shijie, Zhu Jiahua, Weng Wenhao, Gu Jiegang, Hu Shih and the rest, with scarcely half a name missing - his monumental Modern History of China. There I found Western missionaries to China: an Englishman, Timothy Richard, who truly loved China and truly was a good man, and an American, Young John Allen. In the tenth chapter of his first volume, on the reform movement, Professor Chen calls them enlightened men, devoted to China, of great help and great influence in its reform.

Very well: since such men existed, I made Timothy Richard the Western protagonist of the book, who helps the Chinese protagonist Li Ta-tung to study, to work and to save the nation, set against a standard narrow-minded missionary, Ma Kelao (马克劳). I then went to the China Inland Mission in London to ask after Timothy Richard's appearance and his life, without the least result. Later I happened to speak of him to Lady Hosie, who said that her father, Professor Soothill, formerly professor of Chinese at Oxford, had been Timothy Richard's good friend and had written a biography of him from the papers Richard left; it was out of print, but she sent me a copy at once. I read it with great care - and threw away more than three hundred pages of manuscript! Nor need I curse Professor Chen for leading people astray; I will only say that the Timothy Richard I afterwards drew - his character, the reform memorials he submitted to the Guangxu Emperor - rests on Soothill's biography, and not, as with Professor Chen, on shadows caught from the wind.

Having learned my lesson, I became especially careful about historical figures and about their words and deeds, never setting down a line without reliable grounds. Novel though this is, nothing touching the historical record may be fabricated at will, to leave readers with false impressions. Still, one man's learning is limited and his experience narrower yet; this book must fall short in many, many places, and I hope that readers who care for me, with their wide learning, will correct me generously.

26 Apr 2017 sye   » (Journeyer)

Foreigners Watching Peking Opera, and Other Matters (洋人看京戏及其他) 001



http://www.saohua.com/shuku/zhangailingquanji/001-37.htm

Peking Opera and other tales through the eyes of clueless onlookers
written by Eileen Chang, originally published in 1943

If we examine all things Chinese as foreigners watching Peking opera might, they can become fun and meaningful again. Bamboo clothes-lines and baby diapers hanging over the viewers' heads; glasses on the counter top filled with "longevity spirit of ginseng"; a radio in one corner playing Mei Lanfang's famed operatic voice, and in another an advertisement for a miracle ointment to cure and soothe contagious scabies and itching... that is what it means to live in the Chinese atmosphere: its multiplicity and poignancy, mysterious or too funny for words.

Many worldly young men loved China without any idea of what attracted them to her culture. Unconditional love is admirable enough; the only danger is that sooner or later, when reality bites, they are unprepared for its deadly touch, inhaling its full-blown coldness into their chests until their hearts gradually turn to icicles. We, unfortunately, live among the local Chinese populace, unlike other expatriates who safely worship the motherland from afar with awe and undying affection. So come and take a much closer look! Re-examine Chinese life through a clueless foreigner's glasses, as one listens to our Peking opera, with its big surprises and intrigues, and we may gain loving insights into what we otherwise never knew was there.

Whenever I have a conversation longer than three sentences, I cannot continue without mentioning Peking opera. And why is that, you may wonder? Because I don't make my living singing Peking opera, and so I am full of curiosity toward it. As for living a life, who wouldn't admit to holding only half the clues? I am particularly fond of using Peking opera to set a proper attitude toward life as the Chinese live it.

Those fair ladies of the Peking opera troupes who have played big roles on stage, when they learn that you like watching Peking opera, will smile and say: "So you know Peking opera; that is a sophisticated business all in itself. Setting each stage and scene with the proper costumes involves such subtleties and minute details that you could spend your whole life learning what it entails." Of course, I wouldn't have a clue if they wore the wrong costumes for a historical period, and if their tunes drifted off the score I wouldn't notice that either. I only love to sit in the front rows and immerse myself in the actors' movements on stage, letting myself be blown away by the colorful blue-and-gold painted faces under bulky armor, by long capes flowing up to show their red linings, by jade-green trousers flipping out their purple underlinings.

Opinions from outside observers are sometimes invaluable; why else, when American journalists interview some big shot, do they like to pick topics totally unrelated to the subject's professed expertise? Interviewing a female murder suspect, they want to know whether she is optimistic about how our world might end; interviewing a boxing champion, they ask whether he approves of adapting a Shakespearean play into a modern fashion show. Of course, they need to attract viewers, to make them laugh and feel good about themselves: "I know even more than these celebrities. Famous people can be dumber than me!" On the other hand, an outsider's outlook can be fresher and simpler, and worth uncovering.

In order not to take myself too seriously, let's talk about Peking opera within the spoken drama. The play Qiu Haitang (《秋海棠》) swept all of Shanghai, and the credit must go to the thick Peking-opera atmosphere of its story. Hard on the heels of Qiu Haitang's unprecedented success, five or six spoken dramas appeared at once advertising their Peking-opera interludes. From its birth to the present day, China's new realist drama has always stood opposed to Peking opera; yet the first spoken drama to reach deep into the populace won its following by borrowing from Peking opera - a phenomenon that is truly startling.


syndicated from nuniabiz.blogspot.com

Syndicated 2017-04-25 23:49:00 (Updated 2017-04-27 00:01:11) from badvogato

24 Apr 2017 badvogato   » (Master)

Howdy, how y'all doing?! God has risen. Amen.

23 Apr 2017 broonie   » (Journeyer)

Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn’t find any documentation at all about it on the internet and had to work it out for myself, I figured I’d put what I learned here. Hopefully this will help the next person trying to figure one out - or at least, by virtue of being wrong on the internet, I’ll be able to get someone who knows what they’re doing to tell me how the thing really works.

Bottom plate

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base of the plate. There’s also a metal plate with the bottom of the hand grip attached to it, held onto the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment, which (very conveniently) takes 6 AA batteries. The drive also provides power to the camera body when attached.

Bottom plate with battery compartment visible

On the back of the base of the camera there’s a button with a red LED next to it which illuminates slightly when the button is pressed (it’s visible in low light only). I’m not 100% sure what this is for; I’d have guessed a battery check if the light were easier to see.

Top of drive

On the top of the drive there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well), while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera: single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

Syndicated 2017-04-23 13:17:45 from Technicalities

23 Apr 2017 joolean   » (Journeyer)

gzochi

gzochi 0.11 is out. Enjoy it in good health.

The major innovation over the previous release is that the client side of the distributed storage engine now releases the locks it requests from the meta server. This wasn't easy to orchestrate, so I want to say a little bit about how it works.

Some context: The distributed storage engine is based on a paper by Tim Blackman and Jim Waldo, and it works a bit like a cache: The client requests an intentional lock (read or write) on a particular key from the server, and if the server grants the client's request, it serves up the value for the key along with a temporary lease - essentially a timeout. For the duration of the lease, the client is guaranteed that its lock intentions will be honored. If it's holding a read lock, no other clients can have a write lock; if it's got a write lock, no other clients can obtain a read or write lock. Within the client, the key is added to a transactional B+tree (a special instance of the in-memory storage engine) and game application threads can execute transactions that access or modify the data in the B+tree just as they would in a single-node configuration. When a transaction completes, new and modified values are transmitted back up to the meta server, but they also remain in the local B+tree for access by subsequent transactions. When the lease for a key expires - and the last active transaction using the affected key either commits or rolls back - its lock is released, and the client must petition the meta server to re-obtain the lock before it can do anything else with that key.
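
To make the lifecycle concrete, here's a minimal sketch in C of the client-side bookkeeping described above. The names (key_lease, can_release and so on) are hypothetical illustrations, not gzochi's actual API:

#include <stdbool.h>
#include <time.h>

typedef enum { LOCK_READ, LOCK_WRITE } lock_type;

typedef struct {
  lock_type type;          /* intention granted by the meta server */
  struct timespec expiry;  /* end of the lease */
  int active_txns;         /* transactions still using this key */
} key_lease;

/* A lock is released only once the lease has expired AND the last
   transaction using the key has committed or rolled back. */
static bool can_release (const key_lease *lease)
{
  struct timespec now;
  clock_gettime (CLOCK_MONOTONIC, &now);

  bool expired = now.tv_sec > lease->expiry.tv_sec
    || (now.tv_sec == lease->expiry.tv_sec
        && now.tv_nsec >= lease->expiry.tv_nsec);

  return expired && lease->active_txns == 0;
}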

The tricky parts happen at the edges of this lifecycle; that is, when a lock is obtained and when it is released. In both cases, the client's view of available data from the perspective of active transactions must change. When the client obtains a lock, it gains access to a new key-value pair, and when it releases a lock, it loses that access. These changes occur asynchronously with respect to transaction execution: The arrival of a message from the meta server notifies the client that a lock has been granted (or denied) and lock release is governed by a local timeout. It's tempting to try to implement these processes as additional transactions against the B+tree, such that when a new key is added or an expired key is removed, the modification to the B+tree occurs in a transaction executed alongside whatever other transactions are being processed at the time. Unfortunately, this can lead to contention and even deadlock, since adding or removing keys can force modifications to the structure of the B+tree itself, in the form of node splitting or merging. What to do, then, given that it's not acceptable that these "system" transactions fail, since they're responsible for maintaining the integrity of the local cache? You could adjust the deadlock resolution algorithm to always rule in favor of system transactions when it comes to choosing a transaction to mark for rollback, but since lock acquisition and release are relatively frequent, this just transfers the pain to "userland" transactions, which would in turn see an undue amount of contention and rollback.

The answer, I think, involves going "around" the transactional store. For newly arriving keys, this is straightforward: When a new key-value pair is transmitted from the meta server as part of a lock acquisition response, don't add it to the B+tree store transactionally; instead, add it to a non-transactional data structure like a hash table that'll serve as a cache. Transactions can consult the cache if they find that the key doesn't exist in the B+tree. The first transaction to modify the value can write it to the B+tree, and subsequent transactions will see this modified value since they check the B+tree before the cache.
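
In code, the read path might look something like the following - again a hypothetical sketch rather than gzochi's API, with opaque stand-ins for the two stores:

typedef struct btree btree;          /* transactional B+tree store */
typedef struct hashtable hashtable;  /* non-transactional cache */
typedef struct record record;

const record *btree_lookup (btree *, const char *);
const record *hashtable_lookup (hashtable *, const char *);

/* The B+tree takes precedence: once a transaction has written a key
   locally, later transactions see that value, not the cached copy
   received from the meta server. */
const record *
effective_read (btree *store, hashtable *incoming, const char *key)
{
  const record *local = btree_lookup (store, key);

  if (local != NULL)
    return local;

  return hashtable_lookup (incoming, key);
}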

Orchestrating the removal of expired keys is more challenging. Because the B+tree takes precedence over the cache of incoming values, it's critical that the B+tree reflect key expiration accurately, or else new modifications from elsewhere in the cluster may be ignored, as in the following pathological case:
  1. Transaction on node 1 obtains write lock on key A, value is stored in cache
  2. Transaction on node 1 modifies key A, writing new value to the B+tree
  3. Transaction on node 1 releases lock on key A
  4. Transaction on node 2 obtains write lock on key A, modifies it, and commits
  5. Transaction on node 2 releases lock on key A
  6. Transaction on node 1 obtains read lock on key A, new value stored in cache
  7. Transaction on node 1 attempts to read key A, but sees old value from B+tree

After some consideration, I borrowed a page from BigTable, and attacked the problem using the concept behind tombstone records. Every record written to either the B+tree or the incoming value cache - even a deleted record - is prefixed with a timestamp. To find the effective value for a key, both the B+tree and the cache must be consulted; the version with the most recent timestamp "wins." In most BigTable implementations (e.g., HBase and Cassandra) there's a scheduled, asynchronous "compaction" process that sweeps stale keys. I didn't want to run more threads, so I prevent stale keys from piling up by keeping a running list of released keys. Once that list's length exceeds a configurable threshold, the next transaction commit or rollback triggers a stop-the-world event in which no new transactions can be initiated, and a single thread sweeps any released keys that haven't since been refreshed. With a judiciously configured threshold, the resulting store performs well, since the sweep is quite fast when only a single transaction is active.
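
Sketched in the same hypothetical C as before, the resolution rule looks like this:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
  uint64_t timestamp;  /* when this version was written */
  bool deleted;        /* tombstone: the key is known to be absent */
  void *value;
} versioned_record;

/* Consult both stores; the version with the most recent timestamp
   wins, and a winning tombstone means "no value". */
const versioned_record *
resolve (const versioned_record *from_btree,
         const versioned_record *from_cache)
{
  const versioned_record *winner;

  if (from_btree == NULL)
    winner = from_cache;
  else if (from_cache == NULL)
    winner = from_btree;
  else
    winner = from_btree->timestamp >= from_cache->timestamp
      ? from_btree : from_cache;

  return (winner == NULL || winner->deleted) ? NULL : winner;
}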

This was a real head-scratcher, and I feel proud of having figured it out despite having skipped all database courses during my undergraduate coursework. Download my project and try it out for yourself!

22 Apr 2017 johnw   » (Master)

Putting lenses to work


I gave a talk a couple of weeks ago at BayHac 2017 on “Putting lenses to work”, to show in a practical context how we use lenses at my workplace. I specifically avoided any theory about lenses, or the complex types, or the many operators, to show that at its core, lens is a truly invaluable library.

The videos are now available on YouTube, and the slides for this talk are on GitHub.

The code in the slides is taken directly (using Emacs) from a test file in that same repository, Lenses.hs, to serve as a way of preserving helpful examples, and to make it easy to cargo-cult specific patterns into your code.

Syndicated 2017-04-22 00:00:00 from Lost in Technopolis

22 Apr 2017 MikeGTN   » (Journeyer)

Will Ashon - Strange Labyrinth

Most male mid-life crises follow similar and depressingly predictable patterns - they involve embarrassing encounters with much younger women, much faster cars and far more taxing physical pursuits than are ever strictly advisable. Then, somehow, they quietly slip into being either part of the subject's new, more insufferably identity - or perhaps simply disappearing, not to be mentioned by those who witness the manifold indignities. My own mid-life crises have been a little different - and in terms of lasting impact, somewhat more prosaic. If you discount my dash across the ocean to marry someone a continent away - who...

Syndicated 2017-04-21 18:04:00 from Lost::MikeGTN

19 Apr 2017 glyph   » (Master)

So You Want To Web A Twisted

As a rehearsal for our upcoming tutorial at PyCon, Creating And Consuming Modern Web Services with Twisted, Moshe Zadka and I are doing a LIVE STREAM WEBINAR. You know, like the kids do, with the video games and such.

As the webinar gods demand, there is an event site for it, and there will be a live stream.

This is a practice run, so expect “alpha” quality content. There will be an IRC channel for audience participation, and the price of admission is good feedback.

See you there!

Syndicated 2017-04-19 03:29:00 from Deciphering Glyph

18 Apr 2017 johnw   » (Master)

Submitting Haskell functions to Z3


Conal Elliott has been working for several years now on using categories, specifically cartesian closed categories, as a way to abstract Haskell functions at compile-time, so you can render the resulting “categorical term” into other categories.

Here’s an example Haskell function:

\x -> f x (g x)

And here’s its categorical rendering, just to give the flavor of the idea:

eval ∘ (f' △ g')

Where eval means uncurry ($), and f' and g' are the renderings of those two functions; the △ operator is (&&&). I’m not using the typical Haskell names for these, by the way, in order to convince myself not to “think in Haskell” when working with these terms, but rather I’m choosing whatever symbols I find most often used in the literature on category theory.

There are a few things to notice about these categorical terms:

  1. They must be point-free. There is no such thing as naming a term, only morphisms that use or produce objects. Hence Awodey calls category theory “the algebra of functions”.

  2. They quickly become very large and unreadable. All but the simplest terms are nearly impossible to understand just by looking at them. Think of it as the binary code for categories.

  3. Because they are just, in effect, chains of composition, without any name binding or scoping issue to consider, the nature of the computation is laid out in a very direct (albeit verbose) way, making rewrite rules available throughout the abstract term.

Although it seems a bit technical at first, the idea is quite simple: Discern the abstract, categorical meaning of a Haskell function, then realize that term in any other category that is cartesian (has products) and closed (has functions as objects, i.e., higher-order constructions). Nothing else needs to be known about the target category for the abstract term to have meaning there. That’s the beauty of using category theory as a universal language for expressing ideas: the meaning transports everywhere.

Here’s an equation meant for the solver, written in plain Haskell:

equation :: (Num a, Ord a) => a -> a -> Bool
equation x y =
    x < y &&
    y < 100 &&
    0 <= x - 3 + 7 * y &&
    (x == y || y + 20 == x + 30)

Here’s how I run the solver, using z3cat, which is built on top of Conal’s concat library:

mres <- liftIO $ runZ3 (ccc (uncurry (equation @Int))) $ do
    x <- mkFreshIntVar "x"
    y <- mkFreshIntVar "y"
    return $ PairE (PrimE x) (PrimE y)
case mres of
    Nothing  -> error "No solution found."
    Just sol -> putStrLn $ "Solution: " ++ show sol

And the result, also showing the equation submitted to Z3:

(let ((a!1 (ite (<= 0 (+ (- x!0 3) (* 7 y!1)))
                (ite (= x!0 y!1) true (= (+ y!1 20) (+ x!0 30)))
                false)))
  (ite (< x!0 y!1) (ite (< y!1 100) a!1 false) false))
Solution: [-8,2]

Now with one function, I have either a predicate function I can use in Haskell, or an input for Z3 to find arguments for which it is true!

In addition to using Conal’s work in Haskell, I’m also working on a Coq rendering of his idea, which I hope will give me a more principled way to extract Coq programs into Haskell, by way of their categorical representation.

Syndicated 2017-04-18 00:00:00 from Lost in Technopolis

3 May 2017 LaForge   » (Master)

OsmoDevCon 2017 Review

After the public user-oriented OsmoCon 2017, we also recently had the 6th incarnation of our annual contributors-only Osmocom Developer Conference: The OsmoDevCon 2017.

This is a much smaller group, typically about 20 people, and is limited to actual developers who have a past record of contributing to any of the many Osmocom projects.

We had a large number of presentations and discussions. In fact, so many that the schedule of talks extended from 10am to midnight on some days. While this is great, it also means that there was definitely too little time for more informal conversations, chatting or even actual work on code.

We also have such a wide range of topics and scope inside Osmocom that the traditional ad-hoc scheduling approach no longer seems to be working as it used to. Not everyone is interested in (or has time for) all the topics, so we should group them according to their topic/subject on a given day or half-day. This will enable people to attend only those days that are relevant to them, and spend the remaining days in an adjacent room hacking away on code.

It's sad that we only have OsmoDevCon once per year. Maybe that's actually also something to think about. Rather than having 4 days once per year, maybe have two weekends per year.

Always in motion the future is.

Syndicated 2017-05-02 22:00:00 from LaForge's home page

17 Apr 2017 olea   » (Master)

About OSL-UNIA

Out of sheer carelessness I haven't written here about the work we are doing at the UNIA Free Software Office (Oficina de Software Libre de UNIA) for the University of Almería. In part it's another of my stubborn acts of bravado, another attempt to push the social machinery toward progress. But I also genuinely believe it is a worthwhile activity: one more step toward the modernization of the University of Almería and toward broadening the knowledge and skills of students of computer science and other technical programs at this university. We also think our approach is fairly novel, although much more could be done.

In practice, the activities we are carrying out consist of:

  • curricular internships in companies
  • conferences

and as far as conferences go, we are on a magnificent streak:

and one or two more are under consideration.

None of this could happen without the commitment of the UNIA association and the support of the University of Almería itself.

Other lines of work we still need to develop are:

  • courses and workshops on open-source technologies;
  • encouraging students to take part in programs such as Google Summer of Code; as luck would have it, two software projects based in Almería take part as hosts: P2PSP and MRPT;
  • encouraging participation in university programming contests such as the UGR free-projects competition (certamen de proyectos libres de la UGR) and the university free software contest (concurso universitario de software libre);
  • encouraging course assignments, final degree projects and doctoral work to be carried out as open-source projects or, better yet, inside existing development communities.

If you would like to know more, or to take part, please come to our discussion subforum.

In short, we're working on it.

Syndicated 2017-04-16 22:00:00 from Ismael Olea

14 Apr 2017 badvogato   » (Master)

note

Selina Sarah Tayler (maiden name Peel, b. 1881, Chorlton) and her husband Bernard were missionaries in China; they were the parents of Gladys Yang (b. c. 1919, Peking), who, along with her husband, was a translator of Chinese literature. Selina died in 1970 on the Isle of Wight. One of her sisters, Annie Isabella Peel, who was my husband's paternal grandmother, had an ironmonger's shop in Stretford. I'm given to understand that there was a family connection to Robert Peel but, so far, I've drawn a blank.


http://www.nytimes.com/1985/08/11/nyregion/ida-pruitt-96-who-fostered-friendship-with-the-chinese.html

11 Apr 2017 mjg59   » (Master)

Disabling SSL validation in binary apps

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth
type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by
default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing 43e820 and 43e82c to 24030000 instead of 24030001 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
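
If you'd rather not poke at addresses in a hex editor, the same patch can be applied by searching for the instruction sequence itself, which sidesteps mapping virtual addresses to file offsets. The following is an illustrative sketch only - it assumes a big-endian MIPS binary and that the sequence occurs only where intended, so verify with objdump first and work on a copy:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (int argc, char **argv)
{
  /* The five instructions at 43e820-43e830, as big-endian bytes. */
  static const unsigned char pat[] = {
    0x24, 0x03, 0x00, 0x01,  /* li v1,1       (verifypeer = TRUE) */
    0xa0, 0x43, 0x01, 0x38,  /* sb v1,312(v0)                     */
    0x8f, 0xc2, 0x00, 0x18,  /* lw v0,24(s8)                      */
    0x24, 0x03, 0x00, 0x01,  /* li v1,1       (verifyhost = TRUE) */
    0xa0, 0x43, 0x01, 0x39   /* sb v1,313(v0)                     */
  };

  if (argc != 2)
    {
      fprintf (stderr, "usage: %s <binary>\n", argv[0]);
      return 1;
    }

  FILE *f = fopen (argv[1], "r+b");
  if (f == NULL) { perror ("fopen"); return 1; }

  fseek (f, 0, SEEK_END);
  long len = ftell (f);
  rewind (f);

  unsigned char *buf = malloc (len);
  if (buf == NULL || fread (buf, 1, len, f) != (size_t) len)
    return 1;

  for (long i = 0; i + (long) sizeof pat <= len; i++)
    if (memcmp (buf + i, pat, sizeof pat) == 0)
      {
        /* Flip the immediate of each li from 1 to 0 (ie, FALSE). */
        fseek (f, i + 3, SEEK_SET);
        fputc (0x00, f);
        fseek (f, i + 15, SEEK_SET);
        fputc (0x00, f);
        printf ("patched at file offset 0x%lx\n", i);
      }

  fclose (f);
  free (buf);
  return 0;
}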

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


Syndicated 2017-04-11 22:27:28 from Matthew Garrett

10 Apr 2017 MikeGTN   » (Journeyer)

Walking the Beam: An urban country ramble

It felt like a quite a while since I'd walked beside water... My journey to London had been sleepy and distracted. Relaxing didn't come easy this morning after a long and trying week, but as I made progress towards the start of my walk, I began to feel a little more alert. The routine was familiar from my attempts to walk the fringes of London: over to Liverpool Street, out to the suburbs, then a bus to the start of my walk - which had seemed to grow increasingly further from civilisation over the months of walking rivers. As I...

Syndicated 2017-04-01 22:04:00 from Lost::MikeGTN

3 May 2017 LaForge   » (Master)

Book on Practical GPL Compliance

My former gpl-violations.org colleague Armijn Hemel and Shane Coughlan (former coordinator of the FSFE Legal Network) have written a book on practical GPL compliance issues.

I've read through it (in the bath tub of course, what better place to read technical literature), and I can agree wholeheartedly with its contents. For those who have been involved in GPL compliance engineering there shouldn't be much that's new - but for the vast majority of developers out there, who have had little exposure to the bread-and-butter work of providing complete and corresponding source code, it makes an excellent introductory text.

The book focuses on compliance with GPLv2, which is probably not too surprising given that it's published by the Linux Foundation, and that Linux is GPLv2-licensed.

You can download an electronic copy of the book from https://www.linuxfoundation.org/news-media/research/practical-gpl-compliance

Given that the subject matter is Free Software, and that the book is written by long-time community members, I can't help noticing with some surprise that it is released under classic copyright, All Rights Reserved, with no freedoms granted to the user.

Considering the sensitive legal topics touched on, I can understand the authors' possible motivation not to permit derivative works. But then, there are still licenses such as CC-BY-ND which prevent derivative works but still permit users to make and distribute copies of the work itself. I've made that recommendation / request to Shane; let's see if they can arrange for some more freedom for their readers.

Syndicated 2017-05-01 22:00:00 from LaForge's home page
