Recent blog entries

25 Feb 2017 StevenRainwater   » (Master)

Trekonomics by Manu Saadia

Star Trek TOS Replicator (AKA food synthesizer)

I’m always interested in the future of economics and, in particular, in ways of adapting our world to deal with post-scarcity economics. Nearly any book or paper on post-scarcity economics, at one point or another, has to reference the most detailed known fictional example: Star Trek. So, when Manu Saadia’s new book, “Trekonomics: The Economics of Star Trek,” was published, it went on my reading list immediately.

If you’re not familiar with post-scarcity economics, it’s basically the future we’re headed toward, whether we like it or not. Industrialization, mass production, 3D printing, nanotechnology, automation, robots: all these things continually drive down the cost and scarcity of many goods and services. This is interesting because for the past few hundred years, all our economic models have been built around solving the so-called “economic problem” – that is, finding ways to allocate scarce resources to meet human needs and desires. The two favorite solutions to the economic problem, capitalism and communism/socialism, have developed quasi-religious ideological followings.

Pure capitalism relies on the collective action of everyone’s individual greed to allocate resources. Communism and socialism rely on central planning to allocate resources (the difference is that communism is the result of a revolution from capitalism, while socialism is an evolution of capitalism). In the real world, neither method has ever worked well in its pure form, though capitalism comes the closest. Attempts to rely solely on central planning have always failed unless some elements of individual freedom of action are incorporated. Likewise, attempts to rely solely on capitalism fail unless certain elements of central planning are incorporated (e.g., minimum wages, banking regulations). But however you mix the two, the goal is always to solve the economic problem of resource allocation. What would happen if that problem went away?

In the Star Trek universe the problem of resource allocation largely doesn’t exist. For the most part, anyone can obtain anything they want, any time they want, at no cost beyond the energy required to replicate it. Obviously the real world isn’t there yet, but we’re headed that way. In economics, a good that is non-rivalrous and non-excludable – one where supply and demand never come into play in setting its price – is called a public good. GPS is sometimes described as the first man-made global public good. It exists all over the Earth and anyone can use it at no cost. No one has to worry about how to allocate GPS service as a resource because there is never a shortage of it, and the very idea of supply versus demand is meaningless with regard to it. When many goods and services become as ubiquitous as GPS, what happens?

Obviously, the two primary systems we’ve relied on in the past, capitalism and communism/socialism, would also become meaningless at that point. Supply would tend toward infinity while cost, labor, and employment tend toward zero. Some kind of new economic system is needed to cope with a world like that, but what? Answering that question is what post-scarcity economics is all about.

Economists are just beginning to speculate on this sort of thing, but science fiction writers have pondered it for decades. Star Trek’s Federation of Planets is the most detailed and well-known example of a fictional post-scarcity economy. There is no money. No one is paid to work. Goods have effectively zero cost because they can be replicated at will by anyone. This sounds crazy at first to many people. Why would people work if they’re not paid? How could anything get done with no money? Understanding how such an economic system could function is what economists are after, and studying Star Trek’s model has proven to be a good starting point.

Star Trek TNG Replicator

Many people mistakenly think Star Trek portrays a communist or socialist government, but there are many clear examples showing that this is not the case. There is no prohibition against private property: the Picard family owns and operates a vineyard that produces fine wines. Anyone is as free to start a business enterprise as they are under a modern capitalist system; Joseph Sisko, for example, operates a restaurant in New Orleans. There is no prohibition against anyone having or using currency if they want to; it’s just unnecessary within the Federation itself. When dealing with alien races outside the Federation, various types of currency have been used.

But who runs the Federation economy? Starfleet itself relies on central planning in the same way any large organization does, but the Federation of Planets appears to do very little Federation-wide economic planning, as there is simply no need for it. However, there are a few things that can’t be replicated even in the Star Trek universe, such as dilithium crystals or certain medical compounds, and there are upper limits on production, such as the number of starships that can be built per year. So the government does have a few things to keep it busy. But otherwise, little central planning or control seems to be needed.

This brings us to Trekonomics. Unfortunately, you aren’t likely to learn as much as you might wish from this book. Despite the promising title, it’s mostly written from the point of view of someone who doesn’t really “get” the show, or at least is only interested in one particular incarnation, Star Trek: Deep Space Nine. The author goes to some lengths in the final chapters to point out that he thinks Star Trek, and all science fiction, is ultimately a waste of time. He believes space travel itself is pointless, and that even leaving the Earth to visit Mars is misguided. He’s only interested in the economic principles. Sadly, he never gets around to saying much about the economics.

The majority of the book is a collection of personal anecdotes and lengthy retellings and paraphrasings of various Star Trek episodes that shaped his economic thinking. Other than an introduction in which he describes post-scarcity in general, and a description of how Star Trek’s replicator effectively reduces the cost of goods to zero, there’s very little useful information. At the end of the book, you’ll know his favorite Star Trek characters, his favorite episodes, what he thinks of Elon Musk, where he got his first Isaac Asimov book, and a dozen other bits of useless trivia. But you won’t know much more about the actual economics of Star Trek than when you started.

However, I didn’t write all this just to tell you the book sucked. Rather, I’d like to point you to an alternative. Rick Webb has written a good-sized essay titled “The Economics of Star Trek” that does an excellent job of examining all the economic clues we can glean from Star Trek. He even speculates a bit on how such things could actually work in a realistic economic system, something Saadia doesn’t even attempt in Trekonomics.

And one last tidbit. You may want to check out one of the earliest predictions that the world is headed toward a post-scarcity system. In 1930, the economist John Maynard Keynes wrote a paper called “Economic Possibilities for our Grandchildren”. Saadia mentions the paper in Trekonomics but doesn’t really go anywhere with it. Keynes notes that while scarcity has long been the fundamental assumption of economics, by 2030 we may be facing a world in which wealth and automation have effectively solved the economic problem. He predicts that capitalism will get us there, but that it will be forced by technology to evolve into something else afterwards. He predicts that humans will have to adjust to a new lifestyle in which money is not important and the love of money will be viewed as a sickness. And finally, he predicts that economics will become a mundane field as a result.

The one flaw in his vision is that he assumed capitalism would continue following the path of classical Enlightenment liberalism, as it did in his time. If we allow economic inequality to grow rather than decline, his predictions will fail. It seems to me that capitalism has gone off the rails in that regard and is leading us toward disaster. We may still reach a utopian economic system like that of Star Trek, but it may have to be by another route, such as democratic socialism. Or maybe it’s not too late to reform capitalism. It will be interesting to find out.

Syndicated 2017-02-25 04:22:52 from Steevithak of the Internet

24 Feb 2017 LaForge   » (Master)

Manual testing of Linux Kernel GTP module

In May 2016 we got the GTP-U tunnel encapsulation/decapsulation module, developed by Pablo Neira, Andreas Schultz and myself, merged into the 4.8.0 mainline kernel.

During the second half of 2016, the code basically stayed untouched. In early 2017, several patch series by (at least) three authors have been published on the netdev mailing list for review and merge.

This raises the very valid question of how we test those (sometimes quite intrusive) changes. Setting up a complete cellular network with either GPRS/EGPRS or even UMTS/HSPA is possible using OsmoSGSN and related Osmocom components. But that's of course a luxury that not many Linux kernel networking hackers have, as it involves the availability of a supported GSM BTS or UMTS hNodeB. And even if that is available, there's still the issue of having a spectrum license, or a wired setup with coaxial cable.

So as part of the recent discussions on netdev, I tested and described a minimal test setup using libgtpnl, OpenGGSN and sgsnemu.

This setup will start a mobile station + SGSN emulator inside a Linux network namespace, which talks GTP-C to OpenGGSN on the host, as well as GTP-U to the Linux kernel GTP-U implementation.
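To make the plumbing concrete, here is a minimal sketch in Python (shelling out to iproute2) of the namespace setup just described. The namespace name, interface names, and addresses are illustrative assumptions on my part; the actual ggsn and sgsnemu command lines are on the wiki page linked below.

    import subprocess

    NETNS = "sgsn"                       # will host sgsnemu (MS + SGSN emulator)
    VETH_HOST, VETH_NS = "veth-ggsn", "veth-sgsn"

    def sh(*args):
        """Run a command, raising on failure so mistakes are loud."""
        subprocess.run(args, check=True)

    # 1. Create the namespace and a veth pair bridging it to the host.
    sh("ip", "netns", "add", NETNS)
    sh("ip", "link", "add", VETH_HOST, "type", "veth",
       "peer", "name", VETH_NS)
    sh("ip", "link", "set", VETH_NS, "netns", NETNS)

    # 2. Address both ends; GTP-C and GTP-U run over this link.
    sh("ip", "addr", "add", "172.16.0.1/24", "dev", VETH_HOST)
    sh("ip", "link", "set", VETH_HOST, "up")
    sh("ip", "netns", "exec", NETNS,
       "ip", "addr", "add", "172.16.0.2/24", "dev", VETH_NS)
    sh("ip", "netns", "exec", NETNS,
       "ip", "link", "set", VETH_NS, "up")

    # 3. Start OpenGGSN (kernel GTP-U via libgtpnl) on the host and
    #    sgsnemu inside the namespace; see the wiki for the command lines.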

In case you're interested, feel free to check the following wiki page: https://osmocom.org/projects/linux-kernel-gtp-u/wiki/Basic_Testing

This is of course just for manual testing, and for functional (not performance) testing only. It would be great if somebody would pick up on my recent mail containing some suggestions about an automatic regression testing setup for the kernel GTP-U code. I have way too many spare-time projects in desperate need of some attention to work on this myself. And unfortunately, none of the telecom operators (who are the ones benefiting most from a Free Software accelerated GTP-U implementation) seems to be interested in at least co-funding or otherwise contributing to this effort :/

Syndicated 2017-02-23 23:00:00 from LaForge's home page

24 Feb 2017 wingo   » (Master)

encyclopedia snabb and the case of the foreign drivers

Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to linux.conf.au in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

You can check that video on the youtube if the link above doesn't work; slides here.

Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

  1. Bundle a snapshot of the DPDK into Snabb itself.

  2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                       snabb    dpdk
  Code lines                           61K      583K
  Contributors (all-time)              60       370
  Contributors (since Jan 2016)        32       240
  Non-merge commits (since Jan 2016)   1.4K     3.2K

These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavyweight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

  • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that's running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

  • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we write a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI (see the first sketch after this list).

  • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

  • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise reintroduce this particular complexity into Snabb (the second sketch after this list contrasts the two layouts).

  • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

  • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

  • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

  • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.
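To make the impedance-matching point concrete, here is a minimal sketch using Python's ctypes as a stand-in for LuaJIT's FFI (the failure mode is analogous). The struct layout is declared on the binding side, and nothing verifies it against the C headers, so when the ABI shifts underneath a stale binding you get silently wrong values instead of a compile error:

    import ctypes

    libc = ctypes.CDLL("libc.so.6")

    class Timeval(ctypes.Structure):
        # Correct layout for struct timeval on x86-64 Linux.
        _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

    class StaleTimeval(ctypes.Structure):
        # A "stale binding": same size, but with the fields swapped, as
        # if the C definition had changed underneath us. ctypes accepts
        # it without complaint; the values are just silently wrong.
        _fields_ = [("tv_usec", ctypes.c_long), ("tv_sec", ctypes.c_long)]

    for struct_type in (Timeval, StaleTimeval):
        tv = struct_type()
        libc.gettimeofday(ctypes.byref(tv), None)
        print(struct_type.__name__, "tv_sec =", tv.tv_sec)

And to make the buffer-representation cost concrete, here is a toy contrast of the two packet layouts. The names are invented for illustration and don't correspond to the actual Snabb or DPDK structures; the point is just that with one linear buffer every access is trivial, while with chained segments every accessor has to walk the chain:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LinearPacket:              # Snabb-style: one flat buffer
        data: bytearray

        def byte(self, i: int) -> int:
            return self.data[i]      # O(1), no special cases

    @dataclass
    class Segment:                   # mbuf-style: chained segments
        data: bytearray
        next: Optional["Segment"] = None

    def chained_byte(seg: Segment, i: int) -> int:
        # Every access must account for segment boundaries.
        while i >= len(seg.data):
            i -= len(seg.data)
            if seg.next is None:
                raise IndexError("offset past end of chain")
            seg = seg.next
        return seg.data[i]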

So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.
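To give a feel for how small that core can be, here is a schematic poll-mode receive loop in Python pseudocode. The ring layout and the done flag (standing in for a descriptor-done bit) are invented for illustration, not taken from any particular data sheet:

    from dataclasses import dataclass

    RX_RING_SIZE = 1024

    @dataclass
    class RxDescriptor:
        done: bool = False    # set by the NIC when its DMA write completes
        buffer: bytes = b""

    class RxRing:
        """Toy model of a NIC receive descriptor ring."""
        def __init__(self):
            self.descriptors = [RxDescriptor() for _ in range(RX_RING_SIZE)]
            self.head = 0     # next descriptor the driver will inspect

        def poll(self, budget=64):
            """Harvest up to `budget` received packets, PMD-style."""
            packets = []
            while budget > 0:
                desc = self.descriptors[self.head]
                if not desc.done:         # nothing new from the NIC yet
                    break
                packets.append(desc.buffer)
                desc.done = False         # hand the slot back to the NIC
                self.head = (self.head + 1) % RX_RING_SIZE
                budget -= 1
            return packets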

Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.

Syndicated 2017-02-24 17:37:00 from wingolog

23 Feb 2017 fozbaca   » (Apprentice)

Found in an Atlanta urban garden


via Instagram http://ift.tt/2lN0Ym1

Syndicated 2017-02-23 02:47:25 from fozbaca.org

19 Feb 2017 fozbaca   » (Apprentice)

The California Raisins Puffy Stick-Ons from 1987


via Instagram http://ift.tt/2lXFCCX

Syndicated 2017-02-19 19:30:45 from fozbaca.org


17 Feb 2017 LaForge   » (Master)

Cellular re-broadcast over satellite

I've recently attended a seminar that (among other topics) also covered RF interference hunting. The speaker was talking about various real-world cases of RF interference and illustrating them in detail.

Of course everyone who has any interest in RF or cellular will know about the fundamental issues of radio frequency interference. For the most part, you have

  • cells of the same operator interfering with each other due to too frequent frequency re-use, adjacent channel interference, etc.
  • cells of different operators interfering with each other due to intermodulation products and the like
  • cells interfering with cable TV, terrestrial TV
  • DECT interfering with cells
  • cells or microwave links interfering with SAT-TV reception
  • all types of general EMC problems

But what the speaker of this seminar covered was actually a cellular base-station being re-broadcast all over Europe via a commercial satellite (!).

It is a well-known fact that most satellites in the sky are basically just "bent pipes", i.e. they consist of an RF receiver on one frequency, a mixer to shift the frequency, and a power amplifier. So basically whatever is sent up on one frequency to the satellite gets re-transmitted back down to earth on another frequency. This is abused in "satellite hijacking" or "transponder hijacking", which has been covered for decades in various publications.

Ok, but how does cellular relate to this? Well, apparently some people are running VSAT terminals (bi-directional satellite terminals) with improperly shielded or broken cables/connectors. In that case, the RF emitted from a nearby cellular base station leaks into that cable, and will get amplified + up-converted by the block up-converter of that VSAT terminal.

The bent-pipe satellite subsequently picks this signal up and re-transmits it all over its coverage area!

I've tried to find some public documents about this, and there's surprisingly little public information about the phenomenon.

However, I could find a slide set from SES, presented at a Satellite Interference Reduction Group meeting: Identifying Rebroadcast (GSM)

It describes a surprisingly manual and low-tech approach to hunting down the source of the interference by using an old Nokia net-monitor phone to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were already open source projects such as airprobe that could have done the job based on sampled IF data. And I'm not even starting to consider proprietary tools.

It should be relatively simple to build an SDR application that you can tune to a given satellite transponder, and which would then look for any GSM/UMTS/LTE carrier within its spectrum and dump the carriers' identities in a fully automatic way.
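As a sketch of what that automation could look like, here is a rough Python/numpy pass that scans a captured transponder IF recording for GSM-like carriers by looking for ~200 kHz-wide power bumps. The file name, sample rate, and threshold are assumptions; actually extracting MCC/MNC/LAC/CID from the candidate carriers would then be a job for a GSM decoder such as airprobe:

    import numpy as np

    SAMPLE_RATE = 10e6        # complex sample rate of the IF capture
    GSM_CHANNEL_BW = 200e3    # GSM carrier spacing
    FFT_SIZE = 4096

    iq = np.fromfile("transponder_if.cfile", dtype=np.complex64)

    # Average the power spectrum over many FFT frames to smooth out
    # the bursty TDMA structure of GSM.
    frames = len(iq) // FFT_SIZE
    spectrum = np.zeros(FFT_SIZE)
    for i in range(frames):
        chunk = iq[i * FFT_SIZE:(i + 1) * FFT_SIZE]
        spectrum += np.abs(np.fft.fftshift(np.fft.fft(chunk))) ** 2
    spectrum /= frames

    # Collapse FFT bins into GSM-channel-sized buckets and flag any
    # bucket well above the noise floor.
    bins_per_chan = int(GSM_CHANNEL_BW / (SAMPLE_RATE / FFT_SIZE))
    noise_floor = np.median(spectrum)
    freqs = np.linspace(-SAMPLE_RATE / 2, SAMPLE_RATE / 2, FFT_SIZE)
    for start in range(0, FFT_SIZE - bins_per_chan, bins_per_chan):
        power = spectrum[start:start + bins_per_chan].mean()
        if power > 10 * noise_floor:   # ~10 dB above the floor
            print("candidate carrier near %.0f kHz offset" %
                  (freqs[start] / 1e3))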

But then, maybe this kind of interference doesn't actually happen often enough to justify such a development...

Syndicated 2017-02-15 23:00:00 from LaForge's home page

15 Feb 2017 fozbaca   » (Apprentice)

Gordon Ramsay Challenges Amateur Cook to Keep Up with Him | Bon Appetit