Massively-Distributed Real-time Video Broadcasting

Posted 9 Mar 2009 at 16:37 UTC (updated 10 Mar 2009 at 15:20 UTC) by lkcl

The British Broadcasting Corporation has issued a request for contributions to an open standard for the distribution of audio and video, covering both offline and real-time broadcasting. Their plan is effectively to act as the mediator between box manufacturers and content producers, with themselves as one of the content producers, but definitely not as set-top box manufacturers.

Challenges include the assumption that it is reasonable to expect ISPs to install cacheing boxes on their premises, and the assumption that "downloading" - especially at high speed - is "the way to go". There is also, yet again, the risk of some idiot content producers trying to bolt DRM onto an open standard.

This article will provide some answers to these tricky issues, and they're not all "technical" answers. For the most part, the solutions are psychological, and take comfort in the fact that most users are ordinary people not interested in blatant copyright theft; they just want to watch stuff. Ultimately, content producers are simply going to have to get used to the fact that they must trust people.

The Issues

There are a ton of issues to consider, but these are some of the big ones:

  • 1) Actual real-time distribution. Nobody has ever used the Internet for massive real-time A/V distribution on the scale proposed by the BBC! There is plenty of copying and distribution of video content going on, but it is most definitely not real-time.
  • 2) DRM is fundamentally incompatible with broadcasting. This is a simple mathematical fact, and content providers are just going to have to accept it. If people cannot get content easily and at a fair price, they will simply find other ways.
  • 3) Massive differences in upload and download speeds. There is an assumption that all people want to do is "consume", "consume", "consume". Unfortunately, that assumption means that most ISPs provide cheap downstream bandwidth over cheap infrastructure, with meagre upload speeds in return.
  • 4) ISP Throttling. Many ISPs not only place a monthly limit on bandwidth but also place a daily limit on their customers as well. Any algorithm used must be able to cope with deliberate drop-outs and interference from the ISP.
  • 5) Cacheing proxies. Who's going to pay for them! The BBC has made an assumption that ISPs will pay large amounts of money for cacheing proxy servers, in order to reduce the load on their infrastructure. ISPs have made the assumption that not all of their customers will be in "consume", "consume", "consume" mode, all of the time, and with Real-Time video set to become a reality, ISPs are in for a nasty shock when it comes to their lazy pricing assumptions.
  • In particular, the infrastructure on which 3G data services have been laid was never designed to cope with more than "phone calls". When faced with the choice of supplying 100 customers with 3k/second audio, or one 3G data customer with 300k/second just so they can watch some seedy TV programme or download some bloated, proprietary-format (microsoft) document, it should be bluntly obvious which way the 3G cell providers will decide.

Ultimately, then, these boil down into two key issues: making sure that there is no "technical" attempted DRM in the standard (it simply won't work - fact), and making sure that the real-time data distribution can cope, moment to moment, with dramatic changes in levels of bandwidth.


The Failure of DRM

The past ten years have shown DRM to be the failure it was expected to be. Anyone could have worked that out, but it was necessary to let the content pushers work it out for themselves. People do not care about quality - they often don't even notice. 60-inch televisions with utterly terrible MPEG compression artefacts, and people believe they're getting HDTV and think it's great! Video tapes with abominable sound and lines scrolling across the picture, and people still watched them. So one of the main justifications for DRM - protecting higher-quality pictures - is based on the false premise that people actually care.

But DRM is primarily a failure because the more people you distribute to, the higher the mathematical probability that one or more of them will break the DRM. Let's assume that the probability of the DRM being broken by any one individual is 1 in 1,000, and that there are 1,000 people. The 0.999 probability of the DRM surviving one individual, multiplied together 1,000 times, is 0.999^1000 = 0.3676954. So the probability of the DRM being broken is 63%. Funny, that. Now let's assume that the probability is 1 in 1,000,000, due to some massive disincentive such as life imprisonment (or the death penalty), and that there are 1,000,000 people to whom the content is distributed. 0.999999 to the power of a million is 0.367879. How strange - that's 1/e, the inverse of the mathematical constant e (2.71828...).
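The convergence on 1 - 1/e is easy to check for yourself (the 1-in-n probability is, of course, purely an illustrative assumption):

```python
# Probability that at least one of n recipients breaks the DRM, when each
# recipient independently breaks it with probability 1/n.
def p_broken(n: int) -> float:
    return 1.0 - (1.0 - 1.0 / n) ** n

print(f"{p_broken(1_000):.6f}")      # 0.632305
print(f"{p_broken(1_000_000):.6f}")  # 0.632121 - converging on 1 - 1/e
```

However harsh the disincentive (however small 1/n), the break probability stays pinned near 63% as long as the audience scales to match.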

So no matter how you play it, when you're distributing to millions or hundreds of millions of people, the application of DRM is simply a provable mathematical failure, even when you assume ridiculously large disincentives to breaking it.

The solution, therefore, is simple: content providers are just going to have to trust people - and rely on the fact that most of them are very, very dumb (which is a big help). En masse, people simply don't care whether the content is good quality or where it comes from - they just want to watch it, participate, and be able to talk to their friends about it later.

It's a strange thing to expect - content providers trusting people. But ultimately, the content providers have no choice. If the method of distribution is easy, people will use it. If a distribution method does not exist, they will make one up. If the path of least resistance is blocked, another one will be created.

Content providers can either work with this simple reality, or not. Content providers can take advantage of this simple reality, or not.

Real-time Content Distribution

No versatile, unpatented, unencumbered, open real-time audio-video distribution standards exist - nor, in fact, do the algorithms. There are plenty of compression algorithms (some of which are patent-free), and there are a few proprietary real-time video systems (such as Adobe Flash Media Server).

However, having video and audio compression isn't enough, in the face of massive changes in the bandwidth capabilities of the clients.

So a radical rethink is needed about how the content should be distributed. Perhaps unsurprisingly, cooperative distribution (also known as peer-to-peer distribution) immediately springs to mind. Fortunately, given that DRM is not an issue (because it cannot be made to work), peer-to-peer cooperative distribution techniques can be considered. In place of DRM, Digital Signatures can be utilised to great effect, unequivocally marking the content as belonging to the Copyright Holders and thus providing a legal basis for action against blatant and widespread copyright violations, should they occur.

Key to the success of a massively-distributed video distribution algorithm will be "layered prioritisation": incremental differences to frames at increasing quality, all distributed simultaneously.

All packets must be subdivided into UDP requests using peer-to-peer query technology (such as Kademlia). The basic principle is to make an MD5 or SHA hash of a unique key (such as the programme name, the type of information required, plus the time signature on the A/V frame) and then "broadcast" that out to all the nearest-neighbour nodes. If the neighbours do not have the information cached, they forward the query on to their neighbours, and so on.
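A minimal sketch of deriving such a query key, using SHA-1 (Kademlia's usual 160-bit keyspace); the field layout here is illustrative, not part of any standard:

```python
import hashlib

def dht_key(programme: str, info_type: str, time_sig: int) -> bytes:
    # 160-bit Kademlia-style key built from programme name, the type of
    # information required, and the time signature on the A/V frame.
    material = f"{programme}|{info_type}|{time_sig}".encode("utf-8")
    return hashlib.sha1(material).digest()

key = dht_key("bbc-one", "audio-mono", 1236614220)
print(key.hex())  # 40 hex digits = 160 bits, spread evenly over the keyspace
```

Because the hash spreads keys uniformly, consecutive frames of the same programme land on different parts of the network, which naturally balances the query load.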

At each level of prioritisation a separate network of peer-to-peer nodes must be maintained, with queries only being made and distributed for data at the specific "priority" level.

The absolute top priority will be the audio component: a mono soundtrack as the first priority and, like FM radio, a stereo soundtrack - a "diff" of the Left and Right channels - as the second. Additional tracks providing ambient sound (or Dolby Surround) can also be considered.
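A sketch of that FM-style sum/difference split, using integer samples so the round trip is exact (the function names are illustrative):

```python
# Top-priority layer: the mono sum (L+R). Second layer: the stereo
# "diff" (L-R). Sum and difference always have the same parity, so the
# integer decode below is exact.
def encode_ms(left, right):
    mid = [l + r for l, r in zip(left, right)]    # mono layer
    side = [l - r for l, r in zip(left, right)]   # stereo diff layer
    return mid, side

def decode_ms(mid, side):
    left = [(m + s) // 2 for m, s in zip(mid, side)]
    right = [(m - s) // 2 for m, s in zip(mid, side)]
    return left, right

left, right = [1000, -250], [100, 300]
mid, side = encode_ms(left, right)
assert decode_ms(mid, side) == (left, right)
# A mono-only receiver plays the mid layer and never fetches side at all.
```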

The second priority will be a 5fps 180x120 picture, with a "full" frame sent every second (or two) and "update" frames sent thereafter. This will provide the absolute bare-minimum viewing experience, one that would work even over utterly unreliable and very slow connections. As little as 10k per second should be enough to get audio and video.

The third priority will be the alternate frames on the 180x120 picture, thus requiring double the bandwidth, and also providing 10fps.

The fourth priority will be to expand the picture size by approximately 40% (the square root of 2), thus requiring - again - approximately double the bandwidth. This will be done by a precisely-specified transformation of the 180x120 picture data, which need not be visually accurate because it will be corrected by the application of the "diff" at the 240x160 level. The transformation / expansion process should be easily performed using integer arithmetic, assembly-level instructions or SIMD / MMX instructions: as might be surmised already, there are going to be quite a few of these transformations.
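As a sketch of the kind of integer-only expansion meant here - nearest-neighbour is a stand-in for whatever exact transform the standard would pin down - assuming a flat luma plane:

```python
def upscale(pixels, w, h, new_w, new_h):
    # Nearest-neighbour expansion in pure integer arithmetic: each output
    # coordinate maps back to a source pixel via x * w // new_w.  The
    # result is only a prediction - the next layer's "diff" corrects it.
    out = []
    for y in range(new_h):
        sy = y * h // new_h
        row = pixels[sy * w:(sy + 1) * w]
        out.extend(row[x * w // new_w] for x in range(new_w))
    return out

small = list(range(180 * 120))            # dummy 180x120 luma plane
big = upscale(small, 180, 120, 240, 160)  # one step up the priority ladder
assert len(big) == 240 * 160
```

The `//` divisions compile down to the kind of integer operations the article calls for; a SIMD implementation would process whole rows of source pixels at a time.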

Each layer of prioritisation should be targeted to increase the required bandwidth by between 50 and 100%, with 50% preferred. In this way, the "upload" link of ADSL subscribers can be utilised to exponentially distribute at least the minimum acceptable quality picture WITHOUT overloading ISPs or burdening them with ill-advisable costs. If, however, an ISP decides to put in "cacheing", all that is needed is a computer which participates in the "watching" of a particular programme: given that every single node in the network is also a "cache", the computer on the ISP's premises automatically becomes part of the cache.

The peer-to-peer infrastructure will make an effort to detect its nearest neighbours by analysing the IP address range, the use of broadcast UDP traffic, traceroute ICMP traffic, and packet response times. Priority must be given to making queries to the nearest neighbours.
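A sketch of the latency-based ordering, with simulated round-trip times standing in for real UDP probes (the addresses and timings are invented):

```python
import random

# Simulated base RTTs in milliseconds: two LAN-ish peers, one far away.
RTT_BASE = {"192.168.1.9": 1.0, "10.0.0.2": 2.0, "203.0.113.7": 45.0}

def probe_rtt(peer: str, samples: int = 3) -> float:
    # Placeholder for a real UDP echo probe: take the best of a few
    # jittery samples, as a real implementation would.
    return min(RTT_BASE[peer] + random.uniform(0.0, 0.5) for _ in range(samples))

peers = ["203.0.113.7", "10.0.0.2", "192.168.1.9"]
nearest_first = sorted(peers, key=probe_rtt)
print(nearest_first)  # queries go to the lowest-latency neighbours first
```

The same ordering could additionally weight peers sharing an IP prefix, per the address-range analysis mentioned above.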

Additionally, participants in the distribution of the channel should support UDP broadcast - not just UDP point-to-point and TCP point-to-point - as a method of distribution, at all levels. Participants should be prepared to receive - and cache - UDP-broadcast channel fragments. In this way, where ISPs and/or network administrators do not block UDP broadcasts, the distribution of channel content will be much less network-intensive.

Content distributors will need to do the most processing. The creation of so many diffs of picture frames and frame sizes will be enormously CPU-intensive. Fortunately, however, there is only one location where that work need be done: at the source! All other participants purely "ferry" that traffic; the only place it is actually reconstructed is for display on the end-user's screen.

The whole system will be self-adjusting to suit the network, in real time (not the other way around, as the BBC has requested). As the underlying infrastructure improves, with fibre-optic cables being brought to people's homes, the whole system will "uprate" automatically. The addition of "cacheing" technology at ISP sites is likewise a "nice improvement" rather than an absolute prerequisite. And users will be able to specify the bandwidth allocation that they are prepared to tolerate, with an immediately obvious trade-off in picture quality: they can decide whether to accept it, or pay for additional bandwidth.

What is particularly nice is that the slower connections will still be helped by the faster ones: the slower end-user connections simply drop out of the sharing process, neither sending nor receiving data that the connection cannot cope with, and so are never adversely affected.

Offline content and delayed viewing

Offline content (downloading at higher quality) and delayed viewing - pausing - are automatic features of this system's design. The viewer's station simply begins participating in the "higher rate" peer-to-peer networks. It will receive data at a much slower rate than real-time viewing would require, but it will receive it nonetheless.

Nodes which have already viewed the higher-rate channel data will be able to participate in the peer-to-peer distribution to these slower off-line systems, thus reducing the burden on the distribution infrastructure.

All in all, the entire system fits to the maximum capacity of the overall network and still provides viewers with what they want, in the format that they want it, for an amount that they are prepared to pay.

Channel Guides

Channel guides are not covered by this specification, as perfectly good technology already exists that can be leveraged (HTML, REST, JSONRPC, XMLRPC, RSS, XML, etc.). Free Software technology already exists in the form of the ineptly-named "Democracy Player", since renamed to the marginally better Miro Player.

Miro demonstrates that it is not necessary to have massive proprietary infrastructure to provide channel guides or distribute content. Whilst it would be nice to also publish channel guides over peer-to-peer technology, advocating such is not the primary focus of this article.

Network RPC systems

Getting RPC right is hard. There is enough to consider in the above without having to make life all the more complex by reinventing an RPC mechanism - especially one which is going to have to transfer real-time data.

There is one particular technology which has all of the required features that make it perfect for use in this situation - including being able to make function calls over UDP broadcast traffic: DCE/RPC. The FreeDCE reference implementation is both BSD-licensed and LGPL-licensed, and its binary libraries come in at under 1MB, which makes it reasonable enough to put onto embedded systems.

DCE/RPC had a reputation as a "heavyweight" at the time it was put together; however, many free software RPC mechanisms now completely dwarf the FreeDCE implementation in size yet provide none of the same benefits, placing an extra burden on the developers who use them.

Packets are broken down into "Protocol Data Units" (PDUs), with PDU headers being approximately 24 bytes in size. For real-time transmission of audio and video streams, care should be taken to ensure that each data fragment does not push the size of the PDU beyond a reasonable limit.

DNS UDP packets are specifically limited to around 500 bytes, to ensure that they are never fragmented. Many people believe that an MTU of around 1500 is reasonable, but this MTU size is chosen by ISPs endeavouring to squeeze out the maximum bandwidth possible in an attempt to "sell speed". A much more sensible MTU of 800 gives far better latency, allowing VoIP and other real-time traffic a much better chance of interleaving with over-bloated HTTP and other "downloads" - but with the downside, for the ISPs, that the IP and UDP packet headers now take up a sizeable fraction of the traffic, making it harder to "compete" on "speed". Thus everyone gets dumbed down to the lowest common denominator, and they still wonder why VoIP is rubbish.

The maximum MTU and PDU size needs to be very carefully chosen, as it will need to work not only across 3G networks but also across Satellite Broadband. Fragmentation of UDP packets across 3G and Satellite would not be in the slightest bit amusing. It therefore makes sense to follow DNS's lead and pick the absolute minimum MTU size, despite the overhead involved - at the very least for the UDP-based RPC traffic. The TCP RPC traffic could consider a larger MTU and PDU size.
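The trade-off is easy to quantify: IPv4 plus UDP headers cost a fixed 28 bytes per packet, while the time to clock one full packet onto a slow link grows with the MTU. A quick sketch (the 256kbit/s link rate is an illustrative figure):

```python
IP_UDP_HEADERS = 28   # IPv4 (20 bytes) + UDP (8 bytes), per packet

def overhead_fraction(mtu: int) -> float:
    # Share of each packet spent on headers rather than payload.
    return IP_UDP_HEADERS / mtu

def serialisation_ms(mtu: int, kbit_per_s: int) -> float:
    # Milliseconds to clock one full-size packet onto the link - the
    # time a VoIP packet can be stuck waiting behind a "download" packet.
    return mtu * 8 / kbit_per_s

for mtu in (1500, 800, 512):
    print(f"MTU {mtu}: {overhead_fraction(mtu):.1%} headers, "
          f"{serialisation_ms(mtu, 256):.1f}ms behind each packet at 256kbit/s")
```

At MTU 1500 a real-time packet can wait nearly 47ms behind a single download packet; at MTU 800 the wait halves to 25ms, at the cost of header overhead rising from roughly 2% to 3.5%.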

Luckily, the designers of DCE/RPC thought of everything, and included MTU size negotiation as part of DCE/RPC session establishment!

The actual protocol, therefore, could break packets down into sections as small as 128 bytes, with the RPC functions designed to receive data as an array of these sections, and multiple function calls made (fitting within the negotiated MTU size) to retrieve the data of an entire frame of, e.g., 1024 bytes [whatever].
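A sketch of that subdivision, reusing the approximate 24-byte PDU header figure from above (the remaining numbers are purely illustrative):

```python
SECTION = 128        # smallest unit a frame is subdivided into
PDU_HEADER = 24      # approximate DCE/RPC PDU header size

def sections(frame: bytes):
    # Cut one A/V frame into 128-byte sections (last one may be short).
    return [frame[i:i + SECTION] for i in range(0, len(frame), SECTION)]

def pack_pdus(secs, mtu: int):
    # Greedily fit whole sections into PDUs that stay within the
    # negotiated MTU, so no PDU is ever fragmented by the network.
    per_pdu = max(1, (mtu - PDU_HEADER) // SECTION)
    return [secs[i:i + per_pdu] for i in range(0, len(secs), per_pdu)]

frame = bytes(1024)                        # a 1024-byte frame, as above
pdus = pack_pdus(sections(frame), mtu=512)
print(len(pdus))   # 8 sections, 3 per 512-byte PDU -> 3 PDUs (3 calls)
```

Renegotiating a larger MTU (for the TCP transport, say) changes only `per_pdu`; the section array the RPC functions receive stays the same.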

Thanks again to the design of DCE/RPC, such complex subdivisions of the data can be attempted with very little additional coding or programming effort, as the hard work is done by the very efficient NDR marshalling format.


Conclusions

Real-time distribution of audio-visual data is being requested by the BBC on an unprecedented scale: existing technology, whether proprietary or open, simply is not designed to cope - if it exists at all. The MPEG standards were never designed to cope with such a dynamic range of requirements: they were designed to work within fixed-bandwidth point-to-point links. The H.263 standards were never designed with such distribution requirements in mind: although the existing video-conferencing standards can cope with changes in bandwidth, they simply were not designed for transmission to multiple recipients.

Content producers, thanks to a lack of technical awareness, have tried to interfere by imposing control mechanisms on top of distribution channels, and their attempts have - fortunately - been shown to fail. Content producers need to try the novel solution of trusting the end-users, making their money from being part of the "easiest channel" (many pirates strip the adverts out of the content that they distribute, and they definitely don't distribute the "official literature" - often containing adverts and other revenue-generating opportunities - with their copied material).

Content needs to be peer-to-peer distributed and cached by every node participating in the channel as "simultaneous upgradeable differences", with each upgrade to the picture quality being optional to each individual receiver station. In this way, not only can ISPs provide cacheing by simply "watching" a channel, but also the whole network automatically adjusts to take advantage of whatever bandwidth and capabilities are available, and - in particular - individual stations only participate in the sending and receiving of traffic that their bandwidth allocation can cope with (including ISP "cacheing" stations!)

Content needs to be digitally-signed using public-key cryptography, with the private key solely and exclusively available to the Content Producer, and overall checksums applied to ensure that content is correctly distributed throughout the network. In this way, it can be made clear to recipients that they are receiving material that is unequivocally the Copyright of the Content Producer.
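The checksum half of this is a few lines; the public-key signature over the final digest (made with the Content Producer's private key, via a proper cryptographic library) is only indicated in a comment here:

```python
import hashlib

def fragment_checksums(fragments):
    # Per-fragment SHA-256 digests let every relaying node verify what it
    # caches and forwards; the overall digest covers the whole programme.
    per_frag = [hashlib.sha256(f).hexdigest() for f in fragments]
    overall = hashlib.sha256("".join(per_frag).encode()).hexdigest()
    # In a real deployment, `overall` is what the Content Producer signs
    # with their private key, so any node can verify the origin with the
    # corresponding public key.  Omitted in this sketch.
    return per_frag, overall

_, good = fragment_checksums([b"frame-0", b"frame-1", b"frame-2"])
_, tampered = fragment_checksums([b"frame-0", b"FRAME-1", b"frame-2"])
assert good != tampered   # any altered fragment changes the overall digest
```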

"Offline" content is an automatic byproduct of the design of the protocol, as is "delayed viewing". Participants and recipients of content up to a particular point will assist in reducing the load on the servers; likewise, viewers who wish to "download" higher-quality content for "off-line" viewing can participate in the exchange of the faster, higher-quality stream upgrades.

DCE/RPC is a good framework for the provision of this real-time audio/visual data transfer, as it has the capabilities required to deal with the complexity of the interaction, freeing up the implementors from the burden of rolling their own RPC mechanism. Additional transports can be added (such as HTTP proxying - e.g. ncacn_http) to the underlying DCE/RPC framework without impacting the design of the actual audio/visual data transfer. Additionally, authentication and verification mechanisms can be added - again, transparently as far as the application is concerned.

Luke, when do you decide to call it a quit?, posted 9 Mar 2009 at 20:33 UTC by badvogato » (Master)

like this Director at the bottom of US DHS directorship. Here is his resignation letter.

Is his reason good enough for you and me? It is hard for us to find motivations to do what we believe is right and to burden ourselves with powers unassigned.

I wonder if Rob is the starfish or the spiderman himself. which one would you bet?

p.s., posted 9 Mar 2009 at 21:15 UTC by lkcl » (Master)

... if nothing else, you simply distribute the content in all format sizes simultaneously, and, without attempting anything "clever" such as performing diffs on video frames, allow the devices to automatically switch the selection of the appropriate format size, depending on detected available bandwidth. so, for the first 25 frames, an individual device downloads the first "full" picture followed by the 24 updates in 760x520 but, on discovering that that format takes more than one second of real-time to download, drops down to reading the next second of data - the next 25 frames - at the 640x480 picture size, from a different set of peers.
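a rough sketch of that switching rule (the format sizes, the one-second grouping and the thresholds are all illustrative):

```python
# each one-second group of frames is timed; if fetching it took longer
# than a second of real time, drop down a size - and with plenty of
# headroom, try the next size up again.
FORMATS = [(760, 520), (640, 480), (320, 240), (180, 120)]

def next_format(current: int, fetch_seconds: float) -> int:
    if fetch_seconds > 1.0 and current < len(FORMATS) - 1:
        return current + 1        # couldn't keep up in real time
    if fetch_seconds < 0.5 and current > 0:
        return current - 1        # bandwidth to spare: upgrade
    return current

fmt = 0                                    # start at 760x520
fmt = next_format(fmt, fetch_seconds=1.4)  # 1.4s for 1s of video: too slow
print(FORMATS[fmt])                        # falls back to (640, 480)
```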

p2p-next, posted 10 Mar 2009 at 20:21 UTC by lkcl » (Master)

p2p-next - e.u. funded

free software video streaming projects, posted 10 Mar 2009 at 20:26 UTC by lkcl » (Master)

directory of p2p video streaming projects

Tribler / SwarmPlayer, posted 10 Mar 2009 at 21:02 UTC by lkcl » (Master)


Distribustream, posted 10 Mar 2009 at 21:38 UTC by lkcl » (Master)

distribustream - includes an RFC (oh no! it's JSON-formatted data ARGH they used text-based RPC but at least they used a standard RPC mechanism rather than rolling their own whew). it's in ruby but there does exist a c client apparently.

Graded degradation of MPEG, posted 16 Mar 2009 at 19:51 UTC by lkcl » (Master)

apparently, it turns out that MPEG is capable of encapsulating "graded" signals, such that you can just .... not add increasingly high quality components.


the trouble is that at the time MPEG was designed, the cost in hardware and software of cutting up a stream in this way was prohibitive.

it's time for that to be revisited.

Why can't you broadcast encrypted stuff?, posted 21 Mar 2009 at 20:06 UTC by DeepNorth » (Journeyer)

Re: 2) DRM is mutually exclusively incompatible with broadcasting. This is a simple mathematical fact, and content providers are just going to have to accept it. If people cannot get content easily and at a fair price, they simply find other ways.

Unless I missed a meeting, it's dead simple using vanilla PKI:

encr BBCPrK DC | CloudDS | decr BBCPuK | Display


* DC = whatever digital content, say a TV show.

* BBC = British Broadcasting Corporation

* BBCPrK = BBC Private Key

* BBCPuK = BBC Public Key

* encr = PKI Encryption program

* decr = PKI Decryption program

* | = pipe symbol -- left side stream in, right side stream out

* CloudDS = Cloud Distributor -- something that sends and receives stuff across the network cloud from one to n.

* Display = some 'sink' for the digital content -- TV, monitor, disk, etc.

encr BBCPrK DC | CloudDS | decr BBCPuK | Display

Below is an actual pipeline that mimics the above:

env | b64 -e | more | b64 -d

The above uses 'env', which has meaning under a number of Operating Systems and dumps a bunch of strings to stdout. It also uses a base64 utility (with the encode and decode switches faking the asymmetric keys). It takes the 'env' stream, encodes it, sends it through something, then decodes it and displays it on the console.

You need to have a base64 program with that syntax, but fortunately I wrote one about eight years ago when I realized I would be making this post.
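For anyone without that home-grown b64, Python's standard base64 module mimics the same round trip (a canned list of strings stands in for `env`, and base64 only fakes the asymmetric encrypt/decrypt pair, exactly as in the shell pipeline):

```python
import base64

# the 'env' stand-in: a couple of environment-style strings
stream = "\n".join(["HOME=/home/user", "LANG=en_GB.UTF-8"]).encode()

encoded = base64.b64encode(stream)    # the 'encr' side of the pipe
decoded = base64.b64decode(encoded)   # the 'decr' side of the pipe
assert decoded == stream              # the round trip is lossless
print(decoded.decode())
```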

Oops -- not quite *that* simple, posted 22 Mar 2009 at 00:55 UTC by DeepNorth » (Journeyer)

Sorry about that -- the above, of course, requires that the 'public key' is only known at the receiving end. It also presumes, obviously, that some sort of optimization allows caches along the way to have access to the unencrypted stream or a suitable intermediate form. It's not quite 'dead' simple, but it is doable.

I have to note here that I find the whole concept of DRM and treacherous computing scary. The fact that infrastructures are being built to create digital masters and digital slaves is entirely contrary to any 'enlightened wishes' of the sovereign population. If people understood what genie was being let out of the bottle, they would not stand idly by while this happens. Many years ago, Richard Stallman wrote a short story, "The Right to Read":

Here is a quote from the above short story:

"you could go to prison for many years for letting someone else read your books"

I am an advocate of freely available maximum strength encryption. I think that, going forward, public access to strong encryption and other privacy measures will be necessary to keep the freedom we have left.

As soon as I started to put in bits above to make a practical method of DRM for broadcast, I thought 'wait a minute' ... let them think that it *is* hard to do, if that is what the weasels will try to do with it.

Strangely enough, as can sometimes happen, I thought of a really interesting notion whilst designing that pipeline. I am going to see if there is a way to make a quick hack as a 'proof of concept'. If I do, I will describe it here.

only at the receiving end...., posted 28 Mar 2009 at 21:28 UTC by lkcl » (Master)

Sorry about that -- the above, of course, requires that the 'public key' is only known at the receiving end.

precisely. and everyone involved has to know the "public key", which means that you are doing "verification" - not "encryption".

in other words, content from the "BBC" can be verified as coming from the BBC.

to encrypt, you have to .. well... encrypt! and that means that you must distribute the decrypt secret key.

and _that_ means that the more people that you distribute to, the higher the probability that someone will distribute the key _without_ permission. or, if that's difficult (note: obscurity != difficult), they will simply write a system which decrypts, stores and re-forwards as either real-time or offline content.

simple mathematics. no matter what measures are in place, the fact that content is being distributed means that the probability of a crack increases to the point where it is inevitable.

water always flows downhill....

the other way is this: you transmit using a single encryption key per person. well _that_ really works.

1) broadcast distribution per person requires MILLIONS of customised encrypts. that's insanely expensive.

2) the numbers game makes it pointless. one person decides to anonymously decrypt and republish, everyone else gets the re-published content instead of the encrypted content.

no matter _what_ you consider, DRM loses, loses, loses, every single which way it is considered.

mathematically, DRM loses.

algorithmically, DRM loses.

psychologically, DRM loses.

sociologically, DRM loses.

politically, DRM loses.

practically-speaking, DRM loses.

the only way in which DRM isn't yet dead is from the perspective of its status in law. having been bought into existence by the cartels, DRM is slowly being eroded, and will eventually lose there, too.

Well, I don't want to argue because I sure HATE DRM, posted 30 Mar 2009 at 22:35 UTC by DeepNorth » (Journeyer)

Re: 1) broadcast distribution per person requires MILLIONS of customised encrypts. that's insanely expensive.

There are ways around this. However, it is not quite 'dead simple' and you are correct that it can get 'leaky'. As I said above, I am reluctant to make too much about this or go into it because it *can* be done and I sure would not want to be the one providing the method. I wonder -- is there any way we could figure out *all* the ways of doing it, patent them and find some legal mechanism to make it impossible for anyone to use them? By rights, you are supposed to make patents available for licensing on some reasonable basis. However, is it possible to break the pathways to doing this into plausible separate patents, sell them off to different entities and then have the individual licenses punitive PLUS keep some essential and difficult step(s) as trade secrets? Oh my. I truly do hate those guys.

Re: 2) the numbers game makes it pointless. one person decides to anonymously decrypt and republish, everyone else gets the re-published content instead of the encrypted content.

This will always be some kind of weakness, but do not underestimate the nefarious nature of Windows Vista, Treacherous Computing and the willingness of these guys to use the inverse of Rubber-hose cryptanalysis - Rubber-hose cryptography.

I looked about for a discussion of this and did not find one easily. Cryptanalysis, yes, but not cryptography. Rubber-hose cryptanalysis, or a 'rubber hose attack', is to use a weakness of the key holder to get the key from them. The traditional image is to beat the secret out of them with a rubber hose, hence the name. It is coercive. Its counterpart would be a coercive method of forcing individuals to *not* reveal things. The DMCA is, in my opinion, a variant of this. We have the technology to do certain attacks, but the police will jail you if you are even involved in making them available.

It is getting increasingly more difficult to beat these things.

In the case, for instance, of broadcast content (I am not revealing anything here because it is already in place), each computer is a sealed environment: although it will decrypt for play to the screen, it will not do anything else with that stream. This is the essence of treacherous computing. YOU do not have ultimate admin rights to the computer, so you just get 'access denied' errors or cannot even see the files you wish to hack into. The manufacturer (Microsoft and cronies) owns the 'root' keys for the system. The system negotiates an SSL-like session with the broadcaster using a key you can't get to and moves the stream into the treacherous platform.

It can't be eavesdropped in transit, and only the treacherous computer is able to access the file -- and even then it will only access it with the permissions allowed. So, for instance (I don't know if they have gone this far yet, but it is a no-brainer to implement), the file you have on your machine is 'play by reference only'. You have a reference on the machine which you cannot even read. You use the reference to play the content (wherever or however it is stored on the file system) for as long as the broadcaster allows. It could, for instance, come with a play-once-and-die restriction, or even a play-a-sample-then-ask-for-money restriction.

Short of taking a film of the output, there is no way you can access the stream. Remember, this is treacherous computing -- the display device itself uses SSL-type encryption to get its signal from the computer. There is no way to intercept, and no way to get into these devices to gain access to a decrypted signal.

Now, in the fullness of time, just as high-quality photocopiers are already not allowed to print money, video cameras will be made so that they simply will not film a DRM image from a treacherous device. The whole image will be watermarked so you can't distort it out and the camera will inspect the watermark and honor the instructions of the other treacherous device.

As I say, the above is already in place or I wouldn't describe it.

Up until now, in the DRM arms race, we have had two advantages that have allowed all the security schemes to fall:

1) We have unrestricted access to the same tools.

2) Attack is cheaper than defense.

Item one is being closed down as we speak via treacherous computing. Since we will *no longer* have access to the tools, attack, though cheaper than defense, will be prohibitively expensive. The forces of evil are lining this up so that NOBODY will be able to mount any kind of attack at all.

Although there are always going to be exploitable weaknesses in encrypted systems, these weaknesses can not be (effectively) exploited using a pair of tweezers and a crystal radio set.

I chose the crystal radio because there is arguably always going to be some kind of side-channel attack. However, even the cleverest of us, even if they could cobble together a device to actually capture the usable stream, would have no effective way to defeat the entire system. I am a resourceful guy with a lot of time in as a programmer and all-round tinkerer, but I would be left behind before I could even capture the stream. You still have to take that stream and convince something to store it somewhere; then you have to convince the system that the thing you have is playable; and then you have to convince the system that you have the right to play it. As your weapons of choice go from cannons to guns to arrows to wooden clubs, and the defensive walls go from nothing to sticks to brick and then to reinforced concrete plated with armor, your probability of successful attack goes down.

Is it really sensible to think that a watermarked stream with digitally signed segments is going to come across perfectly intact, and that you are somehow going to be able to sign it appropriately? I really don't think so. I think that if we were on an equal footing with our systems (if we had full admin access to similar hardware), we would have a chance. However, treacherous computing and rubber-hose defenses such as DRM, together with the ominously rising police state, increasingly make digital weapons development and deployment impossible.
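
To make the 'digitally signed segments' point concrete, here is a toy sketch (illustrative only; real systems use public-key signatures and hardware-held keys, and every name here is hypothetical) showing why a stream captured and re-stored by an attacker fails verification: a single altered segment breaks its MAC.

```python
import hashlib
import hmac

SIGNING_KEY = b"broadcaster-secret"  # hypothetical; a real system keeps this in hardware

def sign_segments(segments):
    """Attach a MAC to each stream segment."""
    return [(seg, hmac.new(SIGNING_KEY, seg, hashlib.sha256).digest())
            for seg in segments]

def verify(signed):
    """The player accepts the stream only if every segment's MAC checks out."""
    return all(hmac.compare_digest(
                   hmac.new(SIGNING_KEY, seg, hashlib.sha256).digest(), tag)
               for seg, tag in signed)

stream = sign_segments([b"segment-0", b"segment-1"])
print(verify(stream))                     # True: untouched stream verifies
stream[1] = (b"segment-X", stream[1][1])  # attacker substitutes one segment
print(verify(stream))                     # False: the old tag no longer matches
```

Without the signing key, an attacker who has somehow captured and modified the stream cannot produce tags the player will accept, which is the asymmetry the comment describes.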

I am going to make another post to briefly speak about what I believe is a possible solution.

There oughta be a law..., posted 31 Mar 2009 at 11:53 UTC by DeepNorth » (Journeyer)

The solution to the problems created by current DRM mechanisms is something akin to 'n' laws of robotics PLUS. I will leave the PLUS for another day.

What we need is to incorporate into the very DNA of our legal system the notion that, at some level, devices must NEVER be able to work against their masters; that all individual humans may be masters; and that no non-human device or entity (including evil corporations) can trump a human. This is a very difficult balancing act (hence 'n'; three or four provably do not cut it). Although I say 'device', I am thinking along the lines of clever devices akin to 'robots'.

The fact is, DRM and the treacherous computing environment that makes it practical are DANGEROUS to public health and safety, to individual human liberty (including privacy and the pursuit of happiness) and to the entire fabric of society.

There needs to be multiple custody of keys (m of n required to do something, where m increases with the severity of the consequences of key discovery).
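
The m-of-n custody idea is exactly what Shamir's secret sharing (1979) provides; a minimal sketch in Python (illustrative only, not production crypto code):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split_secret(secret: int, m: int, n: int):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # A random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % PRIME
                den = den * (xj - xk) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        total = (total + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = split_secret(123456789, m=3, n=5)
print(recover_secret(shares[:3]))  # any 3 of the 5 custodians suffice: 123456789
```

Fewer than m shares reveal (information-theoretically) nothing about the secret, which is what lets m grow with the severity of the consequences of key discovery.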

There needs to be a compelling reason for a robot to disobey a human being. Protecting somebody's copyright is so far below a compelling reason that it should be removed from consideration; and to the extent that copyright interferes with anything at all, it should be abolished.

A human being should not be able to stop the cooling of a nuclear reactor without some checks in place. I made that up, but I honestly feel it is a bad idea to allow a 'fire alarm' type mechanism to trip the meltdown of a nuclear reactor. It should have locks and those locks should require some rather unique authority to defeat.

Strong encryption, available to all, is a necessity. I *need* to be able to say that such-and-such a document cannot be opened except with chosen trusted custodians supplying keys. I need, in an emergency (when the state has gone mad, as it has now), to apply digitally administered rules to my programs and data.

What we need to do is to outlaw, in the harshest possible way, any attempt by other parties to injure the public good. Copyright was intended to provide an incentive for creators to create things for the ultimate good of the public. That is its only reason for existence, and it is a weak reason at that. Its current level of abuse seems to me to argue strongly for its abolition altogether. However, even if one grants that there is some merit in the old common-law regime of a 14-year copyright (I don't, BTW), the whole purpose of the grant is to allow the property to fall into the public domain.

Any digital control mechanism places this in danger, and in fact we already have a TON of public domain material removed from the public domain using these mechanisms. A high-resolution image of any classical painting, for instance, should be freely available to everyone. There is no incentive required to get the world's art digitized like this. Allowing any copyrights, watermarks or other obstructive mechanisms to be applied, or even attempted, on these things robs the public (you and me) of what belongs to it, for there is surely a demonstrable incentive to controlling the best images of the Mona Lisa. If we allow them to (and we have), they will steal this from us and charge us a toll on our own common heritage.

I think the framers of the U.S. Constitution slipped up when they gave Congress the right to grant copyrights and patents. They anticipated many evils, but they just were not paranoid enough when they drafted that little bit. Had they seen what became of it, there is no way at all that they would have put that in there. On the contrary, it barely made it in anyway, and if they had known what we know now, they would have specifically and forcefully forbidden it from even being attempted.

We need to put into our laws mechanisms that make it impossible for the greedy or power-mad to injure the public good. The only way to do that is to make the attempt disadvantageous in the extreme. It should, for instance, not be legally possible to profit from breaching the public trust. The way things are set up now, perpetrators of financial crimes, especially corporate entities, have a strong incentive to cheat. In the worst-case scenario, they have to pay back what they steal. Over time, that provides a financial incentive to cheat -- and guess what? They cheat!

The solution to most of these dreadful things (WIPO, DRM abuses, the DMCA) is to remove the ability of the 'forces of evil' to perpetrate them. We need the political will to 'cut them off at the pass' by fixing our constitutions (I am Canadian; most here are probably Americans) to prevent the abuses in the first place. One of the best ways to prevent this stuff is to abolish copyrights, patents and the mechanisms used to enforce them. Even though it would cause problems in the short term, I support the notion of Canada simply opting out of all the supra-national treaties that pertain to copyrights and patents. On balance, we would be better off in the long run, and we would help make the world a better place.

As a somewhat tangential parting shot, let me just say that I find it irksome in the extreme that every time I watch a video or DVD I am confronted with an FBI warning. What on earth does the FBI have to do with a civil matter like copyright infringement? It is downright creepy how much power copyright brokers hold. They have managed to effectively criminalize the civil matter of copyright infringement and to pervert the world's technology to the point that it now presents genuine political and physical danger for the world's populations. With their patent cronies, they impoverish the Third World unnecessarily and they are coming for us.

cost of DRM, posted 5 Apr 2009 at 21:27 UTC by lkcl » (Master)

It is getting increasingly more difficult to beat these things.

you are underestimating the effect on OEMs of the mass-market cost of production and development.

OEMs (original equipment manufacturers), especially those in the far east, are not the *slightest* bit interested in factoring DRM into their hardware, when the cost of doing so shaves their margins from e.g. 2% to 0.5%.

if you think that's ridiculous, then you need to be aware that the margin on mass-production of e.g. desktop PCs is ten to twelve percent FOR EVERYBODY. no, not 10-12% for the OEM, plus 10-12% for the middle-man, plus 10-12% for the sales distributor - i repeat: 10-12% for EVERYBODY.

one broken PC out of a hundred can destroy a sales distributor's profits, which is why they offer those 2-year / 3-year warranties that add 30% to 50% on top - it's the only way they make any money.

why do you think netbooks have linux in them?

do you think that the netbooks are going to have DRM software or DRM hardware built in to them - ever?

now think about the mass-market of digital TV.

do you _really_ think that distributors are going to be happy to be forced to shove in DRM?

horribly high cost of DRM, posted 28 Apr 2009 at 19:54 UTC by DeepNorth » (Journeyer)


Let me just say that I feel a little sheepish about disagreeing. Nobody wishes more than I do that the problems with DRM would go away. Unfortunately, they are here and they are here to stay. Nothing can stop DRM implementation. What we need to do is lobby for safeguards that no single entity or cabal can control DRM.

Does crazy, unacceptable, unthinkable stuff happen in our industry? Why, yes it does.

I remember back when we sold PCs for more than $5,000.00 and the operating system cost $50.00. Microsoft grossed about 1% of the cost of a PC, and about 5% or less of the total gross margin. Today, you can buy a more or less equivalent PC for about one tenth the cost -- $500 or so. A copy of Windows Vista Business plus Office Pro costs more than $600. For MS, the gross margin on that has got to be more than $200. Meanwhile, according to your notion (with which I don't really disagree), everyone else in the chain makes about $50. That extra $400 on the MS side is rapidly being scooped in its entirety by MS, BTW.

Basically, the cost of a PC has dropped by a factor of ten, and Microsoft's take has risen by a factor of ten. Their share has increased a hundredfold or more. For most people, when they buy a PC, most of the money goes to Microsoft.
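
Under the figures quoted above (a ~$5,000 PC with a $50 OS then, versus a ~$500 PC carrying ~$600 of Microsoft software now), the back-of-envelope arithmetic works out like this:

```python
# Software cost as a fraction of the hardware price, then vs. now
then_ratio = 50 / 5000   # the OS was ~1% of the PC's price
now_ratio = 600 / 500    # software now costs ~120% of the PC itself
print(then_ratio)                     # 0.01
print(now_ratio)                      # 1.2
print(round(now_ratio / then_ratio))  # 120 -> "a hundredfold or more"
```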

Here's my question:

If, back then, you were asked to choose between the following scenarios:

a) Microsoft's share would increase by 100 times.

b) Manufacturers would absorb a 2 percent additional cost.

Which would you have said was more likely? I would have picked (b), as I think most people would. In fact, though, both happened. It seems impossible to me that MS could take even more starting today, but they have proven unusually tenacious in their pursuit of a greater share. As for manufacturers taking a hit on commodity items, well, that's the commodity game.

As the marginal cost of the devices relative to the total cost tends toward zero (which it is doing), DRM becomes increasingly necessary to protect the profits in the system. Likely, the manufacturer of a device in Taiwan is building for $400 and selling for $440. Later, they will build two boxes at $300 and sell for $645. Their margin goes down, but their total take still goes up.
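
The hypothetical Taiwan numbers in the paragraph above can be checked directly; the percentage margin falls while the absolute take rises:

```python
def margin(build_cost, sale_price):
    """Gross margin as a fraction of the sale price."""
    return (sale_price - build_cost) / sale_price

# One box: build $400, sell $440
print(round(margin(400, 440), 3))   # 0.091 -> ~9% margin, a $40 take
# Two boxes: build 2 x $300 = $600, sell for $645 total
print(round(margin(600, 645), 3))   # 0.07  -> the margin goes down...
print(645 - 600, "vs", 440 - 400)   # 45 vs 40 -> ...but the total take goes up
```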

Manufacturers of these devices will continue, generally, to do what they are able. It has ever been thus. When you can't buy parts without DRM EvilInside (TM), the decision as to whether or not to include them is moot. The DRM machinery is ALREADY in place. That battle is finished and we lost.

I could write a treatise on the costs of DRM. They are unacceptable. However, the hardware, legislative and most of the software battle has already been lost. We are currently on the road to being at the mercy of a couple of companies holding the 'master keys' to control most of the world's new computers.

Re: do you think that the netbooks are going to have DRM software or DRM hardware built in to them - ever?

Uh. I agree with you that it is unthinkable. However, it has already happened. So ... yes, I *do* think netbooks are going to have DRM in them. In fact, I will go one further than that: I expect that within less than two decades, every non-trivial device will have DRM capabilities. In fact, you will be unable to buy one without them, because they will not be manufactured. DRM in MS Vista is not optional. When you buy Vista, you buy DRM. There is no non-DRM version of Vista. The same is true of more things than you realize, and we are at the heel of the curve.


"Friday Mar 27, 2009 Intel DRM bug hangs resume on B110

Hi All,

If you run the compiz desktop on any of the Intel Atom netbooks, B110 has a bug in the DRM code, that hangs the system on resume.

The Beijing Engineering team just asked me to test a patch fo this today.

I tested on the AA1, AA1-10 and MSI netbook and it works fine.


Note that for the above to be in physical existence the plan to do it must have been in place for many years.

Note also that they are *more* than willing to risk the failure of the entire system to put it in there.

AFAIK, all of VIA, ARM, GEODE and ATOM devices in production have DRM support.

As an aside, "Mar 27" was my birthday. Quite the birthday present, n'est pas?

The situation with DRM is very, very dire. This transcends just the notion of DRM per se. DRM is actually necessary and inevitable. The problem is one of political will to stop the forces of evil before it is too late. Companies already can (and do) shut things down remotely. That is already wrong. We need to ensure that our laws stop any abuse of the system. As it currently stands, legislation (not just about DRM), technology, social structures and customs all conspire to make abuse not only possible, but certain. It's kinda creepy when you know what is happening.

Fail, posted 3 May 2009 at 04:00 UTC by trs80 » (Apprentice)

The DRM in that post on the Atom refers to the Direct Rendering Manager, the kernel's video acceleration interface used by X, not Digital Rights Management. Also note that netbooks that ship with Windows mostly have XP, not Vista, because XP costs $15 due to competition from Linux. As for that DRM being inevitable, I disagree - there is a clear market for open hardware, as evidenced by the WRT54GL, the OLPC and OpenMoko.

I do agree with you there should be laws that ownership of a computer = absolute control of that computer, but it's a complex subject worthy of an entire article.

Further Research, posted 4 Jun 2009 at 17:59 UTC by lkcl » (Master)

recent research into this topic shows that code and projects already exist:

SVC stands for "Scalable Video Coding", and there exists an extension to H264 which does exactly that.
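
The appeal of a layered codec for this article's distribution problem can be sketched in a few lines (an illustration of the layered idea only; the real H.264/SVC bitstream syntax is far more involved, and the layer names and bitrates below are made up): a receiver simply takes as many enhancement layers as its downstream bandwidth can carry.

```python
# Hypothetical cumulative layer bitrates in kbit/s: the base layer comes first,
# and each enhancement layer adds quality/resolution on top of it.
LAYERS = [("base QCIF", 200), ("enh-1 CIF", 500),
          ("enh-2 SD", 1500), ("enh-3 HD", 4000)]

def select_layers(bandwidth_kbps):
    """Take layers in order while the cumulative rate still fits the link."""
    chosen, total = [], 0
    for name, rate in LAYERS:
        if total + rate > bandwidth_kbps:
            break
        chosen.append(name)
        total += rate
    return chosen

print(select_layers(1000))  # ['base QCIF', 'enh-1 CIF'] -> a slow DSL line
print(select_layers(8000))  # all four layers -> full quality on a fast link
```

The same property is what makes the upload-speed asymmetry discussed earlier tractable: a slow peer can still relay the base layer while faster peers carry the enhancement layers.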

The Trilogy Project is a project drawing to completion which combines JXTA, p2psip, SVC, MPEG-21, MPEG-7 and MDC (Multiple Description Coding) into... get this: exactly what i described in this article.

i'm a bit unimpressed by the use and deployment of MPEG-21 (in the trilogy project), simply because it will be a failure to "protect" - not for technological reasons, but for psychological ones. but the rest of the project? i'm absolutely astounded. can't wait to see the source code: they use a number of free software applications and libraries.

head of UK talktalk ISP states "pirates will always win", posted 6 Jun 2009 at 15:00 UTC by lkcl » (Master)

water always flows downhill...

Paper on SVC, posted 16 Jun 2009 at 13:24 UTC by lkcl » (Master)

SVC H.264 extension - describes exactly what i've been talking about (!) in a more formal and of course much more authoritative manner. the reference implementation, which is an ongoing effort, is here.
