Older blog entries for dkg (starting at number 78)

Kevin M. Igoe should step down from CFRG Co-chair

I've said recently that pervasive surveillance is wrong. I don't think anyone from the NSA should have a leadership position in the development or deployment of Internet communications, because their interests are at odds with the interests of the rest of the Internet. But someone at the NSA is in exactly such a position. They ought to step down.

Here's the background:

The Internet Research Task Force (IRTF) is a body tasked with research into underlying concepts, themes, and technologies related to the Internet as a whole. They act as a research organization that cooperates with and complements the engineering and standards-setting activities of the Internet Engineering Task Force (IETF).

The IRTF is divided into issue-specific research groups, each of which has a Chair or Co-Chairs who have "wide discretion in the conduct of Research Group business", and are tasked with organizing the research and discussion, ensuring that the group makes progress on the relevant issues, and communicating the general sense of the results back to the rest of the IRTF and the IETF.

One of the IRTF's research groups specializes in cryptography: the Crypto Forum Research Group (CFRG). There are two current chairs of the CFRG: David McGrew <mcgrew@cisco.com> and Kevin M. Igoe <kmigoe@nsa.gov>. As you can see from his e-mail address, Kevin M. Igoe is affiliated with the National Security Agency (NSA). The NSA itself actively tries to weaken cryptography on the Internet so that they can improve their surveillance, and one of the ways they try to do so is to "influence policies, standards, and specifications".

On the CFRG list yesterday, Trevor Perrin requested the removal of Kevin M. Igoe from his position as Co-chair of the CFRG. Trevor's specific arguments rest heavily on the technical merits of a proposed cryptographic mechanism called Dragonfly key exchange, but I think the focus on Dragonfly itself is the least of the concerns for the IRTF.

I've seconded Trevor's proposal, and asked Kevin directly to step down and to provide us with information about any attempts by the NSA to interfere with or subvert recommendations coming from these standards bodies.

Below is my letter in full:

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
To: cfrg@ietf.org, Kevin M. Igoe <kmigoe@nsa.gov>
Date: Sat, 21 Dec 2013 16:29:13 -0500
Subject: Re: [Cfrg] Requesting removal of CFRG co-chair

On 12/20/2013 11:01 AM, Trevor Perrin wrote:
> I'd like to request the removal of Kevin Igoe from CFRG co-chair.

Regardless of the conclusions that anyone comes to about Dragonfly
itself, I agree with Trevor that Kevin M. Igoe, as an employee of the
NSA, should not remain in the role of CFRG co-chair.

While the NSA clearly has a wealth of cryptographic knowledge and
experience that would be useful for the CFRG, the NSA is apparently
engaged in a series of attempts to weaken cryptographic standards and
tools in ways that would facilitate pervasive surveillance of
communication on the Internet.

The IETF's public position in favor of privacy and security rightly
identifies pervasive surveillance on the Internet as a serious problem:

https://www.ietf.org/media/2013-11-07-internet-privacy-and-security.html

The documents Trevor points to (and others from similar stories)
indicate that the NSA is an organization at odds with the goals of the IETF.

While I want the IETF to continue welcoming technical insight and
discussion from everyone, I do not think it is appropriate for anyone
from the NSA to be in a position of coordination or leadership.

----

Kevin, the responsible action for anyone in your position is to
acknowledge the conflict of interest, and step down promptly from the
position of Co-Chair of the CFRG.

If you happen to also subscribe to the broad consensus described in the
IETF's recent announcement -- that is, if you care about privacy and
security on the Internet -- then you should also reveal any NSA activity
you know about that attempts to subvert or weaken the cryptographic
underpinnings of IETF protocols.

Regards,

	--dkg

I'm aware that an abdication by Kevin (or his removal by the IETF chair) would probably not end the NSA's attempts to subvert standards bodies or weaken encryption. They could continue to do so by subterfuge, for example, or by private influence on other public members. We may not be able to stop them from doing this in secret, and the knowledge that they may do so seems likely to cast a pall of suspicion over any IETF and IRTF proceedings in the future. This social damage is serious and troubling, and it marks yet another cost to the NSA's reckless institutional disregard for civil liberties and free communication.

But even if we cannot rule out private NSA influence over standards bodies and discussion, we can certainly explicitly reject any public influence over these critical communications standards by members of an institution so at odds with the core principles of a free society.

Kevin M. Igoe, please step down from the CFRG Co-chair position.

And to anyone (including Kevin) who knows about specific attempts by the NSA to undermine the communications standards we all rely on: please blow the whistle on this kind of activity. Alert a friend, a colleague, or a journalist. Pervasive surveillance is an attack on all of us, and those who resist it are heroes.

Syndicated 2013-12-21 22:55:00 from Weblogs for dkg

automatically have uscan check signatures

If you maintain software in debian, one of your regular maintenance tasks is checking for new upstream versions, reviewing them, and preparing them for debian if appropriate. One of those steps is often to verify the cryptographic signature on the upstream source archive.

At the moment, most maintainers do the cryptographic check manually, or perhaps don't bother to do it at all. For the common case of detached OpenPGP signatures, though, uscan can now do it for you automatically (as of devscripts version 2.13.3). You just need to tell uscan what keys you expect upstream to be signing with, and how to find the detached signature.

So, for example, Damien Miller recently announced his new key that he will be using to sign OpenSSH releases (his new key has OpenPGP fingerprint 59C2 118E D206 D927 E667 EBE3 D3E5 F56B 6D92 0D30 -- you can verify it has been cross-signed by his older key, and his older key has been revoked with the indication that it was superseded by this one). Having done a reasonable verification of Damien's key, if I were the openssh package maintainer, I'd do the following:

cd ~/src/openssh/
gpg --export '59C2 118E D206 D927 E667  EBE3 D3E5 F56B 6D92 0D30' >> debian/upstream-signing-key.pgp

And then upon noticing that the signature files are named with a simple .asc suffix on the upstream distribution site, we can use the following pgpsigurlmangle option in debian/watch:

version=3
opts=pgpsigurlmangle=s/$/.asc/ ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-(.*)\.tar\.gz 
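
With those two pieces in place, here's a rough sketch of how the check might be exercised (assuming devscripts >= 2.13.3; the exact flags and output wording can vary between versions):

# run from the same directory as above (the one containing debian/);
# --force-download re-fetches the current upstream tarball even if we
# are up to date, so the detached .asc signature gets downloaded and
# checked against debian/upstream-signing-key.pgp.
uscan --verbose --force-download
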
I've filed this specific example as debian bug #732441. If you notice a package with upstream signatures that aren't currently being checked by uscan (or if you are upstream, you sign your packages, and you want your debian maintainer to verify them), you can file similar bugs. Or, if you maintain a package for debian, you can just fix up your package so that this check is there on the next upload.

If you maintain a package whose upstream doesn't sign their releases, ask them why not -- wouldn't upstream prefer that their downstream users can verify that each release wasn't tampered with?

Of course, none of these checks take the place of the real work of a debian package maintainer: reviewing the code and the changelogs, thinking about what changes have happened, and how they fit into the broader distribution. But it helps to automate one of the basic safeguards we should all be using. Let's eliminate the possibility that the file was tampered with at the upstream distribution mirror or while in transit over the network. That way, the maintainer's time and energy can be spent where they're more needed.

Tags: crypto, devscripts, openpgp, package maintenance, signatures, uscan

Syndicated 2013-12-18 03:15:00 from Weblogs for dkg

OpenPGP Key IDs are not useful

Fingerprints and Key IDs

OpenPGPv4 fingerprints are made from an SHA-1 digest over the key's public key material, creation date, and some boilerplate. SHA-1 digests are 160 bits in length. The "long key ID" of a key is the last 64 bits of the key's fingerprint. The "short key ID" of a key is the last 32 bits of the key's fingerprint. You can see both key IDs as hashes in and of themselves, since "32-bit truncated SHA-1" is a sort of hash (albeit not a cryptographically secure one).
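
To make that relationship concrete, here is a trivial sketch (using bash string slicing on a made-up fingerprint value, not any real key):

# a v4 fingerprint is 40 hex digits (160 bits); this one is hypothetical:
FPR=0123456789ABCDEF0123456789ABCDEF01234567
echo "long key ID:  ${FPR: -16}"   # last 16 hex digits = 64 bits
echo "short key ID: ${FPR: -8}"    # last 8 hex digits = 32 bits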

I'm arguing here that short Key IDs and long Key IDs are actually useless, and we should stop using them entirely where we can do so. We certainly should not be exposing normal human users to them.

Key IDs have serious problems

Asheesh pointed out two years ago that OpenPGP short key IDs are bad because they are trivial to replicate. This is called a preimage attack against the short key ID (which is just a truncated fingerprint).

Today, David Leon Gil demonstrated that a collision attack against the long key ID is also trivial. A collision attack differs from a preimage attack in that the attacker gets to generate two different things that both have the same digest. Collision attacks are easier than preimage attacks because of the birthday paradox. dlg's colliding keys are not a surprise, but hopefully the explicit demonstration can serve as a wakeup call to help us improve our infrastructure.

So this is not a way to spoof a specific target's long key ID on its own. But it indicates that it's more of a worry than most people tend to think about or plan for. And remember that for a search space as small as 64 bits (the long key ID), if you want to find a pre-image against any one of 2^k keys, your search is effectively over only a (64-k)-bit space to find a single pre-image.

The particularly bad news: gpg doesn't cope well with the two keys that have the same long key ID:

0 dkg@alice:~$ gpg --import x
gpg: key B8EBE1AF: public key "9E669861368BCA0BE42DAF7DDDA252EBB8EBE1AF" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
0 dkg@alice:~$ gpg --import y
gpg: key B8EBE1AF: doesn't match our copy
gpg: Total number processed: 1
2 dkg@alice:~$ 

This probably means that caff (from the signing-party package) will also choke when trying to deal with these two keys.

I'm sure there are other OpenPGP-related tools that will fail in the face of two keys with matching 64-bit key IDs.

We should not use Key IDs

I am more convinced than ever that key IDs (both short and long) are actively problematic to real-world use of OpenPGP. We want two things from a key management framework: unforgeability and human-intelligible handles. Key IDs fail at both.
  • Fingerprints are unforgeable (as much as SHA-1's preimage resistance allows, anyway -- that's a separate discussion), but they aren't human-intelligible.
  • User IDs are human-intelligible, and they are unforgeable if we can rely on a robust keysigning network.
  • Key IDs (both short and long) are neither human-intelligible nor unforgeable (regardless of the existence of a keysigning network), so they are the worst of all possible worlds.
So reasonable tools should not expose either short or long key IDs to users, or use them internally if they can avoid them. They do not have any properties we want, and in the worst case, they actively mislead people or lead them into harm. What reasonable tool should do that?

How to replace Key IDs

If we're not going to use Key IDs, what should we do instead?

For anything human-facing, we should be using human-intelligible things like user IDs and creation dates. These are trivial to forge, but people can relate to them. This is better than offering the user something that is also trivial to forge, but that people cannot relate to. The job of any key management UI should be to interpret the cryptographic assurances provided by the certifications and present that to the user in a comprehensible way.

For anything not human-facing (e.g. key management data storage, etc), we should be using the full key itself. We'll also want to store the full fingerprint as an index, since that is used for communication and key exchange (e.g. on calling cards).

There remain parts of the spec (e.g. PK-ESK, Issuer subpackets) that make some use of the long key ID in ways that provide some measure of convenience but no real cryptographic security. We should fix the spec to stop using those, and either remove them entirely, or replace them with the full fingerprints. These fixes are not as urgent as the user-facing changes or the critical internal indexing fixes, though.

Key IDs are not useful. We should stop using them.

Tags: collision, crypto, gpg, openpgp, pgp, security

Syndicated 2013-12-13 20:04:00 from Weblogs for dkg

The legal utility of deniability in secure chat

This Monday, I attended a workshop on Multi-party Off the Record Messaging and Deniability hosted by the Calyx Institute. The discussion was a combination of legal and technical people, looking at how the characteristics of this particular technology affect (or do not affect) the law.

This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below. I'd be happy if anyone wants to offer corrections.

Background

Off the Record Messaging (OTR) is a way to secure instant messaging (e.g. jabber/XMPP, gChat, AIM).

The two most common characteristics people want from a secure instant messaging program are:

Authentication
Each participant should be able to know specifically who the other parties are on the chat.
Confidentiality
The content of the messages should only be intelligible to the parties involved with the chat; it should appear opaque or encrypted to anyone else listening in. Note that confidentiality effectively depends on authentication -- if you don't know who you're talking to, you can't make sensible assertions about confidentiality.

As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality).

But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".

Forward secrecy
Assuming the parties communicating are operating in good faith, forward secrecy offers protection against a special kind of adversary: one who logs the encrypted chat, and subsequently steals either party's long-term secret key. Without forward secrecy, such an adversary would be able to discover the content of the messages, violating the confidentiality characteristic. With forward secrecy, this adversary is stymied and the messages remain confidential.
Deniability
Deniability only comes into play when one of the parties is no longer operating in good faith (e.g. their computer is compromised, or they are collaborating with an adversary). In this context, if Alice is chatting with Bob, she does not want Bob to be able to cryptographically prove to anyone else that she made any of the specific statements in the conversation. This is the focus of Monday's discussion.

To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."

The traditional two-party OTR protocol has offered both forward secrecy and deniability for years. But deniability in particular is a challenging characteristic to provide for group chat, which is the domain of Multi-Party OTR (mpOTR). You can read some past discussion about the challenges of deniability in mpOTR (and why it's harder when there are more than two people chatting) from the otr-users mailing list.

If you're not doing anything wrong...

The discussion was well-anchored by a comment from another participant who cheekily asked "If you're not doing anything wrong, why do you need to hide your chat at all, let alone be able to deny it?"

The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed:

  • laws themselves are not always just (e.g. consider chat communications between an interracial couple in the USA before 1967, if instant messaging had existed at the time), and
  • law enforcement (or a legal adversary in civil litigation) may have a different understanding or interpretation of the law than you do (e.g. consider chat communications between a corporate or government whistleblower and a journalist).
In these situations, people confront real risk from the law. If we care about these people, we need to figure out whether we can build systems to help them reduce that legal risk (of course we also need to fix broken laws, and the legal environment in general, but those approaches were out of scope for this discussion).

The Legal Utility of Deniability

Monday's meeting was called specifically because it wasn't clear how much real-world usefulness there is in the "deniability" characteristic, and whether this feature is worth the development effort and implementation tradeoffs required. In particular, the group was interested in deniability's utility in legal contexts; many (most?) people in the room were lawyers, and it's also not clear that deniability has much utility outside of a formal legal setting. If your adversary isn't constrained by some rule of law, they probably won't care at all whether there is a cryptographic proof or not that you wrote a particular message (In retrospect, one possible exception is exposure in the media, but we did not discuss that scenario).

Places of possible usefulness

So where might deniability come in handy during civil litigation or a criminal trial? Presumably the circumstance is that a piece of a chat log is offered as incriminating evidence, and the defendant is trying to deny something that they appear to have said in the log.

This denial could take place in two rather different contexts: during arguments over the admissibility of evidence, or (once admitted) in front of a jury.

In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.

In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data. There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).

The complexity of the tech systems involved in a data-heavy prosecution or civil litigation is itself an opportunity for lawyers to argue (and for experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.

These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me.

What about the arguments against its utility?

Limitations

The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, I find this overall situation troubling, but it seems to be where we are.

Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.

Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice).

Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a pause in the chat of 15 seconds when a new member joins) or extra computational power (meaning that they might not work well on slower/older devices) or an order of magnitude more bandwidth (meaning that chat might not work at all on a weak connection). There could also simply be complexity that makes it harder to correctly implement a protocol with deniability than an alternate protocol without deniability. Incorrectly-implemented software can put its users at risk.

I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?

Other kinds of deniability

Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability in a legal context over a chat transcript.

There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else.

If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.

Conclusion

My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it.

Several folks pointed out that most communications-security tools are too complicated or inconvenient to use for normal people. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability.

Secure chat systems that take too long to make, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator or potentially by anyone else); with encryption that is reversible by the chatroom operator or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; or so on.

As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients, even though they know it's not a safe communications channel, because their clients' adversaries might have access to the Skype messaging system itself.

My conclusion from the meeting is that there are a few particular situations where deniability could be useful legally, but that overall, it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic, but for now, we really need to work on usability, popularization, and wide deployment.

Thanks

Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and legal experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.

Tags: chat, deniability, otr, security

Syndicated 2013-12-05 23:14:00 from Weblogs for dkg

getting to TLS (STARTTLS HOWTO)

Many protocols today allow you to upgrade to TLS from within a cleartext version of the protocol. This often falls under the rubric of "STARTTLS", though different protocols have different ways of doing it.

I often forget the exact steps, and when I'm debugging a TLS connection (e.g. with tools like gnutls-cli) I need to poke a remote peer into being ready for a TLS handshake. So I'm noting the different mechanisms here. Lines starting with C: are from the client, lines starting with S: are from the server.

Many of these are (roughly) built into openssl s_client, using the -starttls option. Sometimes this doesn't work because the handshake needs tuning for a given server; other times you want to do this with a different TLS library. To use the techniques below with gnutls-cli from the gnutls-bin package, just provide the --starttls argument (and the appropriate --port XXX argument), and then hit Ctrl+D when you think it's OK to start the TLS negotiation.
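
For example, the SMTP exchange below can be driven by hand with something like this (a sketch; the hostname is a placeholder):

# connect in the clear, type the pre-TLS protocol lines yourself,
# then hit Ctrl+D to tell gnutls-cli to begin the TLS handshake:
gnutls-cli --starttls --port 25 mail.example.net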

SMTP

The polite SMTP handshake (on port 25 or port 587) that negotiates a TLS upgrade looks like:
C: EHLO myhostname.example
S: [...]
S: 250-STARTTLS
S: [...]
S: 250 [somefeature]
C: STARTTLS
S: 220 2.0.0 Ready to start TLS
<Client can begin TLS handshake>

IMAP

The polite IMAP handshake (on port 143) that negotiates a TLS upgrade looks like:
S: * OK [CAPABILITY IMAP4rev1 [...] STARTTLS [...]] [...]
C: A STARTTLS
S: A OK Begin TLS negotiation now
<Client can begin TLS handshake>

POP

The polite POP handshake (on port 110) that negotiates a TLS upgrade looks like:
S: +OK POP3 ready
C: STLS
S: +OK Begin TLS 
<Client can begin TLS handshake>
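
For comparison, where the built-in handshakes are good enough, openssl s_client can do the whole exchange for you; roughly (hostnames are placeholders):

openssl s_client -connect mail.example.net:25  -starttls smtp
openssl s_client -connect mail.example.net:143 -starttls imap
openssl s_client -connect mail.example.net:110 -starttls pop3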

XMPP

The polite XMPP handshake (on port 5222 for client-to-server, or port 5269 for server-to-server) that negotiates a TLS upgrade looks something like this (note that the domain requested needs to be the right one):
C: <?xml version="1.0"?><stream:stream to="example.net"
C:  xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
S: <?xml version='1.0'?>
S: <stream:stream
S:  xmlns:db='jabber:server:dialback'
S:  xmlns:stream='http://etherx.jabber.org/streams'
S:  version='1.0'
S:  from='example.net'
S:  id='d34edc7c-22bd-44b3-9dba-8162da5b5e72'
S:  xml:lang='en'
S:  xmlns='jabber:server'>
S: <stream:features>
S: <dialback xmlns='urn:xmpp:features:dialback'/>
S: <starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
S: </stream:features>
C: <starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls" id="1"/>
S: <proceed xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
<Client can begin TLS handshake>
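
openssl s_client also knows this dance; roughly (the hostname is a placeholder, and for server-to-server you'd point it at port 5269 instead):

openssl s_client -connect xmpp.example.net:5222 -starttls xmpp
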
I don't know (but would like to know) how to do:
  • postgresql TLS negotiation
  • mysql TLS negotiation
  • other reasonable network protocols capable of upgrade
  • other free TLS wrapping tools like openssl s_client or gnutls-cli that can start off in the clear and negotiate to TLS. I am trying to get libNSS's tstclnt into the libnss3-tools package, but that hasn't happened yet.
If you know other mechanisms, or see bugs with the simple handshakes i've posted above, please let me know either by e-mail or on the comments here.

One other interesting note: RFC 2817 describes a not-widely-supported mechanism for upgrading to TLS in the middle of a normal HTTP session.

Syndicated 2013-10-30 17:00:00 from Weblogs for dkg

Unaccountable surveillance is wrong

As I mentioned earlier, the information in the documents released by Edward Snowden show a clear pattern of corporate and government abuse of the information networks that are now deeply intertwined with the lives of many people all over the world.

Surveillance is a power dynamic where the party doing the spying has power over the party being surveilled. The surveillance state that results when one party has "Global Cryptologic Dominance" is a seriously bad outcome. The old saw goes "power corrupts, and absolute power corrupts absolutely". In this case, the stated goal of my government appears to be absolute power in this domain, with no constraint on the inevitable corruption. If you are a supporter of any sort of a just social contract (e.g. International Principles on the Application of Human Rights to Communications Surveillance), the situation should be deeply disturbing.

One of the major sub-threads in this discussion is how the NSA and their allies have actively tampered with and weakened the cryptographic infrastructure that everyone relies on for authenticated and confidential communications on the 'net. This kind of malicious work puts everyone's communication at risk, not only those people who the NSA counts among their "targets" (and the NSA's "target" selection methods are themselves fraught with serious problems).

The US government is supposed to take pride in the checks and balances that keep absolute power out of any one particular branch. One of the latest attempts to simulate "checks and balances" was the President's creation of a "Review Group" to oversee the current malefactors. The review group then asked for public comment. A group of technologists (including myself) submitted a comment demanding that the review group provide concrete technical details to independent technologists.

Without knowing the specifics of how the various surveillance mechanisms operate, the public in general can't make informed assessments about what they should consider to be personally safe. And lack of detailed technical knowledge also makes it much harder to mount an effective political or legal opposition to the global surveillance state (e.g. consider the terrible Clapper v. Amnesty International decision, where plaintiffs were denied standing to sue the Director of National Intelligence because they could not demonstrate that they were being surveilled).

It's also worth noting that the advocates for global surveillance do not themselves want to be surveilled, and that (for example) the NSA has tried to obscure as much of their operations as possible, by over-classifying documents and making spurious claims of "national security". This is where the surveillance power dynamic is most baldly in play, and many parts of the US government intelligence and military apparatus have a long history of acting in bad faith to obscure their activities.

The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves. We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises in the hands of the public is one step on the path toward a healthier society.

Postscript

Lest I be accused of optimism, let me make clear that fixing the technical harms is necessary, but not sufficient; even if our technical infrastructure had not been deliberately damaged, or if we manage to repair it and stop people from damaging it again, far too many people still regularly accept ubiquitous private (corporate) surveillance. Private surveillance organizations (like Facebook and Google) are too often in a position where their business interests are at odds with their users' interests, and powerful adversaries can use a surveillance organization as a lever against weaker parties.

But helping people to improve their own data sovereignty and to avoid subjecting their friends and allies to private surveillance is a discussion for a separate post, I think.

Syndicated 2013-10-08 20:12:00 from Weblogs for dkg

RIP Cookiepuss

Yesterday, I said a sad goodbye to an old friend at ABC No Rio. Cookiepuss was a steadfast companion in my volunteer shifts at the No Rio computer center, a cranky yet gregarious presence. I met her soon after moving to New York, and had hung out with her nearly every week for years.

[Cookiepuss -- No Dogs No Masters]

She had the run of the building at ABC No Rio, and was friends with all sorts of people. She was known and loved by punks and fine artists, by experimental musicians and bike mechanics, computer geeks and librarians, travelers and homebodies, photographers, screenprinters, anarchists, community organizers, zinesters, activists, performers, and weirdos of all stripes.

For years, she received postcards from all over the world, including several from people who had never even met her in person. In her younger days, she was a ferocious mouser, and even as she shrank with age and lost some of her teeth she remained excited about food.

She was an inveterate complainer; a pants-shredder; a cat remarkably comfortable with dirt; a welcoming presence to newcomers and a friendly old curmudgeon who never seemed to really hold a grudge even when i had to do horrible things like help her trim her nails.

After a long life, she died having said her goodbyes, and surrounded by people who loved her. I couldn't have asked for better, but I miss her fiercely.

Syndicated 2013-09-28 04:28:00 from Weblogs for dkg

half a minute for science!

A friend is teaching a class on data analysis, and she is building a simple and rough data set for the class to examine, and to spur discussion. You can contribute in half a minute! Here's how:

  1. get a stopwatch or other sort of timer (whatever device you're reading this on probably has such a thing).
  2. start the timer, but don't look at it.
  3. wait for what you think is 30 seconds, and then look at the timer
  4. how many actual seconds elapsed?
The data doesn't need to be particularly high-precision (whole second values are fine). The other data points my friend is looking for are age (in years, again, whole numbers are fine) and gender.

You can send me your results by e-mail (I suspect you can find my address if you're reading this blog). Please put "half a minute for science" in the subject line, and make sure you include:

  • actual seconds elapsed
  • age in years
  • gender
Science thanks you!

Syndicated 2013-09-25 21:41:00 from Weblogs for dkg

Support privacy-respecting network services!

Support privacy-respecting network services! Donate to Riseup.net!

There's a lot of news recently about some downright Orwellian surveillance executed across the globe by my own government with the assistance of major American corporations. The scope is huge, and the implications are depressing. It's scary and frustrating for anyone who cares about civil society, freedom of speech, cultural autonomy, or data sovereignty.

As bad as the situation is, though, there are groups like Riseup and May First/People Link who actively resist the data dragnet.

The good birds at Riseup have been tireless advocates for information autonomy for people and groups working for liberatory social change for years. They have provided (and continue to provide) impressive, well-administered infrastructure using free software to help further these goals, and they have a strong political commitment to making a better world for all of us and to resisting strongarm attempts to turn over sensitive data. And they provide all this expertise and infrastructure and support on a crazy shoestring of a budget.

So if the news has got you down, or frustrated, or upset, and you want to do something to help improve the situation, you could do a lot worse than sending some much-needed funds to help Riseup maintain an expanding infrastructure. This fundraising campaign will only last a few more days, so give now if you can!

(note: I have worked with some of the Riseup birds in the past, and hope to continue to do so in the future. I consider it critically important to have them as active allies in our collective work toward a better world, which is why I'm doing the unusual thing of asking for donations for them on my blog.)

Syndicated 2013-09-10 06:07:00 from Weblogs for dkg

gpg --ask-cert-level considered harmful

Occasionally, someone asks me whether we should encourage use of the --ask-cert-level option when certifying OpenPGP keys with gpg. I see no good reason to use this option, and I think we should discourage people from trying to use it. I don't think there is a satisfactory answer to the question "how will specifying the level of identity certification concretely benefit anyone involved?", and I don't see why we should want one.

gpg gets it absolutely right by not asking users this question by default. People should not be enabling this option.

Some background: gpg's --ask-cert-level option allows the user who is making an OpenPGP identity certification to indicate just how sure they are of the identity they are certifying. The user's choice is then mapped into four levels of OpenPGP certification of a User ID and Public-Key packet, which i'll refer to by their signature type identifiers in the OpenPGP spec:

0x10: Generic certification
The issuer of this certification does not make any particular assertion as to how well the certifier has checked that the owner of the key is in fact the person described by the User ID.
0x11: Persona certification
The issuer of this certification has not done any verification of the claim that the owner of this key is the User ID specified.
0x12: Casual certification
The issuer of this certification has done some casual verification of the claim of identity.
0x13: Positive certification
The issuer of this certification has done substantial verification of the claim of identity.

Most OpenPGP implementations make their "key signatures" as 0x10 certifications. Some implementations can issue 0x11-0x13 certifications, but few differentiate between the types.

By default (if --ask-cert-level is not supplied), gpg issues certificates ("signs keys") using 0x10 (generic) certifications, with the exception of self-sigs, which are made as type 0x13 (positive).

When interpreting certifications, gpg does distinguish between different certifications in one particular way: 0x11 (persona) certifications are ignored; other certifications are not. (users can change this cutoff with the --min-cert-level option, but it's not clear why they would want to do so).
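
If you're curious what certification classes already appear on a key in your keyring, gpg will show them; a quick sketch (the user ID below is a placeholder; in --list-sigs output, a bare "sig" is a generic 0x10 certification, while "sig 1", "sig 2" and "sig 3" correspond to 0x11, 0x12 and 0x13):

# the digit (if any) after "sig" is the certification class:
gpg --list-sigs 'Alice Example'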

So there is no functional gain in declaring the difference between a "normal" certification and a "positive" one, even if there were a well-defined standard by which to assess the difference between the "generic" and "casual" or "positive" levels; and if you're going to make a "persona" certification, you might as well not make one at all.

And it gets worse: the problem is not just that such an indication is functionally useless; encouraging people to make this kind of assertion actively encourages leaking a more detailed social graph than the default blanket 0x13-for-self-sigs, 0x10-for-everyone-else policy would.

A richer public social graph means more data that can feed the ravenous and growing appetite of the advertising-and-surveillance regimes. I find these regimes troubling. I admit that people often leak much more information than this indication of "how well do you know X" via tools like Facebook, but that's no excuse to encourage them to leak still more, or to acclimatize people to the idea that the details of their personal relationships should by default be public knowledge.

Lastly, the more we keep the OpenPGP network of identity certifications (a.k.a. the "web of trust") simple, the easier it is to make sensible and comprehensible and predictable inferences from the network about whether a key really does belong to a given user. Minimizing the complexity and difficulty of deciding to make a certification helps people streamline their signing processes and reduces the amount of cognitive overhead people spend just building the network in the first place.

Syndicated 2013-05-20 07:21:00 from Weblogs for dkg
