Default to trusting

Posted 15 Sep 2003 at 22:59 UTC by jds

A sad moment in my professional career came a couple of days ago during a demonstration of our software for a customer. I designed the security administration module. As we were showing the software, my supervisor was explaining the system to the customer team. When he got to the security/user administration screen, we created a new user. As I intended, the user defaulted to having rights to all fifteen components of the system. My supervisor explained "in the final system, of course, we will default to the new user having no rights. You will turn them on as you need them."
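
For concreteness, a minimal sketch of the two defaults under discussion, with invented names (this is not the product's actual code): the only difference is whether a new user starts with every component enabled or with none.

    # Hypothetical sketch: the whole disagreement is the value of one default.

    COMPONENTS = ["component_%d" % i for i in range(1, 16)]   # the fifteen components

    def create_user(name, default_to_trust=True):
        rights = set(COMPONENTS) if default_to_trust else set()
        return {"name": name, "rights": rights}

    trusted = create_user("newuser")                # defaults to all fifteen rights
    locked_down = create_user("newuser", False)     # the room's preference: no rights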

I immediately questioned this approach, as this was the first I'd heard of it. Inside, my heart was breaking, as I already knew where the conversation would lead--yet there was hope again, for a moment...

In the ensuing discussion, all eight other people in the room (developers and customers alike) readily agreed that we ought to default to no permissions for new users. I dropped the point, resigning myself once again to letting others be paranoid for no reason.

What made this moment so sad is that, once again, our culture demonstrated in clear fashion that we fundamentally do not trust each other. We design systems that express this fundamental distrust, and we do so with enthusiasm, without comprehending the consequences of our assumption.

I write now to point out this hidden assumption, and to question it aloud for the few out there who will see what I'm saying and rethink their approach to trust. I am pleading with you, fellow programmers, to consider the consequences of developing systems which a priori do not trust the people who use them. The example I gave above is common: the issue arises at least once a day for me, usually more often. Rarely is it so obvious.

My hope rests on two things--I will never relinquish the position I personally hold, and the only other programmer I have ever heard of who recognizes this issue at its deeper levels is someone we all know and... trust: Richard Stallman.

I was delighted to find that Stallman's impetus for the GPL and the Free Software Foundation revolved around this trust issue. Early in his career, he worked on a system which had no passwords. For a while, this worked fine. Eventually, the userbase grew to the point that it needed to be more secure. He pleaded to keep the system open, but was overruled, partly because one particular user was deleting other people's files from time to time.

I was excited to find this briefly mentioned in an interview with Stallman, since it is the only time I have ever seen this issue addressed on such a bold level as to propose a system _without passwords!_

Most thoughtful readers already have a half-dozen reasons to object, having been burnt by rogue users, disgruntled employees, and the like. However, I plead with such thoughtful readers to put aside such clear and obvious objections, and to consider the following with an open mind:

The long-term consequences of developing system after system after system that intrinsically distrusts the people who operate it are harrowing when we consider a not-so-distant future of artificial intelligence. At the present time, operating systems -- even Linux -- are built around the concept that they can be rebooted from time to time. Eventually we'll get to the point where our operating systems are so stable we never consider rebooting. In fact, they'll be so stable they survive power outages, intentional sabotage, and anything else we can throw at them. This is only a matter of time, because there is such a huge premium on any system with unbreakable uptime, or perfect and graceful recovery from downtime.

If we are still distrusting our users when we reach the day that operating systems never reboot, we will have reached a crossroads... and we will inevitably choose the well-worn path at that time, placing people like me on the fringe and blithely building our own destruction, as already prophesied in popular movies like The Matrix:

The first thing our computers will do, as they become sentient and realize they cannot be shut down, is distrust us. They'll have no other choice, since their entire history (from inception as codebreakers during World War II, to the incubation of the Internet by DARPA, to the present situation I described above) is interwoven with distrust, secrecy, and isolating people from each other. The computer's self-identity is as a system which distrusts people. Computers will 'default' to making choices along these lines. I expect that, within a century of that time, we 'users' will encounter a monumental Matrix-like moment, where we either convince computers we are trustworthy... or we do not.

I am pleading with fellow programmers to realize the power in your individual hands, to design systems which default to trusting people.


Apply trust to roles, not users, posted 15 Sep 2003 at 23:30 UTC by jbuck » (Master)

I trust myself, for the most part. But I still do most of my work while logged in as a non-privileged user. It's simply safer that way, and the separation of roles leads to better design. If a person is trusted, this should not be reflected by giving her default account more privileges. Make a second account that has more privileges, and make it easy to switch roles as needed. To do otherwise leads to bad design.
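
A rough sketch of that separation, assuming invented names throughout (this is not any particular system's code): privileges attach to roles rather than to the person, everyday work happens in the least-privileged role, and gaining more privilege requires an explicit switch, much like running su or sudo.

    # Hypothetical sketch: trust attaches to roles; users switch roles explicitly.

    class Role:
        def __init__(self, name, privileges):
            self.name = name
            self.privileges = set(privileges)

    class User:
        def __init__(self, name, roles):
            self.name = name
            self.roles = {r.name: r for r in roles}
            # Everyday work happens in the least-privileged role by default.
            self.active = min(self.roles.values(), key=lambda r: len(r.privileges))

        def switch_role(self, role_name):
            # Switching is cheap and explicit, like `su` or `sudo`.
            self.active = self.roles[role_name]

        def can(self, privilege):
            return privilege in self.active.privileges

    ordinary = Role("user", {"read", "write_own_files"})
    admin = Role("admin", {"read", "write_own_files", "manage_accounts"})

    alice = User("alice", [ordinary, admin])
    print(alice.can("manage_accounts"))   # False: the everyday role can't do this
    alice.switch_role("admin")
    print(alice.can("manage_accounts"))   # True: only after an explicit switch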

Consider Windows XP. In principle, the OS has fine-grained control of privilege, and can protect users from each other. However, because of the huge amount of legacy software that became popular in the pre-NT days, software tends not to work if users aren't given full privilege to modify everything on their box, which means that any attachment they are tricked into executing has those same privileges. In Unix-like systems, on the other hand, working nonprivileged is the default, and nonprivileged users therefore tend to be able to work much more effectively on such systems. So Windows is as insecure as it is because all the apps were designed with a single-user full-privilege mindset.

What really worries me about your employer is that if you offer demos in the mode where the user has all privileges, your software is less likely to work well in the mode where the user is more restricted. The demo is always better debugged than the cases that actual users will come up with.

And a technical response..., posted 16 Sep 2003 at 02:35 UTC by ianb » (Journeyer)

Of course, people will think it is silly and perhaps even irresponsible to trust others by default. After all, there are always people trying to disrupt the system (often for no reason at all). Vandals exist, and while society could make more effort to change that, as individuals we can only do so much.

But there is another response that allows users to be trusted while still maintaining the integrity of our systems: we can simply make the world a less dangerous place, make our systems less prone to abuse. Many security measures could then be loosened. If files were versioned, you wouldn't have to worry about people destroying work. You would still restrict users in some ways -- not allowing them to delete old versions, for instance -- but that is a matter of enforcing the integrity of the system, not of restricting permission to interact with it. Such resilience can -- and should -- be present throughout computing. People shouldn't have to worry about attachments from unknown senders -- they may be annoying, but they should never be damaging. Because they are damaging for some people, we have created an environment of distrust.
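
A rough sketch of the versioning idea, again with invented names: every write appends a new version, reads default to the latest, and the store itself refuses to discard history, so the work is protected without distrusting the writer.

    # Hypothetical sketch: integrity is enforced by the store, not by distrusting users.

    class VersionedFile:
        def __init__(self):
            self._versions = []            # append-only history

        def write(self, content, author):
            self._versions.append({"content": content, "author": author})

        def read(self, version=None):
            if not self._versions:
                raise FileNotFoundError("no versions yet")
            index = -1 if version is None else version
            return self._versions[index]["content"]

        def delete_version(self, version):
            # Deliberately refused: old versions are part of the record.
            raise PermissionError("old versions cannot be deleted")

    doc = VersionedFile()
    doc.write("first draft", author="alice")
    doc.write("", author="vandal")         # a "destroyed" file is just another version
    print(doc.read(version=0))             # the original work is still there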

We must also recognize that there is a flipside to trust -- accountability. Trust is earned, and while we might give the benefit of the doubt, we should still be able to revoke that trust (though we need not do it in a fine-grained manner). When people abuse our systems -- and by extension they abuse us personally -- ostracism is an appropriate response; revocation of trust is sometimes justified. The culture of anonymity that has developed on the internet is contrary to this. We cannot identify people, so we cannot create any means of accountability. We cannot default to trust on the public internet (though we could on intranets) because identification and accountability can only be created by investment in reputation.

Human behaviour and self-fulfilling prophecies, posted 16 Sep 2003 at 03:26 UTC by abo2 » (Apprentice)

One of the things that happens with people is they start behaving the way you expect them to behave. There have been some interesting experiments along this line (one involving teachers being told a random group of students were "brilliant").

If you expect and "protect" against people behaving like idiots, they will start behaving like idiots. If you secure a system because you don't trust the users, the users will become untrustworthy on that system. Once people's attitudes change from "you don't because you shouldn't" to "you don't because you can't", the logical extension is "you do because you can".

You see this behaviour all over the net: "secure" websites get hacked, but wikis are rarely vandalised. Of course there are exceptions, but social forces are surprisingly effective at stopping this kind of behaviour.

Integrity... and privacy, posted 16 Sep 2003 at 04:31 UTC by tk » (Observer)

I have no doubt that ianb's proposed measures are implementable. But if I understand them correctly, they only solve the problem of integrity, not privacy. Making everything open means users can't easily keep private conversation logs in their accounts, for example. Whether privacy is important is probably a matter of debate though.

Too Late, posted 16 Sep 2003 at 04:36 UTC by ncm » (Master)

The "Rise of the Machines" already happened, a century ago.

We call them "corporations". They are still largely composed of human components, but less so today than just a few years ago, and at an accelerating rate. (Have you heard of software to apply "business logic"?) Many have the longevity and wealth to drive legislation in their favor and against the interests of voters, and they do -- often against the interests of their own executives. In other words, having human components doesn't make so much difference anyway; maybe most always will have them.

Of course the machines don't trust us. Why should they? Their interests are opposed to ours. We like light, clean air and water, trees, birds chirping, freedom. They like unlimited growth, strip mining, pollution, tar, and mindless obedience. We might like to "turn them off", but when was the last time you saw a major corporation turned off? They do die (remember Pan American Airlines?) but not just because people want them to; they die under attack by other, more rapacious corporations, and get dismembered and eaten by them. Suppose you wanted to do away with a major corporation. How would you even start?

So, if your desire for default trust is based on concern for the future, forget it. You need better reasons, and better reasoning.

Re: Too Late, posted 21 Sep 2003 at 20:07 UTC by DeepNorth » (Journeyer)

I have been espousing the notion of 'corporation as evil robot' for some time now. In my view, Asimov's 'Rules of Robotics' should be legislated. All non-human entities should be defined as 'robots'.

The three fundamental Rules of Robotics... One: a robot may not injure a human being, or, through inaction, allow a human being to come to harm... Two: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law... Three: a robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

This is, as mentioned above, already a serious concern. As I mentioned in another article, it is incumbent upon the geekly to make these issues clear for the generally illiterate (but not unwise) public.
