Older blog entries for ade (starting at number 73)

20 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

Why do we bother with APIs?

I love APIs
Sometimes people wonder why we bother building APIs since it seems they can end up being used in ways that compete with our own products.

There are idealistic reasons for building APIs, as outlined by Jonathan Rosenberg, but there are also commercial benefits even if you don't share that philosophy. The main one is that APIs reduce the friction involved in making your services more valuable. They make it easier for other people to add data to your services. 

They also attract more users to your services by effectively advertising them on other people's sites. As well as increasing your visibility APIs also ensure that users are more likely to try your services since the risk of lock-in is reduced. If you have at least a CRUD API potential users know that there will be a  mechanism for extracting their data if something better comes along or if your services change in ways they don't like.

The other benefit of APIs is that they lower the cost of experimentation and increase the set of potential experimenters. These experiments can serve your users in two ways. Firstly they can handle niche use cases without cluttering the user interface of the application. Secondly some of these niche use cases may turn out, after a period of refinement, to be useful for mainstream users or for attracting completely new sets of users.

Another thing we've learned the hard way is that if you don't give people an API, or you give them an insufficient API, they'll resort to screen-scraping and hacking in order to unlock the value in your product. This can create dependencies on things that were never meant to be stable or it can lead to the emergence of widely-used but unofficial APIs.

That behaviour can harm your product, your developers and your users. For example it can lead to a mismatch in expectations when some developers believe they're using an official API with established deprecation and change management policies. You also have to ensure that the APIs you create don't damage the product, for instance, by making it very easy to spam or game your system.

Providing an API, no matter how good, is just the start. The next challenge is to make something valuable enough that developers will use it in the absence of some extrinsic compulsion.

Firstly this involves making something that's easy to experiment with. So it should be easy to copy-paste a personalised URL into a browser and see a pretty-printed dataset.

Then you have to offer a path from there. The path starts with letting people play even if they don't understand your service, and extends all the way to the point where they understand your abstractions and the specifications you're using.

People should be able to go from playing in the browser to playing at a terminal with curl/wget to playing with an OAuth-enabled HTTP client to playing with your specialized wrapper libraries for your API to building businesses upon your platform.
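The first step on that path can be made concrete. Here is a minimal sketch, in Python, of the kind of personalised URL a curious developer can paste straight into a browser, plus the pretty-printed dataset they would see. The endpoint, parameter names and sample payload are all invented for illustration, not taken from any real API.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and parameters -- stand-ins for any CRUD API.
BASE = "https://api.example.com/v1/activities"

def personalised_url(user_id, api_key):
    """Build the kind of URL that works equally well in a browser or with curl."""
    return f"{BASE}?{urlencode({'userId': user_id, 'key': api_key})}"

# The same request made at a terminal:
#   curl "https://api.example.com/v1/activities?userId=ade&key=..." | python -m json.tool
sample_response = {"items": [{"id": "1", "verb": "post"}]}
print(json.dumps(sample_response, indent=2))  # the pretty-printed dataset
```

The point of the sketch is the progression it enables: the same URL works in a browser, then with curl, then inside an OAuth-enabled client, without the developer having to relearn anything at each step.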

But you can't just stop there. If you want to go from merely offering an API (typically a set of CRUD operations on your product's datasets) to building a viable platform you need to solve some difficult problems:

  • how does your platform, as opposed to your product, generate revenue or value for you?
  • how does your platform generate revenue or value for those who build upon it?
  • how do you respond to and/or incorporate the innovations that will be built upon your platform?
  • how do you nudge developers into creating more value than they capture from your users and your platform?
  • what happens to this surplus value? Is it being re-invested in the platform or siphoned off?
Even if you solve all these problems you don't have any guarantees of long-term success. The transition from API to platform to ecosystem is difficult and most APIs don't make it. However APIs can still help developers create new possibilities along the way.

Syndicated 2013-03-20 15:20:00 (Updated 2013-09-18 09:48:03) from Ade Oshineye

19 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

What do you mean 'we'?


"The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it."

The web that Anil Dash wrote about wasn't lost. It was rejected. 

Dash himself rejects it when he uses a commenting system that only allows Facebook users to comment. Daniel Tunkelang rejected it when he abandoned his blog in favour of a network that gives him higher levels of engagement. I reject it when I use Instagram to take and share photos just because it's more convenient than the alternatives. 

My initial response to Anil Dash's The Web We Lost was a mixture of amusement at his rose-tinted nostalgia, annoyance at his revisionist history and bemusement at his usage of Facebook comments. As time has gone on I've realised that Dash is not a hypocritical finger-wagging reactionary but just another sensible person making sensible decisions about the networks that will generate the most engagement for his content. Of course these sensible decisions happen to clash with his stated beliefs.

The mainstream of humanity actively rejected the web-that-was rather than accidentally let it slip away. They rejected it for much the same reasons they rejected the prospect of running their own power generator. It turns out that using a central power grid gives you a better quality service for less effort which frees you to focus on the things you really care about. Humanity rejected a vision of the web where everybody runs their own websites because it turned out that most people don't care as much about maintaining infrastructure as the geeks who formed the majority of the web's users 10 to 20 years ago.  That's why every time I see someone, for instance Clay Shirky, who has been cheerfully running a compromised blogging engine on his own domain for years I shudder at the idea that we once thought self-hosting was going to be the norm.

Felix Salmon's article was one of the first responses that acknowledged this problem. It made me realise why Dash's article reminded me so much of the distress of the privileged. That's because the 'we' who lost something is the set of middle-aged geeks who miss the way things used to be and want to roll back time to a world where only geeks could harness the power of the web. Like scribes bemoaning the advent of universal literacy the comments section of Dash's post is full of people saying how much better things were when communication tools were difficult to use and restricted to a sophisticated elite.

This makes me sad. The dream of the early web was that by removing the Gutenbourgeois as gatekeepers we would create the possibility for new voices to be heard. Wish granted. 

Unfortunately the technocratic response to these new voices was to dismiss them as an Eternal September of clueless newbies. It's as if the web was better before all these 'other' people turned up and started making choices 'we' don't like. It's as if all those developers choosing to build upon technologies with clear value propositions (build upon this platform and you'll get users and paying customers) and good DX were wrong. It's as if the billions of non-geeks were either ignorant, misled or suffering from false consciousness when they chose closed systems with great UX.

Robin Sloan has a refreshing perspective on this issue. He writes, on Medium, that we've reached a point where our taste has outpaced our skill. Our taste means we demand that an acceptable website must have lots of qualities that are beyond the skill of the average individual. By framing the issue in terms of taste and skill he shows why the pendulum is unlikely to swing back. Running a sufficiently high quality web site, as opposed to a web presence, is so hard that the amateur web looks like a wasteland of dead blogs, unmaintained websites and broken links. Again and again and again sensible people choose better UX or a larger network over a more open, decentralised or federated service. But what if this flight to quality isn't a problem?

What if all those billions of people made intelligent decisions that made sense for them? What if the people saying that the past was better than the future are the ones who are wrong? What if we reject this mythical past in favour of a new future where we try to build new things that people use because they're better solutions not because they claim superior morality?

Appeals to a bygone era where the web was more open but less diverse aren't going to inspire the construction of a better future as history teaches us that "convenience wins, hubris loses." Instead those appeals sound like the beleaguered art critic moaning that "taking a picture feels like signing up to some mad collective self-delusion that we are all artists with an eye for beauty, when the tragicomic truth is that the sheer plenitude and repetition of modern amateur photography makes beauty glib." When Dash writes that there's "an entire generation of users who don't realize how much more innovative and meaningful their experience could be" but can't point to any examples it sounds like yet another hollow claim that things were better when we were young.

Maybe things really were better when we were young but I've learned to distrust appeals to bygone golden ages. Instead I want to hear people talking about vibrant futures. I want to see people working on new ideas that may not work out but which open up new possibilities. I want to see new people making new things. I want to see people making new things with all the uncertainty and doubt that brings.

This is why I'm increasingly hopeful about efforts like IndieWebCamp and ParallelFlickr. These are people building things that are useful primarily for themselves and possibly for others. That's how we'll invent a new and better web.

Jaiku forever


Syndicated 2013-03-19 18:39:00 (Updated 2013-09-18 10:08:08) from Ade Oshineye

1 Mar 2013 (updated 18 Sep 2013 at 10:15 UTC) »

A world of social login

Who are you?

We've known for years that passwords are bad.

They're bad for users, who tend to reuse the same weak password across multiple sites, which means they're only as safe as the least secure site they use. They're bad for developers because the sign-up process loses a large portion of potential users. They also force every developer to jump through all the steps required for a world-class identity system:

  • multi-factor authentication
  • the forgot password dance
  • a salted and hashed password database
  • etc.
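Even the third item on that list is easy to get wrong. Here is a minimal sketch of salted, slow password hashing using only Python's standard library; the function names are illustrative, and a production system would reach for a vetted library rather than rolling its own.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow to resist brute-force attacks

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    salt = salt or os.urandom(16)  # a unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Every developer who accepts passwords has to get each of these details (unique salts, a slow derivation function, constant-time comparison) right, which is exactly the burden delegated authentication removes.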

Despite all this, passwords and the password anti-pattern are still prevalent.

Social login isn't a panacea but in the long run the only viable solution is delegating authentication to a small set of high quality identity providers. It has to be a small set to avoid the damage to conversion rates caused by the NASCAR problem. They will be high quality since the market is so competitive that low quality providers (where quality is a measure of the experience/value provided to users, developers and publishers) will find it hard to acquire and retain users. The market will be competitive simply because various entities have realised that social login is the backbone of any successful ecosystem so they're making the necessary investments.

This is sub-optimal but the OpenID dream (where every user runs their own server and their own OpenID endpoint) ran aground on the twin rocks of user apathy and security. Even if the dream had survived that it still didn't have a good answer to the major publishers who wanted to know what they would be getting in return for the extra effort of supporting OpenID. If you think OpenID Attribute Exchange and PAPE are solutions then you may be wearing the complicator's gloves.

The only questions left are:
  • who will be these identity providers
  • what will be their business models
  • how will we assess and choose between them
  • how will we keep them honest
  • how much control do they give users
  • do they help developers build better and more valuable services as time passes
  • will they become gatekeepers that constrain future innovations

This moves us to a world where users authorise developers rather than particular apps or web sites. As a result once you give a developer access to your information you give all of their services and apps access to your information. Technologies like OAuth2's bearer tokens mean that developers can easily pass access to a user's information back and forth between their mobile apps and their back-end systems.
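The reason bearer tokens move so freely is that they are just opaque strings: whoever presents one can act as the user. A small sketch (with illustrative names and an example token) makes the point:

```python
# Per RFC 6750, a bearer token travels in the Authorization header.
# The token string below is a made-up example, not a real credential.

def authorised_headers(access_token):
    """Headers any of the developer's systems can attach to act as the user."""
    return {"Authorization": f"Bearer {access_token}"}

# The same credential works identically from a mobile app or a back-end job,
# which is precisely why access granted to a developer reaches all their systems.
mobile_request = authorised_headers("ya29.example-token")
backend_request = authorised_headers("ya29.example-token")
assert mobile_request == backend_request
```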

In this new world developers will have to deal with multiple competing identity providers who each impose their own constraints and policies in order to protect their users. As a result developers will have to start thinking in a more sophisticated way about the way they propagate identity between their different systems, track the provenance of user data and honour the conflicting policies imposed by multiple identity providers. They'll also need more nuanced terminology. It won't be enough to think solely in such crude terms as "public" versus "private". Developers will also have to be aware of the subtle distinctions between "obscure" versus "secret" and "public" versus "publicised".

In return we get a world of social login where you bring your identity, your interests and your community to every app, service and device rather than just the ones built by identity providers with unified privacy policies.

Syndicated 2013-03-01 16:12:00 (Updated 2013-09-18 09:50:15) from Ade Oshineye

The Google+ Sharelink Endpoint: doing it right

If your site has a Google+ sharing feature that uses this URL: https://plusone.google.com/_/+1/confirm?url= then you're doing it wrong. You're using unsupported and undocumented functionality. Don't.

You should be using a sharing URL that looks like this: https://plus.google.com/share?url=

That's our official sharelink endpoint. It is supported, monitored and maintained. The URL you're using right now is an internal part of our +1 button's JavaScript API, so it's subject to change because we don't expect anyone else to be depending on it.

The documentation for the sharelink endpoint is at https://developers.google.com/+/plugins/share/#sharelink-endpoint. It even offers a set of standard graphics that you should be using for consistency with the rest of the web.
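Building the link correctly is a one-liner as long as you remember to percent-encode the shared URL. A small sketch (the page URL is an example):

```python
from urllib.parse import urlencode

SHARE_ENDPOINT = "https://plus.google.com/share"  # the supported endpoint

def share_link(url_to_share):
    # urlencode percent-encodes the page URL so it survives as a query value.
    return f"{SHARE_ENDPOINT}?{urlencode({'url': url_to_share})}"

print(share_link("http://example.com/post"))
```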

In short: don't be like the guy in the photo below.

Helvetica heretic

Syndicated 2013-02-22 13:58:00 (Updated 2013-02-22 13:58:27) from Ade Oshineye

30 Dec 2012 (updated 18 Sep 2013 at 10:15 UTC) »

The other side of creating more value than you capture



+Tim O'Reilly likes to talk about "creating more value than you capture." The obvious logical alternative to this is "capture more value than you create."

However I suspect that this is a false dichotomy.  I think we've missed something. It's possible for a vendor to create more value than they capture and yet, by building a new network, ensure that the surplus value eventually flows back to them. This ends up primarily magnifying the value of their network rather than the wider web. 

For instance a post on Tumblr is easier to reblog, if you have a Tumblr account, than to re-post elsewhere. It's easier to 'follow' a tumblr than to subscribe to the RSS/Atom feed for that same tumblr. Tumblr's bookmarklet makes sharing to your tumblr easier than cutting and pasting it elsewhere.  By building proprietary solutions that have a better user experience than the open solutions, Tumblr created a situation where sensible people act in ways that keep them inside the Tumblr network. This is more like a gated community than a walled garden precisely because the members of this network made an informed choice and are happy with the consequences of being inside it.

The hidden assumption in Tim O'Reilly's thinking was that the network that would primarily benefit from all this surplus value was the web. But it turns out that large social networks and large blogging networks and other sites that host large numbers of activity streams are the primary beneficiaries. We can see this in the examples from Tim O'Reilly's original presentation, which focus primarily on Twitter and the benefits for Twitter users. At the time making a distinction between Twitter and the wider web would have seemed nonsensical.

Today we realise that creators go where they can reap value and that's increasingly in networks (like Twitter, Tumblr, LinkedIn's Influencer platform, Google+, Medium, Instagram, etc) which can help them easily discover a community that cares deeply about their creations. We also realise that the necessary walls (which take the form of privacy controls and API restrictions) between these networks and the wider web means the networks can benefit from Reed's Law and grow their value in proportion to the number of such communities that are formed rather than just the number of users. 

Every creator and their audience forms a new community where value can be created and captured. Some of these communities may even be generative. They may be creating more value than they capture. So where is that value flowing?

If you own a network where people create more value than they capture then most of that surplus value flows to you rather than to the wider web. The challenge for the owners of these networks is to invest that surplus value back into the wider web in the hope that they'll reap even more surplus value in the future. The challenge for those who believe in the virtues of the wider web is to show these network owners how they can contribute to and benefit from the wider web.

Syndicated 2012-12-30 15:41:00 (Updated 2013-09-18 09:54:18) from Ade Oshineye

Grouped & Creative Mischief

Grouped & Creative Mischief

Reading Grouped and Creative Mischief at the same time made me realise that there are two main strands of thought about the future of advertising, marketing and media. They see the same phenomena but explain them with different stories.

Tyler Cowen would be proud of the sets of stories that these books tell as they project the world-views of their authors. The stories are too neat, too unambiguous and too polished to be true. However they're still educational. The difference in these two books is in the set of lessons they wish to teach us.

One book wants us to believe that the web is being rebuilt around people. Unfortunately its examples all come from one source, so after a while you start to suspect the author means the web is being rebuilt around Facebook. Grouped is about the benefits of homophily (being with people like you), about websites that use social proof to show their value and websites that use data from your friends to provide better services to you.

The other book wants you to believe that success in advertising (or anything else) is about your willingness to do the things that other people won't or can't. That may mean thinking thoughts others are afraid to think, seeking inspiration away from your peers and giving someone "the right answer even though you know he doesn't want it."

In Creative Mischief Dave Trott tries to teach the reader that "to be noticed, we need to do something different. To be different, we need to break the rules. To get away with breaking the rules, we need to be clever." In Trott's world "what seemed to be facts were only true if I subscribed to it being that way" and "the rules are meant to be a spring board, not a straitjacket."

His short book walks the reader through the last 4 decades of his escapades and his experiences in the advertising industry. It's fun and cheeky and insightful. It's also lacking in empirical data or research to back up his anecdotes. Adams, on the other hand, has all the data and the citations you could want.

The dichotomy between these differing viewpoints was summarised as Mad Men versus Math Men in a recent article. I think it captures something important about the gulf between researchers and storytellers. Storytellers believe it's more important to be inspiring than to be right. Most researchers disagree. Paul Adams is one of the few who see the value of the storyteller's mode of communication. At the same time I think there's a dichotomy between a homophilous and a heterophilous view of the world.

In Adams's view we want to be nudged into making the same decisions as our close friends and families. He rejects the conventional marketer's search for influencers and tastemakers who can move the masses because
"even when there are influential people and specific situations where they can wield great influence over many others, finding them is so expensive that it becomes a poor investment compared to other available strategies."
This is a world where 'social proof' is the most important thing. This is a world where "we’re only connected to people like us" so "it’s hard for ideas to pass between groups who are separated by dimensions like race, income, and education."

Hubs and influencers

In Trott's view we want to be guided into making decisions that separate us from the herd. This is a world where every appeal to the wisdom of the crowd is countered by citing the danger of pluralistic ignorance.

I much prefer Trott's world because I believe the most valuable relationships are created by the differences between us rather than our similarities. In fact I've been maintaining a Tumblr devoted to the idea of heterophily for a while. I believe heterophily is one of the key ingredients in creating groups that are creative, fulfilling and productive. Despite this I still have to acknowledge that Adams's book is the one you should read if you're interested in improving the performance of your marketing campaigns or understanding the incipient social web. However Trott's book is the one you should read if you want to dream of a better and more interesting world.

 The things you learn in Brooklyn

Syndicated 2012-03-01 16:27:00 (Updated 2012-03-01 16:29:08) from Ade Oshineye

29 Jul 2011 (updated 17 Jan 2014 at 20:11 UTC) »

Implications of being post-PC

At last night's OSJam I gave a lightning talk about the implications of being post-PC.

Those implications were:
Identity: a post-PC device needs to know its owner's identity since it can't rely on obtaining that information from a PC. At the moment all the devices are building their own identity platforms but eventually they'll start to take advantage of existing identity systems like Webfinger and PGP.

Personalisation: a post-PC device can be uniquely personalised because it's not predicated on the idea that it will be a shared device. The classic example is the experience when you buy a Kindle from Amazon. It will be preconfigured with your name and the books you've already bought. It's a small step from there to a future where the moment I bought a device in a shop it would automatically know that it belonged to me.

Kindle personalisation

Cloud: post-PC devices tend to be heavily dependent on the cloud in order to store persistent state and to perform sophisticated processing. The definition of the cloud that this implies comes from Simon Meacham. He says that the cloud means treating the union of all those servers/networks/services as if they were one machine available to be used by everybody. In this vision the physical devices we own are merely portals to and caches for this metaphorical "one machine."

Mobility: if our data and services live in this cloud then all the devices I can log in to are equivalent. At this point the ability to take a device to the place I wish to use it (for instance the Kindle screen is supremely legible in bright sunlight unlike the screens of most mobile phones) and the fact a given device is ready-to-hand will trump all other considerations. Photographers have long had a saying "the best camera is the one you have with you" and the rest of the world will soon experience something similar.

Devices: new kinds of devices become possible, if not inevitable, in a post-PC future. Technology will migrate to the most convenient form and place in your life. That means computers, cameras and music players will start becoming features of watches, jewelry and clothing. That's partly because people actually like their watches, jewelry and clothing whilst they mostly tolerate their computers. The early examples of this are Nike+ and the increasing variety of smart watches. In fact several of the people attending OSJam were wearing the precursors of the smart watches of the future. Furthermore once people stop expecting technology to be delivered to them in a beige box it opens the doors to new innovators such as Arduino enthusiasts and the open source hardware community.

From OSJam 19 - Post-PC: gadgets of the now

Personal Area Networks: The logical culmination of this is the personal cloud or personal area network. This is not merely a network of devices that are physically close to one person. Instead it is a network of devices that are geographically distributed but which can connect to each other (via a variety of networks and protocols including the internet, wifi and bluetooth) and prove that they belong to the same person. The devices are tied together by the fact that they belong to a single person and therefore they can seamlessly share each other's functionality.

These are merely my guesses about what's likely to happen as a consequence of these post-PC devices. Ultimately Alan Kay is right: the only way to predict the future is to invent it.

Syndicated 2011-07-29 22:03:00 (Updated 2014-01-17 19:48:48) from Ade Oshineye

8 May 2011 (updated 21 Dec 2013 at 13:11 UTC) »

What is Developer Experience?

Developer Experience (#devexp) is an aspirational movement that seeks to apply the techniques of User Experience (UX) professionals to the tools and services that we offer to developers. Developer Experience can be boiled down to 4 main ideas.

Lab

1. Apply UX techniques to developer-facing products.

These techniques and ideas include:
  • Personas
  • Lo-fi prototyping and sketching
  • Usability testing by watching people try to use your products without interfering so that you can get a realistic understanding of how people will actually respond. Some companies set up usability labs or run hackathons in order to get this kind of data.
  • A wider range of techniques from UX research practitioners
  • GOMS

2. Focus on the '5 minute Out Of Box experience'

The idea here is that if you provide a library, developers should be able to go from downloading to "Hello World" in 5 minutes. You should test this with a stopwatch or even a screencast to prove this is possible. Ideally they should then be able to take the code from this 5 minute experience and evolve it as their requirements grow in sophistication without having to start over. Making a library that supports the same user from unsure novice to sophisticated expert is hard but pays off in terms of increased adoption and fewer questions on the mailing list.
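One way to keep that five-minute promise honest is to make the stopwatch part of your test suite. This is only a sketch: `quickstart` stands in for whatever your library's documented "Hello World" snippet actually is.

```python
import time

def quickstart():
    # Stand-in for the library's documented "Hello World" snippet.
    return "Hello World"

start = time.monotonic()
result = quickstart()
elapsed = time.monotonic() - start

assert result == "Hello World"
assert elapsed < 300  # the five-minute budget, measured rather than assumed
```

Running a check like this in continuous integration catches the slow drift where each release quietly adds one more setup step to the out-of-box experience.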

3. Use convention over configuration

This was most clearly stated by the Ruby On Rails community but is rooted in a simple insight. When someone first starts using your API or library they have the least knowledge so that's the worst time to ask them to make lots of complicated decisions with far-reaching implications. This is why you, the developer of the library or API, should make the initial decisions for them by establishing conventions that can be overridden with configuration options.
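A minimal sketch of the idea, with a hypothetical client library: every option has a sensible default, so the newcomer makes zero decisions, while the expert can override any single convention later.

```python
class ApiClient:
    """Hypothetical client: conventions first, configuration only when needed."""

    DEFAULTS = {
        "endpoint": "https://api.example.com/v1",  # illustrative URL
        "format": "json",
        "timeout_seconds": 10,
    }

    def __init__(self, **overrides):
        unknown = set(overrides) - set(self.DEFAULTS)
        if unknown:
            raise TypeError(f"unknown options: {sorted(unknown)}")
        # Conventions apply unless explicitly overridden.
        self.config = {**self.DEFAULTS, **overrides}

# Day one: no configuration required.
client = ApiClient()
# Later: override one convention without touching the rest.
tuned = ApiClient(timeout_seconds=60)
```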

4. Try to "design away" common problems rather than documenting workarounds

For instance if your users struggle with getting OAuth working then create abstractions that handle it for them rather than documenting the 6 most common problems or writing up the 'simple 12 step process' for getting it working.

This is inspired by Don Norman's work on perceived affordances, which says that things should be designed in ways that immediately suggest how you should use them. If you walk up to a door it should be obvious, without reading any signs, whether you should pull, push or slide the door to open it. If the door needs to be documented with a sign then it was badly designed.

This theory of affordances applies just as well to developer tools and APIs. They should be designed to have affordances that encourage correct usage rather than documented to make up for deficiencies in usability.
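As a sketch of what "designing away" looks like in code: instead of documenting the token-refresh dance, a library can perform it inside the one call developers actually make. `AuthorisedSession` and its collaborators are hypothetical names, not any real library's API.

```python
class AuthorisedSession:
    """Hypothetical wrapper that hides token refresh behind a plain get()."""

    def __init__(self, token_store):
        self._tokens = token_store

    def get(self, url):
        token = self._tokens.access_token()
        if self._tokens.expired():
            # The step users once had to read troubleshooting docs for,
            # now handled automatically.
            token = self._tokens.refresh()
        return self._fetch(url, token)

    def _fetch(self, url, token):
        # Placeholder for a real HTTP call.
        return f"GET {url} with Bearer {token}"
```

The affordance is the single `get()` method: there is no wrong order of operations left for the caller to discover.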

Push Pull


The phrase (developer experience) and the hashtag (#devexp) come from Michael Mahemoff but it's not a new idea. Its roots lie in earlier ideas and practices. It's a set of ideas we would like to see more people, projects and companies adopt. We don't claim to be paragons and we're looking to learn from other people's examples.

That's why we've set up http://developerexperience.org/. We hope to use it to point to examples of great developer experiences as well as to aggregate relevant tweets using the #devexp hashtag.

Please join us. You can start by using the tag "devexp" whenever and wherever you write about developer experience. Over time this will help us build up a body of knowledge that will do for developers what UX has done for users.


Syndicated 2011-05-08 12:55:00 (Updated 2013-12-21 12:59:10) from Ade Oshineye

29 Mar 2011 (updated 18 Sep 2013 at 10:15 UTC) »

The irony and the tragedy of OAuth scopes

Overly broad permissions

I was taking a look at my PeerIndex profile when I got the above screen. I was surprised since the button I clicked said "sign in with Twitter" and didn't mention anything about updating my data. I did a little digging and it seems that I'm not the only one who has this reaction.

In my case it was especially ironic since one of the things that I've been trying to do with the Buzz APIs is encourage developers to ask for the minimum set of permissions that they need.

The idea is that an app which is just going to use its access to your account to gather metrics shouldn't also be able to post messages on your behalf. That's why we expose 3 different scopes. These are read-only, read-write and a special scope for photos because those tend to be especially sensitive.
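The minimal-scope request is easy to sketch. The scope URIs below are modelled on the three scopes described above but are assumptions rather than the documented values, and the authorisation endpoint is a placeholder:

```python
from urllib.parse import urlencode

# Illustrative scope URIs -- assumptions, not the documented strings.
SCOPES = {
    "read": "https://www.googleapis.com/auth/buzz.readonly",
    "write": "https://www.googleapis.com/auth/buzz",
    "photos": "https://www.googleapis.com/auth/photos",
}

def auth_url(client_id, needs):
    """Build an authorisation URL asking only for what the app actually needs."""
    scope = " ".join(SCOPES[n] for n in needs)
    return "https://example.com/oauth/authorize?" + urlencode(
        {"client_id": client_id, "scope": scope})

# A metrics app should request only the read-only scope:
print(auth_url("metrics-app", ["read"]))
```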

However developers will often just ask for the maximum set of scopes in order to give themselves the freedom to implement new features later on without having to ask the user to re-authorise them. They do this because it's easier for them and because they believe it results in a better user experience since the user isn't constantly being asked to give permission.

Unfortunately what many developers don't notice are the users who get to the authorisation screen and then close the tab because they don't understand why your app needs write-access to their account. The point is that asking for overly broad permissions, just like the password anti-pattern, repels users.

In the case of the Twitter API the problem seems to be that any HTTP POST API call is considered a write and so services like PeerIndex end up needing to ask for read-write access even though they're well-behaved.

The tragedy is that all parties are trying to create the best possible user and developer experience (by avoiding complicating the user interfaces with lots of options and removing the need to constantly ask the user for new permissions) but the end result is bad for all concerned.



Syndicated 2011-03-29 11:34:00 (Updated 2013-09-18 10:06:49) from Ade Oshineye
