That means it's not a curated gallery. Instead we recommend that people who just want to see a set of exemplary PWAs should go to the pwa.rocks PWA directory.
If there's already another PWA directory, why are we doing this and what do we hope to achieve?
Our primary goal is to learn in the open and share those lessons. Some of the things we hope to learn include:
what makes people use a PWA offline?
what constitutes a meaningful offline experience?
what percentage of our userbase actually uses it offline?
which PWA technologies help with acquisition, engagement, retention and re-engagement of users?
how do we build a good cross-platform and cross-browser experience?
what signals in analytics and Search Console indicate that we are on the right path?
what are the things we believe or assume that are wrong?
We hope this content-centric PWA (lots of pages with URLs) will reach 1,000 thirty-day active users (30DAU) over the next few months. That level of regular usage should start to surface some of the challenges that big web apps face.
However this isn't a big web app so our stack is relatively simple.
The one clever technical feature is that we use Lighthouse As A Service. That means that every time someone submits a manifest (all we require is that the site provide a web manifest over HTTPS) we run Lighthouse inside a headless Chromium instance to collect metrics about the quality of the prospective PWA. If you’re already a Lighthouse user then you may spot that our scores sometimes differ from those you see in Lighthouse. It’s an open issue and we’re working on it.
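For context, the only submission requirement mentioned above is a web app manifest served over HTTPS. A minimal manifest looks something like this (all the values below are placeholders, not Gulliver's actual requirements):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#3367d6",
  "background_color": "#ffffff",
  "icons": [
    {
      "src": "/icons/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}
```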
Lighthouse is a big part of this web app’s value so it's going to be the subject of the first in a series of articles sharing the lessons we are learning in building Gulliver. Until then you can get in touch with us via Github if you have questions or feature requests.
As a publisher you have two choices. You can either bring the people to your content or you can find ways for your content to flow towards the people. Historically that has meant using syndication technologies like RSS but in these days of #PeakRSS that's no longer sufficient. The audience are no longer just sitting in front of their aggregators. They're sitting in front of activity streams watching reshares and hitting reload on their favourite websites every morning.
To maximise the reach of your content you have to reduce the accidental friction (see Brooks on the difference between essential and accidental properties) involved in sharing to those activity streams. That means making it easy to share (hence the sharing buttons all over the web), making it clear why they should share (hence the various experiments encouraging you to share or tweet specific quotes from an article) and finally making the shareable unit something that can travel easily around the web.
This is where embeds come in. An embed is a card that contains a chunk of a site's functionality that can be ripped out of that site and reused elsewhere.
But isn't this what links are supposed to do? Unfortunately inline links have various problems:
they take the user away from your site
they break silently
the content at the destination has to be consumed without the context provided by your site
the destination can't provide guarantees about the experience your audience will receive and thus you can't reliably set expectations for anyone who clicks on the link
Embeds don't have these problems. They:
keep the user on your site
break in ways that are immediately visible
are consumed in the context provided by your site
can be constrained to a tiny card with a predictable UX within your site
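In practice the simplest embed is just an iframe pointing at a provider's embed endpoint. A sketch (the URL, dimensions and endpoint are hypothetical, not any particular provider's API):

```html
<!-- Hypothetical embed: a small card served from the provider's /embed/
     endpoint. The fixed size and sandbox keep the UX within your site
     predictable, and a broken embed is immediately visible on the page. -->
<iframe src="https://provider.example/embed/12345"
        width="500" height="180"
        sandbox="allow-scripts allow-same-origin"
        title="Embedded card from provider.example">
</iframe>
```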
Embeds also represent a chance for publishers to change social media from something where your campaigns/integrations send traffic to someone else's site to something that delivers measurable value (traffic and revenue) to you. The most valuable embeds:
convert your social efforts into traffic on your site
help you keep people engaged with your site
help you increase the reach of your content
help you use other people's content to enrich your site
Ultimately publishers will realise that embeds are just the first generation of portable content units and that they have their own limits. There will be other generations and they can offer different trade-offs.
I find this intersection of art and machine learning interesting because at scale it leads to the discovery of new tools and new perspectives on something that we consider to be uniquely human. For example the spatial visualisation used in the "Machine learning & art" talk at I/O led me to t-SNE as a mechanism for building two-dimensional maps of high-dimensional spaces.
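As a sketch of how t-SNE collapses a high-dimensional space into a two-dimensional map (using scikit-learn; the random data here is a stand-in for real image feature vectors):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for high-dimensional feature vectors,
# e.g. 200 artworks described by 64 features each.
rng = np.random.RandomState(0)
features = rng.rand(200, 64)

# t-SNE projects the 64-dimensional points down to a 2-D map,
# trying to keep points that are close in the original space
# close together in the plane.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

print(coords.shape)  # -> (200, 2): one (x, y) map position per artwork
```

Each row of `coords` can then be plotted directly, giving the kind of spatial map of an information space described above.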
This raises the possibility of building pirate maps of information spaces or hypertexts and connecting that to Vannevar Bush's ideas about stigmergy in the Memex. Imagine being able to create and share your own trail through the space of all art, or to reinvent the idea of the traditional slideshow of the family's holiday photos.
Today apps like Prisma merely help us make alternative versions of existing photos. At the same time apps like The Roll and Google Photos help us identify our best photos. In the future they might help photographers to create photos from scratch and tell new kinds of hypertextual stories.
People are seeing that native app platforms have some feature and are then asking for the exact same feature for the web. Instead they should be asking about the job to be done and the benefits users or developers see from a given feature. For example: app stores.
Instead of 'cargo culting' the app stores we should be asking what web-centric solutions to the problem would look like. For me that means lots of competing and opinionated PWA directories rather than one central PWA Store or even a popular search engine.
All curation grows until it requires search. All search grows until it requires curation.
The Social Stack: what’s in and what’s out at the various layers
I wrote this back in late 2010 from August to November. This was around the time of the first and only OpenWebFoo so I was trying to think through the stack of specifications, protocols and standards that would have let us build a federated social web.
I've ensured that all the links still work as of June 2016 but apart from that this represents what I believed all those years ago. I'm posting it here because I want to remember the past. At the same time it's a marker of the end of that particular phase of my life.
Guide to implementing App Linking on Android 6.0 Marshmallow
Android Marshmallow has a feature that can make life better for developers who feel that their app experience is better than their web experience. It's called App Linking and it ensures that your app always handles links for your domain without the disambiguation dialog you would normally see. The feature is called App Linking but the connection between the app and the web site is called an App Link. And, in case you're wondering, it's unrelated to Facebook's AppLinks.org initiative. This is a short guide to implementing and testing the feature. Let's start.
Go through your manifest and identify the domains (and subdomains) your app claims to be able to support.
Add an assetlinks.json file pointing to your app (or apps) to each of these domains or subdomains. If there's a domain or subdomain that you don't control then the verification process will fail. You can either remove that host from your manifest or you can remove the CATEGORY_BROWSABLE category from the manifest as this will have the same effect: your app won't intercept requests for other people's domains or subdomains.
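For reference, an assetlinks.json file granting a single app the right to handle all URLs on a host looks like this (the package name and certificate fingerprint below are placeholders for your own values):

```json
[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.myapp",
      "sha256_cert_fingerprints": [
        "14:6D:E9:83:C5:73:06:50:D8:EE:B9:95:2F:34:FC:64:16:A0:83:42:E6:1D:BE:A8:8A:04:96:B2:3F:CF:44:E5"
      ]
    }
  }
]
```

The fingerprint is the SHA256 of the certificate you sign the app with; listing both your release and debug certificates lets verification pass for both builds.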
Make sure you serve the assetlinks.json file over HTTPS on every domain or subdomain that you support. Your entire site doesn't have to support HTTPS. Serving just the assetlinks.json file over HTTPS will suffice.
Make sure the assetlinks.json file is served with content-type “application/json” since it won’t work with any other content type.
Add an autoVerify attribute to the intent filters in your manifest for each of these domains. Be aware that the verifier doesn't follow redirects, so it won't work if you try to shortcut this by having one canonical file that all the other URLs redirect towards. You can find more details about the install-time verification process by reading this excellent but now outdated guide from Christopher Orr.

Don't forget that all of these files must match exactly, so if you update one of them you must update all of them. Fortunately the SHA256 fingerprint in assetlinks.json is derived from your app's signing certificates, so once you've added your release and debug keys you should never need to change it.

Between this guide and the official documentation you now know everything you need to make App Linking work on Android Marshmallow. If you still have any questions then ask on Stack Overflow using the tag android-app-linking.
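For reference, an intent filter with install-time verification enabled looks something like this (the host is a placeholder for one of your own domains):

```xml
<!-- autoVerify="true" asks the system to verify this host against its
     assetlinks.json file when the app is installed. -->
<intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="https" android:host="example.com" />
</intent-filter>
```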
Nowadays (thanks to a long lunch with Paul Downey, Jeni Tennison, et al) I've begun thinking of the web as a ship of Theseus where, despite replacing every single part of the stack, what's left is still recognisably the web.
This made me realise that we are surrounded by unexamined and ossified metaphors that are in danger of becoming thought-terminating cliches. For example:

- open web versus (presumably) closed web
- the web browser is the web platform is the web
- the web as a platform
- web apps
- web versus native
One of the reasons I present at conferences like OpenTech is because I want to have my mind changed and my complacent metaphors jolted. This year my presentation came out of asking myself "what do I most like about the web?" My initial list was:

- its universality (view source meant everybody everywhere could cut and paste their way to something that sort of worked)
- its omnivorous inclusiveness (it tended to absorb neighbouring or competing technologies like WAIS, Gopher, NNTP and FTP)
- its hypertextuality, intertwingularity and document orientation, because they open the door to new forms of argumentation as well as letting us create living documents
- its peculiar notion of addressability without guarantees about the nature of the resources at the end of the links
That answer is driven by two realisations. Firstly it turns out that browsers are not the only user agent. They're not even the only kind of user agent. Secondly now that deeplinking is becoming mere linking it is clearer than ever that apps are just domain-specific user-agents. As a consequence it becomes clear that native versus web is merely a debate about whether we should use 1 universal user-agent or N domain-specific user-agents.
The web has changed, is changing and will keep on changing. The convoy is bigger than I originally thought because there are lots of overlooked user-agents out there. These apps aren't part of the web any more than browsers are but their addressable/linkable content most definitely is part of the web. Just because that content has a preferred user-agent doesn't change this.
Awkward questions for those boarding the microservices bandwagon
Why isn't this a library? What heuristics do you use to decide when to build (or extract) a service versus building (or extracting) a library?
How do you plan to deploy your microservices? What is your deployable unit? Will you be deploying each microservice in isolation or deploying the set of microservices needed to implement some business functionality? Are you capable of deploying different instances (where an instance may represent multiple processes on multiple machines) of the same microservice with different configurations?
Is it acceptable for another team to take your code and spin up another instance of your microservice? Can team A use team B's microservice or are they only used within rather than between teams? Do you have consumer contracts for your microservices or is it the consumer's responsibility to keep up with the changes to your API?
Is each microservice a snowflake or are there common conventions? How are these conventions enforced? How are these conventions documented? What's involved in supporting these conventions? Are there common libraries that help with supporting these conventions?
How do you plan to monitor your microservices? How do you plan to trace the interactions between different microservices in a production environment?
What constitutes a production-ready microservice in your environment? What does the smallest possible deployable microservice look like in your environment?
The start of the year is a good time to be thinking about the end of the year and the kind of world I would like to see. Usually this leads to resolutions and predictions. Unfortunately I find most predictions worthless since the pundits seldom go back to check on their previous predictions. The other problem is that people start out making predictions and then the articles turn into wishlists. That's why this year I'm just going to write a wishlist.
Wikipedia starting to use identity technology to improve the user experience. For example if I donate or become a member then I'd like to stop seeing obnoxious adverts (and they really are adverts) asking for money. The Guardian's membership programme is a good model that Wikipedia should adopt.
a viable successor to the Leica M9. The Leica M (Typ 240) just isn't a big enough improvement.
a viable replacement for Aperture. I have zero faith in the upcoming Apple Photos app and there isn't enough official support for migrating from Aperture to Lightroom.
a viable replacement for the old Mac Pro. The new Mac Pro abandons all of the strengths of the old Mac Pro.
less hype/advocacy for microservices and more documentation/descriptions of techniques that work with collections of small services. You'll be able to tell if you're seeing hype by the number of awkward questions that they raise.
Firstly these apps are special because they can use every sensor and transmitter on your mobile phone. That means they will also be the first to get access to new sensors and transmitters as they are added to these devices. This gives them more power than any other apps that have ever existed.
Secondly these apps are special because they're installed in a device that the user will carry around all day every day. That means that they will see more usage per user since there will be so many more opportunities to use them. It may even be time to start measuring average usage per user (AUPU) as a more quantitative version of Larry Page's 'toothbrush test.'
Finally these apps are special because they're on devices that will eventually end up in the hands of every post-pubescent person on the planet. That means they will eventually end up with more users than anything we've seen so far.