This is the second issue of The Interoperability, a newsletter written by Alek Tarkowski on the margins of our new Shared Digital Europe project. We are exploring what we are provisionally calling the Interoperable Public Civic Ecosystem – and thinking a lot about interoperability. Thanks for subscribing – we’d be grateful if you recommended this newsletter to your friends; they can sign up here.
Since the last newsletter, we finalised the first research sprint, which gave us a broad view of the topic: the value proposition of interoperability and proposed ways that it can become a solution to the problems we face in the digital space. We don’t yet have the full map of interoperability narratives, but we have a good start. And I’m still perplexed by how this concept suddenly transformed from a bland technical rule into a favourite topic of digital activists and regulators.
This is for us the most striking outcome of our desk research. The value of interoperability is framed almost solely in terms of two market-related concepts: innovation and competition (the two are related). Here’s a quote from Viktor Mayer-Schönberger (from an MIT Tech Review interview he gave in 2019). He is both an early proponent of interoperability and, recently, an “interoperability celebrity”.
“Those with access to the most data are going to be the most innovative, and because of feedback loops, they are becoming bigger and bigger, undermining competitiveness and innovation. So if we force those that have very large amounts of data to share parts of that data with others, we can reintroduce competitiveness and spread innovation”.
Interoperability is, simply, a term with a long tradition of use in economic writing and in the context of market regulation. It has an even longer history as a purely technical term. But only recently did the concept gain traction among activists – EFF might have been one of the first organisations to look at the issue in detail, in their series on adversarial interoperability. And the argument about the social, non-economic value of interoperability still needs to be made. Which, obviously, is a gap that we plan to fill with our report.
Our map of interoperability frameworks and narratives is not yet ready. But we have the equivalent of a hand-drawn sketch, made on some oily piece of parchment, folded in four, which the adventurers receive before embarking on their great exploration.
The starting point is obvious, if a bit obscure: interoperability starts with the internet, in its first primeval, proto-form. In 1963, J.C.R. Licklider wrote a memo about the “Intergalactic Computer Network”, one of many proposals with the prefix inter- that emerged at that time. Intergalactic is such a great term, and I wish it had caught on. Licklider admitted that he deliberately wrote about intergalactic networks at a time when networked computers barely functioned. He did it because he wanted to get people excited. (A storytelling lesson from which we can learn much for our advocacy efforts.)
And then, several years later, we get the TCP/IP protocol and all the interoperable information flows that one can imagine, and then the Internet. William Gibson once described it as the biggest single thing in the world – made possible because of interoperability, a principle that made many networks into one.
And then the networks went on like that, in their interoperable way, for some time. Until a fork happened.
And this is the most basic map of the online space, as seen through the lenses of interoperability. There is the space that underwent platformisation – “the platformnet”. And then there is the rest of the internet. Interestingly enough, both of these spaces are to some degree interoperable. Even the greatest monopolists are, as proven by initiatives like the Data Transfer Project.
But the general attitude towards interoperability in these two spaces is very different. On the internet, interoperability is a positive principle in need of being preserved and amplified. On the platformnet, it’s regarded with suspicion, as it is both a useful tool and a weapon that might break the monopolistic hold of corporations on this space.
This split between two distinct ecosystems is highlighted in a passage from the European Commission’s European strategy for data that encapsulates what is emerging as a European perspective on the technology ecosystem (and which I like a lot – which is not something I often say about a quote from an EU policy document):
“Currently, a small number of Big Tech firms hold a large part of the world’s data. This could reduce the incentives for data-driven businesses to emerge, grow and innovate in the EU today, but numerous opportunities lie ahead. A large part of the data of the future will come from industrial and professional applications, areas of public interest or internet-of-things applications in everyday life, areas where the EU is strong. Opportunities will also arise from technological change, with new perspectives for European business in areas such as cloud at the edge, from digital solutions for safety critical applications, and also from quantum computing. These trends indicate that the winners of today will not necessarily be the winners of tomorrow”.
There’s the platformnet controlled by the (American and maybe also Chinese) Big Tech, and then there’s the whole rest, “where the EU is strong”. And a sense that this landscape is dynamic, with new spaces and flows emerging as we watch. Spaces and flows in which interoperability can be secured at the start, and not fought over at a moment when it is almost too late.
Policy talk about interoperability is actually two different conversations. The term stays the same, but the vision, narratives and goals are very different.
// By the way: reading about early internet protocols, I rediscovered Alexander Galloway’s “Protocol”, a book that I really enjoyed in 2005. Galloway writes that control society started in the 70s, with the launch of the TCP/IP protocol: “… a control society, based upon computers as key technologies, distributed network as a pervasive diagram and protocol as the dominant managerial mode”. Control – today we’d probably say “surveillance” – did not happen as some perversion of the interoperable network. It was the flip side, or rather the dark side, of interoperability from the start.
Let’s make food interoperable
Serendipity struck this weekend, and I found in the New Yorker a perfect practical illustration of interoperability. We badly need examples like this to grasp what the real value of interoperability is and why we need it.
“Our Ghost-Kitchen Future” is a great long read about the changing ecosystem of restaurants, fast food joints, food production and delivery. It is a story of platform companies building a new ecosystem where, instead of restaurants that eaters visit, there are mainly “ghost kitchens” cooking food that is delivered to homes. A model that obviously gained a huge boost in recent months. It’s also a model that’s heavily algorithm-driven, with restaurants designing menus, dishes and even dish and restaurant names based on Big Data. A random selection of dishes and a wacky name simply means that someone has algorithmically optimised their offer. In such an ecosystem, a single “real kitchen” can prepare dishes for multiple virtual restaurants, each of them no more than a menu and a brand name.
There’s a classical platformisation story to be told here: the platform intermediary asymmetrically benefits from consumer data, allowing it not just to sell business intelligence on the two-sided market, but giving it a privileged position if it decides to open up its own kitchens. Which the platforms obviously do – it’s the 2020s, and they’ve learned their lesson. This is the classical “competition and innovation” narrative around interoperability.
But this is also a story of local food joints being replaced by bland, data-driven, virtual establishments. Of the magic of eating out transformed into its own simulacrum. It’s the difference between a very humanistic experience of eating and a dehumanised system where food means sustenance and corporate profit.
Link of the week
Paul shared with our team a recording of a talk about the “Single Market 2.0” by Andrea Renda, a policy researcher from CEPS and the College of Europe. Renda aims to rethink the Digital Single Market framework for current times and does a very good job. His proposed framework has been useful for us as we try to define the European interoperable ecosystem.
Graphics in this issue are by László Moholy-Nagy and taken from the first issue of a Czech pre-war avant-garde magazine, “Telehor” (available on Monoskop).