James Bridle (JB) artist, writer and technologist in conversation with Michaela Friedberg (MF) and Olaf Grawert (OG)
Agency in the Age of Decentralization
The analogue world that surrounds us as well as the digital spheres in which we move on a daily basis are becoming increasingly complex. Despite possessing ever more data, it is becoming increasingly difficult for us to understand the mechanisms that track and analyze us and predict our behavior. Nevertheless, many people hold to the idea that more data, technology, and digitalization are enough to create a better world. In his work, James Bridle exposes simple assumptions, resentments, and unquestioned conventions that have become deeply ingrained in our everyday behavior and thus in the technologies we use. Michaela Friedberg and Olaf Grawert spoke with the writer, artist, and curator about spaces of agency in the “New Dark Age.”
OG: In late 2018, you curated an exhibition at Nome Gallery in Berlin, titled Agency. What was the idea behind it?
JB: Agency presents artists who are actively and specifically engaging with the situation they are describing. Their works are not works of documentation, but works that try to reconfigure an existing situation in some meaningful way. In many cases, this means telling stories rather than documenting something directly, because, more often than not, the stories that are handed down to us are written by the same people who created these situations, technologies, and devices. What some of the artists in this show have done is to rewrite the narratives that we have been told. They are re-telling stories through the same objects, processes, and systems but with radically different outcomes.
OG: What is your understanding of agency at a moment when people increasingly seem to feel at the mercy of new concentrations of power, especially within the technological sphere and the web?
JB: For me it is essential to keep in mind that agency is something different from mastery. There is a very common belief that as things become more complex you need to gain an understanding at every possible level in order to be able to act. I increasingly think that this is no longer possible. No one can have this kind of totalizing overview of the system. Even the people or entities who one thinks are in control have very little agency overall. So, the question one should ask oneself is: What is required to act under any particular circumstances? That is what I think of as agency. It is about figuring out one or two actions or interventions that can be performed and recognizing that they are taking place in a much larger system. It is not about striving for mastery but looking for smaller entry points, smaller things that can be twisted and that might have wider systemic effects.
MF: Despite the invisibility of the systems you address, in your own artistic work you often draw a connection to a tangible material output. In doing so, you imply that no immaterial process can ever be absolved from producing a physical footprint. In a time when everyone thought brick-and-mortar would become obsolete, tech companies are again investing in physical assets at the scale of infrastructure and urban planning. How do you see your work fitting into this binary of material and immaterial, virtual and real?
JB: I think we all suffer from this bias towards the physical because we are still using our fairly basic mammalian brains to understand the world, so things that aren’t at our fingertips are harder to understand. One strategy for producing critical techno-political analysis is to point at the concrete aspects of the internet: to look at data centers as spatial entities, trace the cables under the ground, or look at sites of extraction where things are mined to make iPhones. This strategy makes the virtual a little more tangible and easier to understand. It makes it easier to pinpoint the politics of technology that are so often physically difficult to encounter.
This is needed to prepare the ground for critiquing those same technologies as we encounter them on a daily basis in cities and architecture. If we don’t do this, these architectural and urban schemes might escape criticism in the same way that the application of so many technologies has escaped criticism in the past, neglected because they were seen as politically neutral. We often think of the internet as some sort of magical faraway place where stuff just happens and then gets beamed down to us, but in many ways the internet is quite physical. There are big buildings at the edge of cities filled with computers generating a lot of heat and consuming huge amounts of electricity. There are countless cables that run along the ocean floor and connect everything together. If you look at a map of where the internet’s fiber optic cables run around the world, you will see that they trace out the routes of former empires. All the fiber optic cables from Africa still route back to the former colonial powers. A lot of the ones from South America still go back to Spain. This is because imperialism did not end with decolonization; it moved on to the level of global infrastructure. If you can see the infrastructures behind the technologies, you can work out where the power still lies. Once people confront the physical dimension of technology they can start to get down to the roots of the problems. They can move from a mere critique of technology towards a deeper understanding of the actual distribution of power that underlies the flows of capital.
MF: We are constantly hearing the assumption that technology is neutral. Yet such arguments make it easier for technology to penetrate all aspects of our everyday lives. What exactly are the implications of this assumption?
JB: I was at a talk a few years ago where Eric Schmidt, the former CEO of Google, spoke about this idea of technology as neutral, as a kind of rising tide lifting things that are inherently good. In the course of that speech he talked about the genocide in Rwanda, and said that if there had been mobile phones in Rwanda in the early 1990s, the genocide would not have happened. This is emblematic of the belief that the application of more technology—particularly of technologies of vision and surveillance, but also the gathering of situation-specific information in the form of data—is intrinsically positive. This is fundamentally and dangerously untrue. There were plenty of eyes on Rwanda at that time. There was satellite imagery; there were reports and radio recordings. There was an immense amount of media attention. The problem was not that we did not know what was going on. The problem was a lack of political will, and of ability, to act, for many reasons.
But it is also dangerously wrong because the technology Schmidt described in such a positive light, the mobile phone, has been shown to make things worse in many situations. For example, the violence that flared up in Kenya during the last elections was heavily driven by information spreading over social networks, so one could argue that mobile phones also brought huge amounts of violence. Facebook recently admitted it played a role in the genocide in Myanmar. That is not to say the violence was Facebook’s fault, or mobile phones’ fault, but the idea that the application of technology alone is somehow neutral or good is deeply wrong and incredibly dangerous to promote. Still, it’s something that is deeply embedded in the culture of Silicon Valley, which is why it needs to be constantly challenged.
OG: As architects we have a relatively limited skill set for addressing many of today’s technical developments. For example, when it comes to the implementation of “smart” technologies at the scale of architecture and infrastructure, we can approach programmers, computer scientists, and developers and have productive conversations with them. But this does not change the fact that we ourselves lack the necessary technical expertise. As someone who studied computer science and therefore has insight into both worlds, what would you say is the agency of the people working behind the scenes of these major tech companies?
JB: So often we are concerned with our own lack of agency, even when we become politically aware of these technologies and start realizing that there are people behind them with their own biases. It is fairly easy to imagine those people as being more powerful than us, but that is very rarely the case. The people who build these things don’t necessarily have more agency than anyone else, and if they do they very rarely recognize it because they lack the overview, just as everyone else does. The transdisciplinary perspective is still incredibly rare. The people who develop a technology rarely give much thought to its political or social outcomes, which is why I say that expertise or mastery of technology itself is not the simple answer to fixing this. Educating people to be better at technology does not necessarily produce some kind of higher consciousness. In general, people are mostly defined by the context in which they are working, and the context, in my opinion, is not that you are a programmer but that you work for a large company with shareholders, which is realistically the driving mindset behind your work. So, when we talk about domains of agency: yes, there is educational work to be done at all different levels, but it is the broader framework that sets the direction in ethical matters, and right now the direction is mostly set by the returns anticipated by companies and their investors.
MF: What roles can people with specialized technical knowledge take outside of the corporate context? If we are aiming for a more integrated, transdisciplinary perspective of the whole system, then could you imagine specialists, including yourself, taking on a more conventional role in a governing body? As a politician?
JB: I think that I have a political role already, as does everybody else. If you’re asking if I would put myself up for election under a party system, that seems fairly unlikely, but there are many other ways to actively shape society. If we understand governance as the assumption of responsibility, then this should be the goal.
MF: In the case of the genocide in Rwanda, you framed the problem as a lack of political will to intervene, which suggests the need for alternative organizations to deal adequately with such catastrophic developments. If conventional political structures do not take on the task, are there other methods, such as technological management, which might work better?
JB: Representative democracy is not the worst form of government. We need people to tackle the tasks at hand, and it is all the better if truly competent people are chosen to do so. We obviously don’t live within a perfect version of representative democracy right now, but there is growing awareness of the importance of infrastructure, which forms the basis for many of our activities. And this is something that I think is as relevant to architecture and its physical questions as it is to digital ones. When we build something, we rarely think about the shape of the infrastructure on which it’s constructed, and how that affects the outcome. I am thinking of things like open-source software. For example, when it comes to video conferencing tools most people still use Skype, which is based on a deeply centralized system. Every Skype call travels through a data center owned by Microsoft, so you are compelled to trust that company. But Skype is not the only video conference tool. There are many other vendors in this area, so you can hold the same conversation with other software just as well. By using those alternative tools you radically change the shape of the network, expanding and strengthening it, making it increasingly attractive to other users.
There are fundamentally different ways we can design and build things, which are also based on different profit models and methods. I don’t mean that we should completely abandon centrally organized companies or representative democracy altogether, but that we can produce a shift by thinking differently about how things are shaped, how they communicate across networks, and how they produce very different effects on a much wider scale just by the way they are constructed. This is a possible design challenge, and an architectural challenge, but it is not just thinking about the surface aesthetic of these things, it’s thinking about how they are systematically constructed.
MF: For all the empowering aspects that decentralization promises, there is also the loss of legal entities that can be held accountable for their actions. In other words, when you decentralize power, you also decentralize responsibility. Are you arguing for stronger decentralization?
JB: There is a lot of discussion about the over-centralization of power on the web today. At the same time, we have gone through an incredible phase of decentralization of media and communication technologies over the last 20 to 50 years, to the extent that now we don’t know whom to trust anymore. Politicians, the media—these traditional gatekeepers of truth within society—I think it is absolutely essential that these existing forms of power are questioned and that we build other ways of discussing the truth. That applies in a broader sense to the idea of centralization and decentralization. Decentralization is democratizing to the point where it meets education and engagement. The agency happens in the middle. It is not enough to radically decentralize without giving people the means to form an informed opinion. We have gone through all these decentralizing processes but we have been disempowered at the same time because we don’t understand the technologies that we are using. Decentralization must therefore be accompanied by educational measures relating to the tools we use on a daily basis. This concerns software, for example, but can also be applied to architecture and urban planning. Currently these tools are designed to make everything invisible, and therefore not subject to criticism. All these magic, advanced, complex, extraordinary tools—whether chat apps or maps—are hidden behind little icons on our smartphones. We don’t think about them. There are ways to build these tools so that they are also educational in their use, so that you don’t just access them blindly but learn how they work and how to use them thoughtfully.
OG: The Berlin-based group terra0 Research has tried to develop a system of decentralized ownership based on blockchain technology and designed for a forest so it can own itself. There is also a historical case in Germany: the Cologne Cathedral. The cathedral is owned by the Hohe Domkirche zu Köln, meaning it owns itself. It is represented by the Domkapitel (the cathedral chapter), a board of humans that acts in its interest, albeit only for its maintenance, and with no profit interests. What can we learn from such examples?
JB: I know that there is a river in New Zealand which has obtained legal personhood. In the corporate sector, this possibility has existed for a long time. I don’t know enough about the Cologne Cathedral to say for sure, but what I understand is that it’s not about giving agency to the cathedral itself, but about creating a situation where it shares agency with humans. It is the board, the cathedral chapter, which acts in its interest. When we talk about a self-owning forest I get a bit nervous, because if a forest is self-owning, then in classical economic terms it is put in competition with other forests, or even with people. The incentive to cut it down could then become even greater. What is interesting, on the other hand, is the shared responsibility for a space, in which humans and the forest form a kind of community with common interests and care for one another. I think that the terra0 project is good, but mostly as a critique of the idea that you can somehow magically separate anything from humanity, or society, or politics through the application of technology.
It’s the so-called Oracle Problem: all these models are ultimately based on “smart contracts,” which use external data sources—the “oracle”—to make automated decisions and forecasts. This dependency is problematic because the oracle’s data determines how the contract responds to future external inputs (the if-then conditions). If you rely on a central oracle that outputs incorrect data, the smart contract is compromised, and any benefits you would have gained from decentralized decision-making are lost. What happens at the edge of any one of these technological systems? How does it interface with the world? How do you get data from the “real” world into this supposedly magical, perfect system? I may be proved wrong about this in the long term, but I think the Oracle Problem is essentially insurmountable. That problem is never going to go away. The only response, for me, is the notion of mutual responsibilities between parties. This turns the question from how we lay down some kind of perfect network of control into a question of stewardship: How do we manage things in the best interests of everyone, in a present where the means will define the ends?
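[Editors’ note: the dependency Bridle describes can be sketched in a few lines of Python. This is a hypothetical example, not any real smart-contract platform or API; the insurance scenario, class names, and threshold are all illustrative. The point it shows is that the contract’s if-then logic is perfectly deterministic, yet its outcome is only as trustworthy as the oracle feeding it.]

```python
# A toy illustration of the Oracle Problem.
# All names here are hypothetical; this is not a real smart-contract API.

class Oracle:
    """An external data source that a contract has no choice but to trust."""
    def __init__(self, readings):
        self.readings = readings  # e.g. rainfall measurements supplied from outside

    def latest(self):
        return self.readings[-1]


class InsuranceContract:
    """A deterministic if-then rule: pay out if the oracle reports drought."""
    def __init__(self, oracle, drought_threshold_mm=10):
        self.oracle = oracle
        self.threshold = drought_threshold_mm

    def settle(self):
        # The contract cannot verify the reading against the real world;
        # its "automated decision" is entirely downstream of the oracle.
        return "payout" if self.oracle.latest() < self.threshold else "no payout"


# Honest oracle: 5 mm of rain during a real drought, so the contract pays out.
honest = InsuranceContract(Oracle([5]))
print(honest.settle())      # -> payout

# Compromised oracle reporting 50 mm during the same drought:
# identical contract code, identical drought, wrong outcome.
corrupted = InsuranceContract(Oracle([50]))
print(corrupted.settle())   # -> no payout
```

However decentralized the contract’s execution is, the single point of failure has simply moved to the edge of the system, where data enters it, which is exactly the interface Bridle is pointing at.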