We are celebrating 15 years — and counting — of stories that are deeply researched and deeply felt, that build a historical record of what the city has been.
In the first full week of January 2025, 60 square miles of Los Angeles burned to the ground; Palestinian deaths in Gaza due to traumatic injuries alone were re-estimated at 70,000 people. As we struggle to assimilate the unimaginable, we would do well to remember that our imagination itself is a field of struggle. Why do efforts to envision the future look like the colonization of space, and not infrastructures to support our collective interdependence on Earth? “Those who monopolize resources monopolize imagination,” writes Ruha Benjamin in a manifesto meant to seed our ability to imagine justice in the here and now, with the tools we already have. The influential scholar has long analyzed how technological solutions obscure and fossilize longstanding racial inequities in prisons, along borders, and even on park benches. We spoke with Benjamin, whose work bridges academic critique, public conversation, and creative action, about supporting more liberatory, radical forms of imagination. Instead of fetishizing novelty and technological solutions, we can learn from history and the insights and experiences all around us to envision the future that can still be. – MM
Your recent book on imagination got me thinking about a riddle that a friend’s son told me years ago and has haunted me ever since. He said: Imagine that you’re in a room with four very tall walls, and they’re totally smooth. There are no footholds, and there’s no way out, and you’re in there with nothing, and there’s water pouring in from the top in all directions. What do you do?
We were stumped, proposing one solution after another, and none of them worked. And then, the answer to his riddle was: Stop imagining.
In a way, you are proposing exactly the opposite. If we are in the room filling with water or the burning house, you’re really saying: Start imagining.
True, but I actually like the invitation to “stop imagining.” Because lately, I’ve been thinking about things that should not even be possible for us to imagine. Take what’s happening in Palestine. Yes, we urgently need a ceasefire, and an end to the occupation. We should also draw red lines around things that should be incomprehensible, unimaginable, because they’re so harmful and evil. Stop imagining that some lives are disposable. Stop imagining that one group’s security can justify another group’s annihilation.
Part of what I’m working through in the book is the way imagination is tethered to reality in all these different ways. So even as I want us to get more serious about imagination, that should go hand in hand with more pragmatic investments in the lived realities of people in the here and now, not just future-making. As we work to fashion imagination, we must be careful not to fetishize it.
I’d like to ask you about your choice to frame the book as a manifesto. That’s a format that architects and urbanists and technologists love, one that is in the spirit of willing a new world into being, usually ex nihilo, usually in the image of one person’s vision. But you choose that form and then work absolutely against it, suffusing the whole text with insights from fellow travelers from all over, leaning into examples of real projects and practices, instead of envisioning what could be.
With that word, “manifesto,” I’m refusing to enter this conversation in the detached mode of a scholar who’s agnostic about what I’m writing about. I’m trying to signal that I’m writing this and I’m angry. I’m writing this and I’m grieving. I’m writing this, and I want you to care about it as much as I do, rather than engage the subject at arm’s length. Part of the reason I draw on different examples and initiatives is to say: Perhaps a lot of our future is behind us. Perhaps there are things that we need to learn from, that have already happened, that are happening right under our noses, that aren’t being given the respect that they deserve.
So often when we’re future-oriented, we are divorced from the here and now, and from the past. By highlighting these examples, I’m asking: What are the alternative paths that we could have taken if we had invested in collective care over accumulation and competition? There are other parallel realities, futures that could have been, that can still be.
The other thing is, I never want to pretend that I’m inventing something new or starting some new conversation. We are building on an existing tradition of people who have tried to radically expand our collective imagination. I infused the book with some of those voices — obviously, it’s not anywhere near exhaustive. It’s an exercise in intellectual humility to say: We as readers, I as a writer, are entering something that is ongoing. Novelty is fetishized in academia. Everyone wants to do the new thing. And this is not that.
You have long studied and critiqued how forms of technological innovation encode and further racial and social domination; that includes many aspects of our built environment, from benches to borders. It’s particularly cathartic, in Imagination, to see you dismiss the Silicon Valley utopias we see on the market these days: Sidewalk Labs in Toronto, space colonization. What do these dominant and dominating futures share and why are they so grim?
A lot of these projects fall under the umbrella of “smart cities” or “smart borders,” in which a wide range of technologies are used to collect more and more data on populations under the guise of making things more “efficient” and “tailored” to people’s needs. One of the key features is the gap between the marketing of these initiatives and the experience of those who are already living on the margins of these same geographies. So we have to develop the capacity to look beneath the surface of the glossy marketing and ask hard questions based on histories of exclusion and exploitation.
Even just taking the word “smart” at face value, we should understand that it’s never been a straightforward good. “Smartness” has always been tied up with a eugenics imagination. For there to be a “smart,” there has to be a “dumb.” Dumbness and IQ and intelligence have always been a justification for erecting hierarchies and making disposable people and populations.
If we look at smart city initiatives from the perspective of those on the underside — for whom “efficient” data collection means more intrusive forms of surveillance and control — it raises a whole host of concerns and grounds for resistance. We also have to pay attention to who is doing the selling, and see beyond the theater of “public participation” that is staged to make these initiatives appear like they have democratic buy-in.
We have to look beyond the theatrics. Who are the actual producers, directors, and scriptwriters creating this theater? We often find the same people, companies, and logics across different locales. This should lead us to question the newness of what we’re being sold, and ask: What are the old histories, hierarchies, investments, and desires for social control that are being encoded in so-called “smart” systems?
Many of these technologies are touted for their predictive capabilities — predicting things that will happen before they do. That’s the promise of efficiency and intelligence: “We know more about you than you even know about yourself.” Prediction is sold to us as something that’s going to enhance our lives. One way to think about prediction is that it closes off futures; things that should be open-ended, where there should be agency, empowerment, and expression. Predicting a future, always based on historical data, closes off the potential for the unexpected, for things to happen in ways that aren’t so engineered and patterned. I would love for our critical antenna to tingle when hearing these buzzwords, so that we can pull back the screen to see what’s really going on.
To do that, I think as practitioners and professionals in these fields we must develop our historical imagination. If you’re critiquing a future that’s being sold, you have to ask: Where have we come from to get to this point? Because the way that I see it, many of the people who are creating these so-called fixes and solutions for society have a very basic understanding of the societies for which they’re creating all these tools. It’s not enough to allow those with the technical know-how, the specialization, to shape the environments in which everyone has to live, when they have very little understanding of the social fault lines, of the histories. That’s a critique, but it’s also an invitation for us to think about the many different types of knowledge that we need around the table. Varied expertise and lived experience, or what feminist science and technology studies scholar Donna Haraway would call “situated knowledges.” We need to expand who and where we go looking for the voices, the insights, and experiences that should be part of shaping a city, a geography, a proposal for a shared future.
The critique of some of the solutions you’re talking about very often begins and ends as critique. The response to a dominance of techno-solutionism can be a rejection of technology qua technology. Or people come to reject the idea that anything new or in the future could be any better or more desirable. How do you see folks using new technologies in liberatory ways? Or how do we calibrate a vision for the sustenance of life?
When I think about ways to use different technologies as part of a larger toolkit, the first thing I look for is that the technology is not the driving force. It can facilitate social connections and relationships that we want to engender, but it is not at the center of what we are doing.
For example, I’ve been learning about a “digital democracy” tool called Decidim that was created to facilitate greater participation of Barcelona residents in the life of the city, and now it’s spread to numerous locales around the world. Decidim means “we decide” in Catalan, and the app is used to facilitate decision making, where people can propose ideas, comment on proposals, and vote on local issues. Rather than the city being completely geared towards tourism and elites, the app incorporates people’s insights so they can have a direct impact on the quality of life for everyone, which is vital. And it doesn’t all happen on the app; there’s also an analog version. As I was walking around Barcelona last month, I saw posters for an in-person meeting to discuss a participatory budget for that particular neighborhood. After all, not everyone is using smartphones and apps — for example, elders don’t always have access, so gathering in person is still important. The technology should not displace other forms of participation.
Another example of using technology in more liberatory ways is an initiative that my lab is supporting called the Phoenix of Gaza virtual reality project, which includes hundreds of images of Gaza before its destruction and in the aftermath. It is part of an effort of cultural preservation and of rebuilding historic sites and everyday locations for leisure, recreation, and community that are now completely leveled. Founded by Palestinian researchers Naim Aburaddi and Dr. Ahlam Muhtaseb, the purpose of this immersive reality project is to archive these images with the aim of memorializing and ultimately rebuilding Gaza.
I’ve written critically about VR in the past, in Race After Technology. It’s often sold to us as an “empathy machine,” allowing users access to other perspectives. You put on a headset and take the perspective of a police officer who supposedly feels threatened, and then you are supposed to understand why he might shoot a Black youth. Or take the example of Zuckerberg, who used VR in the aftermath of the hurricane in Puerto Rico, in what some have called “trauma tourism.”
There are all kinds of reasons to be critical about this technology. But there are also ways (and I’m learning through Phoenix of Gaza) that if it’s in the right hands — driven by an ethic of care and shaped by those most affected by a particular crisis, instead of by those engaged in extraction and tourism — it could lead to different outcomes. The point is for oppressed people to not simply be recipients of charity and goodwill. Instead, they are the ones that are actually designing, prototyping, encoding their vision of what they want to see in the world.
Ultimately, the evaluation of any technology shouldn’t just be focused on the tool itself but the entire ecology, everything around it: the who, what, where, and why. Collaborating with this team of Palestinian researchers and technologists is a real lesson for me about cultivating criticality with creativity. I’m being challenged to not get stuck in the critical mode, so that I can see other possibilities of how this technology can be used when we really transform the ecology in which it’s being developed and deployed.
What about AI, which you speak about as well, and which is just exploding in impact on our daily lives? Is it redeemable? More specifically, I am thinking about its power of envisioning, and the impact for designers.
I’ve been thinking about this more and more in terms of the lines we need to draw. There are examples of more subversive uses of AI that I like to highlight, to get us thinking about what’s possible. At the same time, I want us to evaluate not just individual cases or examples, but zoom out to think about the context in which AI is being developed. That has to do both with the economic underpinnings and the monopolization of data and power that currently characterize the systems in which this technology is being deployed. And we need to think: How else could we do this? If we are currently plagued by “platform capitalism,” what would platform cooperativism look like?
Those active on X might recall that I posted the initial book cover of Imagination in August 2023. It was created using one of these AI image generator tools. I didn’t realize that when it was commissioned, and I quickly got schooled, especially by artists who are waging a fight against AI companies that are stealing copyrighted art to train these generators. Up until then, I had been focused on AI in policing, healthcare, and education. I hadn’t been privy to what was happening in the art world and quickly had to learn about the forms of extraction that were happening and then eventually got the cover changed.
One of my colleagues described it to me like this: Imagine you went to a great restaurant, you had the best meal you’ve ever had, and then at the end of the meal, they tell you that all the ingredients were stolen. Would you think of that meal in the same way? People are using AI to do things that may seem more inclusive. They might be enhancing aesthetic diversity, they might be trying to create more equity in the images represented, and yet, the process to get those products could still be deeply exploitative. In evaluating anything that’s dubbed AI, our frame needs to zoom out to the process, not just the product.
The client, whoever’s paying for it, typically gets to determine what’s created, how it’s used. If we’re starting with a world in which we have this monopolization of power and resources, that means those who already have the power and money to commission any kind of AI — they are going to ensure that it reflects their interests.
In the education space, for example, there are predictive tools that are meant to determine which students are “at risk” of not graduating from college. This is called the Student Success Predictor Score, used in over 500 American colleges and universities. Not surprisingly, already-marginalized students are dubbed higher risk than their more privileged counterparts. Mind you, the framing is always beneficent: “We want to help! We want to intervene early before these students run into trouble.” But often, what it does is create more stigma, and it creates a justification for, let’s say, counselors and advisors to steer students out of more difficult fields, “for their own good.” It reinforces the status quo in terms of who’s getting computer science degrees, who’s getting degrees in STEM.
If we’re going to use these tools at all, we should direct them not at those who are “at risk” but at those who are producing the risk for students. We need to turn it in the direction of the adults, the administrators, the fields — those who actually have power to change the environment that individual students have to navigate. We rarely ever see AI turned in the direction of those in power.
There was a great creative project a few years ago that I love to highlight, called the White Collar Crime Early Warning System. There’s another great study out of MIT and Harvard in which the team created a model to predict whether judges would “fail to adhere to the US Constitution by imposing unaffordable bail without due process of law.” So rather than predicting the behavior of defendants, they “studied up” and focused on judges. If we’re going to be predicting, labeling, assessing, let’s at least point these tools in the direction of those who try to evade scrutiny, who count on invisibility as a superpower, rather than at those who are most vulnerable. More and more, I’m seeing creative experiments to get us to think about the power dynamics that are built into these technologies.
You also spend a lot of time in Imagination with the importance of aesthetics, and art, and play, and what that realm of inquiry does for us. To give one example, you have a provocative perspective on an installation at the Mexico-US border by the architecture studio Rael San Fratello, called Teeter-Totter Wall. Some people criticized it as superficial, but you ask readers to consider the risks of dismissing artistic gestures as “performative.” So I was hoping we might be able to talk about that and perhaps some of the examples that you’ve come back to.
Play is one of those things that’s often dismissed as just recreation, leisure, extra — not important. We should understand how crucial play is for our development as individuals. As a society, when we stifle play and imagination, ultimately, everyone suffers because we become tied to these very old ways of doing things without expanding. One of the things that I do in Imagination is look to places and people that continue to play by all means necessary. Take a country like Finland, where play is emphasized in schools: educators understand that it’s a site of collaboration, of healthy competition and conflict, of communication, and expression. It has to do with revaluing things that have been dismissed as not important, as childish.
Related to Rael San Fratello’s Teeter-Totter Wall, I have also been learning about the work of Estudio Teddy Cruz + Fonna Forman. They work in Southern California where many of their projects highlight the power of bottom-up participation. They say that these practices need to “trickle up” into urban policy and planning. The reason I focus on specific projects is because there’s so much knowledge there that doesn’t always get the respect it deserves. But people are putting ideas into practice, and that work needs to trickle up into these more highfalutin scholarly contexts. So much of the way forward is to turn things on their head and to say: How can we recenter and learn from these other forms of knowledge and other skills and experiences?
Imagination is a useful prism to think about this with, because it underlies the things that are more obvious. In my view, what’s beneath the surface, bubbling and occasionally erupting, are the dominant forms of imagination that are infecting so many of our institutional practices and protocols. And so, my invitation is for us to think about how we engender more liberatory, radical forms of imagination, which I define as those that reflect our inherent interdependence as people and planet. Because so much of the world that we have built runs counter to that. It’s constantly undercutting our interdependence, and as a result, we have all the problems and crises we face.
It’s not an easy fix by any means, but it also doesn’t require a new AI system. We need to get back to basics in terms of what it means even to be a society. Because there are powerful forces at work that are trying very hard to make the very idea that we have a responsibility to each other seem like a foreign, bizarre concept. That’s why I think over the next 20 to 25 years, as much as there is investment in all these new technologies, what we really need to invest in is society-building. Technology has its place but, in my view, tech innovation has completely overshadowed many other things that we should be focusing on.
The views expressed here are those of the authors only and do not reflect the position of The Architectural League of New York.