Caught in the Spotlight

A camera basks in the signature green glow of Project Green Light, a “real-time crime fighting and community policing” surveillance network dispersed throughout Detroit. Photo by Tarik Hamza

In the smart city, cameras, sensors, networked objects, and algorithms gather intelligence about the built environment and the people who inhabit it, ostensibly to provide decision-makers (or decision-making software) with information that can enable more efficient provision of services — including the policing of public space. But these technologies aren’t just available to private companies or municipal agencies. Increasingly, they are joined by consumer-grade digital monitoring tools for the concerned citizen: Networked security cameras and license plate readers, sometimes providing direct feeds to local law enforcement, can now be purchased through Amazon. Below, Chris Gilliard explores how technologies that track create different spatial experiences for users on opposite ends of the tool — and for different races and classes at the receiving end of the surveilling gaze. At the same time that these tools blur the lines between who surveils and who is surveilled, they reinforce existing social divisions.

Four years ago, Brandon Harris recounted this anecdote of driving through his mother’s Cincinnati neighborhood:

While stopped at the intersection, I glimpsed out of my eye a tall negro dressed in a white tank top, his skin high yellow like my own, crossing the street in what seemed like a beeline toward my car. He was coming from a corner where much wasteful bravado and boisterous ennui takes place, and I felt it immediately, that familiar sensation, the need to secure my body against potential predators. I was driving an orange car with plastic orange flowers on the dash, the same car I had been driving when held up at gunpoint not far from that corner two summers before.

The man sauntered behind my car, and I locked the door. Hearing this, the electronic click of the door locks snapping into place, he looked back at me and we met eyes as I swiveled my head to watch him. We didn't stop looking at each other the whole time he crossed to the other side of the street. The light turned green, and he said, "I ain't trying to roll up on you, bruh."

That “click” sound is familiar to me. I grew up in Detroit in the early 1980s, around the time when power door locks were becoming standard, and I would assert that for probably any Black man who grew up in any American city at that time, it was a common experience to walk down a street and hear that sound (especially at night). Click. Click. Click. Even during the summer, folks would have their windows down and still lock their car doors as I walked by. This performance offered no real security, mind you. If my intentions were actually to harm someone sitting in their car, a locked door with an open window is a very ineffective barrier. But the sound perhaps served more as an audible and technological reminder of how I, along with other Black men, was seen by the culture. And it was inescapable. There was no “opting out” of this form of what we now refer to as “security theater”: practices that present the illusion of increasing security or safety, but have no meaningful effect.

Digital surveillance has grown exponentially in the last few years, and with it the scope and scale of the security apparatus deployed across urban environments. How does the visibility — or invisibility — of these technologies elicit performative acts from people at each end of the surveilling gaze? How is one expected to act as the person watching, or as the person being watched? In the case of the electronic door lock, the click emboldens a driver, making them feel safer, while at the same time signaling to those outside that the driver is aware of their presence. For the person outside of the car, the effect is quite different: The click signifies that you are seen, and indicates that you are perceived as a threat. A seemingly discrete act transforms not only the user of the tech and the person targeted by it, but even the spatial context around them.

To better understand the current landscape of digital technologies that watch or track people in some form or another, and how their deployment affects both people and the places they inhabit, we might start to think about surveillance technology not only in terms of what it does, but who it is used by and, importantly, who it is used on or against. Consider what we might call “luxury surveillance,” in contrast with “imposed surveillance.” These categories call attention to who gets to decide how surveillance works for them and who is left with little choice in the matter. Luxury surveillance is expensive, voluntary, and sleek (yet often meant to be noticed). Imposed surveillance is involuntary, overt, clunky, and meant to stand out.

A commercial for the Fitbit wearable fitness tracker depicts various women using the device to track sleep and exercise habits. Screenshot via Fitbit’s “Motherhood Is Incredible”

Compare a fitness tracking technology (such as Fitbit) and an electronic ankle monitor. The devices have many of the same functions. Fitness trackers are small devices, typically worn on the wrist. They are often equipped with GPS and a microphone and measure a person’s location, activity level, and heart rate. Fitbits and their ilk are worn primarily by people interested in tracking their bodily functions and activity in an effort to improve their level of fitness — to optimize their bodies through tracking. With the notable exception of some mandated wellness programs, people typically wear them by choice.

An anonymous individual wears a court-ordered ankle monitor in director Puck Lo’s short film (Almost) Freedom. Screenshot via (Almost) Freedom

Ankle monitors are typically assigned to people who are either awaiting trial or out on parole. The electronic tracking devices are also used on asylum seekers awaiting rulings in deportation cases. Sometimes they are equipped with microphones and speakers, or even remote blood alcohol monitors. Because of the ways that “e-carceration” perpetuates longstanding racist structures, writer and civil rights advocate Michelle Alexander has called this type of tracking “the newest Jim Crow.” Ankle monitors are typically compulsory, though they may be offered as an alternative to traditional incarceration. In many cases, private companies charge individuals exorbitant fees for the use of the devices. The monitors are often bulky and difficult to conceal under a pant leg, a far cry from the sleek accessories designed for self-tracking. In the digital age, however, the stratification between surveiller and surveilled — and between those who do and do not have agency over how they are surveilled — is felt beyond the scale of wearable devices. These dynamics are reshaping the social life of entire neighborhoods, communities, and cities.

Still image from a Ring doorbell video feed. Image by Jenni Konrad via Flickr

My neighbors — on each side of my house and across the street — all use Amazon’s Ring doorbell. This sometimes presents opportunities to test my commitment to privacy. For several nights in a row this past summer, some kids (I assume) were egging cars on our block. Eventually it was my “turn,” and I woke up to my neighbor knocking on my door to inform me that my car had been egged. He said he had footage of the incident captured by his Ring, and that, if I wanted, he could send it to the police. I thanked him, but politely declined the offer. I live in Dearborn, Michigan, home to one of the largest concentrations of Muslims in the United States, and I am certainly not going to involve the police when there’s a strong possibility that it might endanger a Muslim kid over a problem that can be solved with white vinegar. Surveillance often encourages “solutions” that far outstrip the level of the infraction. Without a camera, it’s unlikely that someone would bother to call the police over a car egging, but the existence of footage — the fact that people have potentially actionable evidence they feel compelled to use — turns a minor instance of vandalism into a situation involving law enforcement.

Ring purports to take on the function of a community of people looking out for one another. It bills itself as a “new neighborhood watch,” without considering the connotation of that phrase in a world where Trayvon Martin was killed by a racist acting as a “neighborhood watch.” Users can post footage from the doorbell to Neighbors, a social network owned by Amazon, where people (many of them Ring owners, though owning the device is not a requirement) post notices and video clips of suspicious activity in their neighborhood. Ring also allows users to receive alerts about incidents in their neighborhood, though these “incidents” may be nothing more than someone biking down the street or raccoons raiding trash cans. Ring may inspire a feeling of safety and security in the device’s owner, but it also induces hypervigilance and increased anxiety about “crime” at a time when violent crime is decreasing all over the country. More than providing any real deterrence, Ring militarizes public space by helping construct a web of police surveillance that would otherwise be impossible. Individual homeowners would likely balk if police asked to put cameras in front of every person’s house. Sold by Amazon and ostensibly owned and controlled by homeowners, those same cameras are embraced.

A promotional image for the Neighbors app highlights users’ ability to upload and label Ring video footage according to various categories of suspicious or illicit activity.

Ring’s marketing operation includes RingTV, a website composed of selected doorbell videos. These include “cute” videos featuring trick-or-treaters and staged events like a father interrogating the young man coming to pick up his daughter for a date. However, a significant number of selections are devoted to images of “porch pirates” being scared away or “criminals” caught in the act of invading a home or breaking into a car. Interspersing cuddly, family-friendly videos with anxiety-inducing footage helps to establish Ring as a friend, there to show you good times and to keep you safe during the bad. Yet, as Mike Caulfield, head of a national digital literacy initiative for the American Democracy Project, has discussed, the very existence of a network for Ring videos may create “pressures for individuals to take the most minor incidents and frame them sensationally, to create incidents with drama, to edit clips deceptively . . . and maybe even to fake content.” In a technologically created environment where “crime” becomes content, people will be moved to find crime.

A featured video on RingTV.

Now consider those on the other side of the camera. A number of the “cute” videos broadcast by Ring feature people who launch into some type of performance once they realize they are “on camera.” People being watched, and knowing the potential for their image to spread, perform in some fashion or another. There are, of course, other, more pernicious effects on those being watched. A recent Motherboard report on Ring cited a delivery person’s fears of being falsely perceived as “casing the joint” while he waited at the door. Albert Fox Cahn, Founder and Executive Director of the Surveillance Technology Oversight Project, notes in the same report that the persistent surveillance of service workers and delivery people is “depriving workers of autonomy and privacy, and can really have an emotionally toxic effect over the long term.” Recent hackings of Ring cameras, in which users were spied on in their own homes, have complicated this divide, as individuals who purchased these devices find themselves on the other end of the surveilling gaze in ways they had not anticipated.

Modes of surveillance and security theater, digital and otherwise, extend from people’s bodies and homes to public spaces. In the 1930s, Victor Hugo Green first published the Negro Motorist Green Book to help Black motorists identify “friendly restaurants and hotels in New York.” It gradually grew into a guide that let Black folks know where they could safely find food and lodging throughout the US. In an era of rampant segregation and Jim Crow, the Green Book could mean the difference between safety and danger, between life and death. Today, while new technologies make navigation and exploration easier, other tools still dissuade Black and brown people from circulating safely.

Three Automated License Plate Readers mounted above an intersection in Manhattan. Photo by Billie Grace Ward via Flickr

Automated License Plate Readers (ALPRs) are cameras that photograph and store the license plates of passing cars and match those plates against existing databases. ALPRs are often mounted on streetlights and patrol cars. These devices scan “residential areas, apartment complexes, retail areas, and business office complexes with large employee parking areas,” and the captured plates are cross-referenced against a database of over 2.2 billion location data points. According to the New York Civil Liberties Union, in 2014, “the NYPD operated nearly 500 license plate readers within its Domain Awareness System, a centralized network of security cameras, license plate readers, and chemical and radiological detectors.” This creates a mechanism capable of granular tracking across the city. The NYPD has used ALPRs to, among other things, monitor cars parked outside of mosques. Homeowners associations and even individual citizens are now able to purchase this technology, installing cameras on private property or at the entrance to a neighborhood.

ALPRs tend to be hidden. However, like so many aspects of police surveillance, they are not a secret. In true panoptic fashion, the proliferation of ALPRs establishes the possibility that you are always being observed. And as with Ring, powerful and connected surveillance tech in the hands of “regular” citizens ramps up fear with constant notices of “invasions” by outsiders. We have already seen what this looks like in viral videos of “BBQ Becky” or “Pool Patrol Paul”: hypervigilant policing of Black users of public spaces. Expanded surveillance capabilities only magnify these effects. Black and brown folks are well aware of the likelihood of being watched when they enter predominantly white and wealthy communities, even when they themselves live there. But the ability to track and identify people whenever they cross an invisible barrier raises the stakes — not only in the case of being falsely identified by technology, but also in the case of being correctly identified but falsely implicated in illegal activity.

A Project Green Light sign designating a commercial street in Downtown Detroit as a “Green Light Corridor.”
A sign mounted to the façade of a McDonald’s restaurant indicates that this location is a “Green Light Partner.” Photos by Tarik Hamza

These technologies enact particular ways of thinking about and living in a digitally enhanced city. Smart cities might be considered emblematic of luxury surveillance. The ideal citizen of the smart city is one assumed to benefit from surveillance. The smart city is a city of libertarian ideals, where a sense of community is subsumed under the drive for personal convenience. The smart city exists to cater to the needs of its residents. We might think about how this logic extends, or doesn’t extend, to surveilled bodies in the less affluent urban landscape.

Consider the chasm between a smart city development such as Hudson Yards and Detroit’s Project Green Light. Project Green Light began as a move to install cameras — as well as obtrusive, flashing green lights — at businesses that are open late. The program now includes over 500 cameras equipped with facial recognition, and the Mayor of Detroit has stated his intent to increase that number to over 4,000 in the future. Much like the fitness tracker and the ankle monitor, the systems put in place at Hudson Yards and in the city of Detroit utilize similar technologies with disparate effects. While the technology in Hudson Yards (including numerous cameras and sensors) is meant to be seamless, technology whose purpose is surveilling Black, brown, and Latinx bodies — criminalized bodies, marginalized bodies, over-policed bodies — is often conspicuously displayed.

In Detroit, mechanisms of surveillance are layered on top of the urban space in a way that makes no attempt to integrate with the city. From real-time video feeds outfitted with facial recognition software to “omnipresence” towers that shine spotlights into particular neighborhoods, grotesque add-ons offer little pretense about who or what they are for. The smart city’s seamless efficiency does not extend to the city blocks where technology is conspicuous and pointed outward in ways that signal to those communities that the surveillance is on them, not for them. Black folks and people of color experience the friction (and the accompanying physical and mental stress) of knowing that they are always watched. This is the “click” of the electronic lock, but at scale.

I have written elsewhere about how digital platforms promise to manage the “messiness” of social relationships by reducing them to transactions. Apps and interfaces create an environment where interactions can take place without people having to make any effort to understand or know one another. This is a guiding principle of services ranging from Uber to touchscreen ordering kiosks at fast-food joints: They enable consumers to interact with purportedly seamless technology rather than deal with other human beings. Companies make a similar promise with surveillance and security technologies. The claim is that these technologies, integrated into our living spaces, will reduce the “frictions” of anxiety and fear, and increase a sense of safety. Yet rather than ease or eliminate friction, these technologies often increase feelings of unease, anxiety, and fear on the part of both the watcher and the watched. Inasmuch as those tensions (whether acknowledged or not) come from a fear of the other, more cameras, devices, tracking, alerts, and notifications will not deliver on their promises. Rather, these technologies will continue to fuel a self-reinforcing feedback loop between individuals and communities on both ends of the surveillance spectrum, where the only real winners are the companies that profit from the fear they help to manufacture.

Dr. Chris Gilliard is a writer, professor, and speaker. His scholarship concentrates on digital privacy, and the intersections of race, class, and technology. He is an advocate for critical and equity-focused approaches to tech in education. His work has been featured in The Chronicle of Higher Education, EDUCAUSE Review, Fast Company, Vice, and Real Life.

The views expressed here are those of the authors only and do not reflect the position of The Architectural League of New York.

Series

Digital Frictions

Examining the points of abrasion in the so-called smart city: where code meets concrete and where algorithms encounter forms of intelligence that aren't artificial.