Contemporary American society fixates on how humans appear to one another, and an ever-adapting, predatory market fuels this frenzy by catering to our greatest insecurities. The very infrastructure that enables our obsession with appearances constantly watches and learns, alongside systems of governance that claim to protect our privacy and personal identities.
Alarming as the recognition of these constantly watching ‘eyes’ may be, their existence has become so deeply embedded in our society that there is no turning back, per se. Rather, we are faced with the question of how we respond to these vigilant machines and the people behind them. This is the precise point where artist Trevor Paglen enters the conversation with a curious rebuttal.
Trevor Paglen thoughtfully occupies a “hybridity” of identities, working as a landscape photographer, documentarian archivist, and political interventionist (Beckman 2007, 62). Born in Maryland in 1974, Paglen witnessed a seismic transition in human experience as technology filtered into every aspect of daily life. Through his artistic practice he playfully prods at surveillance and military systems across the globe, collecting images of invisible and omnipresent oversight systems.
Rather than prescriptively addressing these infiltrations of privacy and autonomy, however, he performs acts of declaration that reflect the gaze of these systems. As Paglen puts it, "Rather than trying to find out what’s 'actually going' on behind closed doors, I'm trying to take a long hard look at the door itself” (Beckman 2007, 63). In an age of constantly evolving advanced technology, Trevor Paglen demonstrates that performative acts of questioning may be as essential as identifying answers. Within his immense range of work, several pieces uniquely exemplify his approach to machine vision: two of Paglen’s reconnaissance photos of ‘classified’ military technology, "Fanon," and Vampire.
Glowing green with points of orange light, They Watch the Moon appears more like a still from a futuristic video game than documentation of an existing place. The gently rolling hills resemble mounds of moss with deceptively harmless, pearly growths nested at their center. With slight alteration or an eager imagination, this snapshot could be mistaken for an idealistic realm belonging to fairies or extra-terrestrials.
Trevor Paglen, They Watch the Moon, 2010. Source.
The vagueness in Paglen’s photos is both a characteristic of his constraints, having to photograph from a great distance on publicly accessible lands, and an ironic allegory for the general obscurity with which the DOD loves to shroud itself. A persona of unfounded superiority characterizes their efforts to separate themselves from the world, of “pretending to be capable of manipulating the universe from a cool, clean distance” (Civin 2014, 41.5). These acts of isolation allow them to maintain this façade and shield themselves from the critical accountability of the public.
A more insidious and subtle example of Paglen’s documentation is Untitled (Reaper Drone). At a quick glance the image appears to simply be a sky-scape with warm tones melting from yellow to blue as wisps of clouds arc upward in a motion reminiscent of an abstract painter’s brushstrokes. The only symptoms of human intervention are the contrails slowly dissolving into the sky and a single black speck marring the lower right quadrant of the photo. The floating dot could easily be overlooked in the great expanse of the sky or mistaken for a speck of dust or debris on the lens of the camera.
Trevor Paglen, Untitled (Reaper Drone), 2010. Source.
Unlike the sky, which offers premonitions of danger or disruption, this small floating object is designed to be overlooked. It functions as an abrupt precursor to a death that comes without warning or second chances. Once again the photo performs multiple purposes as Paglen, “not only gestures to the limits of trying to make these sites visible, but also to the limits of translating their presence into forms of knowledge which can then circulate in public discourse” (Kafer 2016, 58). Because of the dense layers of silence and security enveloping technologies such as drones, little room remains for public discussions and accountability.
Through his photos Paglen defies the implicit, yet often explicit, command to remain quiet and apathetic about military operations. He both educates the viewer about these places and demonstrates modes of resistance. As Paglen explains, “I have always conceived of this photography in terms of performance. The act of taking a photograph of a black site is just as important, if not more, than the photograph itself” (Kafer 2016, 55). The emotions or thoughts evoked by these photos predominantly remain personal speculation rather than explicit interpretation of Paglen’s views. But he lends us his lens in an effort to instigate “the imagination of alternative futures and actions to realize these visions” (Issues in Science and Technology 2019, 76).
The technologies documented in They Watch the Moon and Untitled (Reaper Drone) employ complex surveillance systems intended to maintain a certain level of control over the world and the space around it. The infrastructure needed to do so often remains visibly apparent, despite the government’s extensive attempts to disguise or hide it. However, there are many existing systems of surveillance that are practically invisible. Paglen’s works "Fanon" and Vampire employ technology designed to allow machines to see and comprehend more effectively. They bear this trait in common, but possess significant distinctions in application and effect.
Trevor Paglen, "Fanon" (Even the Dead Are Not Safe) Eigenface, 2010. Source.
"Fanon" appears to be a portrait that is slightly out of focus, perhaps taken through a cloudy piece of glass or plastic. The tones are too cold, as if captured in a morgue under fluorescent lighting, and the facial expression of the figure is utterly neutral. A sense of unease arises when you sit with the photo staring back at you for too long. Even now, I feel a prick of pain behind my right eye as I try to stare at an image my eyes cannot adjust to.
This bizarre and unnerving portrait functions as a facial blueprint of political philosopher Frantz Fanon (PHAIDON). Images of the subject were collected, overlaid, and analyzed algorithmically to distill Fanon’s face down to its average essence. This process utilizes much of the same software development used for contemporary, “automatic image production and recognition” (PHAIDON). After creating a machine-readable ‘image,’ Paglen and his collaborators formatted software to convert this “faceprint (which in its native form is a mathematical abstraction) into an image that human eyes can recognize as a face” (LensCulture). Multiple concepts and questions interplay within the scope of this project. There is the delineation between machine-vision and human-vision, the question of social applications of image recognition software, and the overarching conundrum of attempting to codify ethics and eliminate bias.
"Fanon" exists in multiple formats so as to be readable for both human and non-human viewer. For machines, the codified portrait can be read mathematically as a series of values that can then be employed to identify other instances of the individual’s representation. For humans, "Fanon" is a grid of pixelated colors that can be visually consumed as a whole and mentally stored. Although our comprehension and memory collection processes differ, we mirror each other.
What is unique about this piece, however, is that humans were catered to at all. As Paglen explains, “The overwhelming majority of images are now made by machines for other machines” (Paglen 2019, 24). Something seems to have been lost in this transformative process, as emotion and time were erased from this portrait so that it could be converted into a tool for identification rather than empathy. A shift in vision like this can have cataclysmic impacts, especially when those in charge of using this technology may have malicious or biased intentions.
For further context, Frantz Fanon was a “20th century Martinique-born writer and revolutionary” who worked as a “pivotal figure in the post-colonial struggle” (PHAIDON). He fought against prejudice, oppression, and racism and posed a threat to the powerful entities that sought to destroy him and the communities he worked to protect. Because of this, he undoubtedly posed a threat to existing authority and was likely monitored throughout the course of his life.
If alive today, Fanon would certainly be carefully observed and tracked by the government using technology like that employed by Paglen. Additionally, online representations of Fanon might be monitored to ensure that they are not creating ‘problems.’ From there, it is an easy step to interfere with and control a person’s life or manipulate their image in an effort to harm them.
Two branches of thought arise in response to the threats posed by machine vision and image recognition and production software. Both stem from the same issue, which is that,
“Neural networks cannot invent their own classes; they are only able to relate images they ingest to images that they have been trained on. And their training sets reveal the historical, geographical, racial and socio-economic positions of their trainers” (Paglen 2019, 27).
The bias belongs to the developers, not to the machines, and any software touched by human bias will reflect it. Two overarching options remain: 1) attempt to insert ethics into the code, or 2) create a balance of perspectives in the creation of training sets in an attempt to check for human bias. While these are both proactive measures, the first could be applied in numerous contexts whereas the second would require ongoing maintenance and correction.
As appealing as the first option may be, it has several key limitations that prohibit its use. In order to codify ethics, there would need to be concrete consensus on the values and ideals that humanity has been debating for thousands of years. While there are many clearly defined social standards of morality, there are an equal number of gray areas where human bias takes center stage. Creating a balance of perspectives and oversight in the production of training data remains the best contemporary option, but it requires that the ones in charge of these systems acknowledge their limitations and turn to the voices and perspectives of the many rather than the few.
With time, perhaps, we will learn from computers to see with less bias and more honest perception and interest. This shift in thinking is not only necessary for better understanding and acknowledgement of other humans, but is also an essential step to understanding the machines that surround us. Doing so requires us to “unlearn how to see like humans” (Paglen 2019, 27).
In a complete shift away from "Fanon" with its visual neutrality, Vampire disrupts and vibrates with intensity. The figure emerges from a dark, thick atmosphere with a face cloaked in ragged scraps of what appear to be paper or fabric. Blue and muddled green, these textile-like surfaces could be ‘skin.’ But the crisp edges around the eye-sockets, which hang away from the pink, fleshy protrusion around the eye and scarlet and purple chin, make the material look more like raiment than skin.
Trevor Paglen, Vampire (Corpus: Monsters of Capitalism) Adversarially Evolved Hallucination, 2017. Source.
A conglomeration of ever so slightly ‘off’ qualities, this subject is the digital equivalent of Frankenstein’s creature. Vampire arose out of a training set consisting of images of monsters (Boucher 2018). Based on the training data, the system produced images that attempted to replicate the qualities of a “vampire” until the computer generated an image that met the necessary requirements. Quite simply, this is what the software thinks a vampire looks like: a shrouded, mysterious figure, shifting away from humanity into something unfamiliar and unsettling.
Within this project, Paglen asks an essential question: “What are the politics of recognition, the politics of building any kind of taxonomy?” (Boucher 2018) Behind a computer’s decision to define an image a particular way are a plethora of human-made decisions in labeling training data. What one person may interpret as a monster may not be a monster in the eyes of another. Yet, the computer must be trained. The results are quick delineations and unwavering truth claims, leaving a skeleton of meaning behind. In some applications, this method functions smoothly. But in others, it can quickly become disastrous. For example, who will be considered human and who will not?
While many eyes are needed to prevent this from happening, too many people are excluded from these dialogues. In this way, the portraits Vampire and "Fanon" hauntingly circle back to Paglen’s photographs of military tech. Too few voices are being heard in places where ‘truth’ has been hastily or improperly established. Rather than debate the fallibility of their truth statements, however, the first and most crucial step is to examine the door and to perhaps consider how best we might eventually remove it from its hinges (Beckman 2007, 63).
However, Paglen’s role ends here, as “The dialectic nature of this aesthetic system is not one that inevitably produces truth claims” (Kafer 2016, 60). Trevor Paglen leads us to this point through his acts of “political intervention,” and in doing so he demonstrates “alternative models of activism” (Beckman 2007, 65). While methods of questioning or gesturing may be denounced as all show and little substance, the act of pushing through these invisible obstructions requires effort, strategy, and dedication. Paglen’s performative methods of observation and participation illuminate the barriers that withhold the public from essential discourses that define the future of humanity. Efforts such as Paglen’s educate and exemplify how to push onward in our effort to better understand vision in the age of machines.
Beckman, Karen. “Telescopes, Transparency, and Torture: Trevor Paglen and the Politics of Exposure.” Art Journal 66, no. 3 (2007): 62–67.
Boucher, Brian. "'This Is the Project of a More Just World': Trevor Paglen on Making Art That Shows Alternative Realities." Artnet News. July 03, 2018. Accessed October 07, 2021.
Civin, Marcus. “Visibility Machines: Harun Farocki and Trevor Paglen.” Afterimage 41, no. 5 (2014).
Kafer, Gary. “Documenting the Invisible: Political Agency in Trevor Paglen’s Limit Telephotography.” Contemporaneity 5, no. 1 (2016): 53–71.
Paglen, Trevor. “Invisible Images: Your Pictures Are Looking at You.” Architectural Design 89, no. 1 (2019).
"Trevor Paglen - A Study of Invisible Images." LensCulture. Accessed October 07, 2021.
"Trevor Paglen's Robot Revolutionary." PHAIDON. Accessed October 07, 2021.
“Trevor Paglen Sites Unseen and Orbital Reflector (Art Feature).” Issues in Science and Technology 35, no. 2 (2019): 76–83.
"Trevor Paglen: Sites Unseen." Smithsonian American Art Museum. Accessed October 07, 2021.