I am generally inclined to take a more structured approach with my writing. However, given the abstract and constantly evolving nature of this field of study I’ve decided that a somewhat rambling contemplation may be better suited for this reflection. There are a lot of concepts that I have encountered in my studies this semester and it is difficult to decide which ones to talk about. At the moment, I am apt to go with whatever first comes to mind.
The first idea that arises is the topic of how machines are anthropomorphized and how many artists have personally observed this tendency. There is certainly an embedded desire to anthropomorphize inanimate objects. This may be because we want to see more of ourselves in the world and believe that we are having a greater impact on it. Perhaps it is also a way to sympathize with other entities, one that inflates our sense of interest in them.
While pondering the humanization of robots, I immediately think of Harold Cohen and how he did not like the fact that people were sympathizing with and fixating on his ‘turtle.’ In order to solve this, he changed the presentation and production methods in his shows and directed viewers’ focus away from the robots and onto AARON as a whole. Meanwhile, Sougwen Chung takes a very different stance and appreciates and enjoys the ways in which humans sympathize with her robots.
In both of these examples, the robots were given human names: Cohen's system was referred to as AARON and Chung's robots are called DOUG. I'm not sure whether these decisions were tongue in cheek or whether they were intended to reflect the human desire to humanize machines. But it is interesting that they have this trait in common.
I'm also inclined here to consider Octavia Butler's Dawn and the way that Lilith is confronted with the proximal humanity of the Oankali, who are ever so slightly not-human. Since they possess all the power in this situation, their vague resemblance to humans is terrifying.
On the flip side, we also witness Lilith’s attempts to interact with other humans, who are inclined to view her as not human and therefore as dangerous. Nikanj explains how humans instinctively see difference as dangerous. It is part of the evolutionary process, part of the way that we survived. If we see something as different, it is first and foremost a threat.
So how does AI fit into this? I think that there are perhaps a couple things we can extract from this. The first is that the humanization of artificial intelligence or robots is good and useful up to a point, after which it inflates the personality of the robots so much that people become nervous and even afraid. This is not necessarily a rational fear; it is imagination rather than anticipation.
The second thing that I want to pull from this train of thought is that the humanization of machines may be useful because it inspires people to care about the development of AI. In many instances, profit alone fuels the progression of technology. But what begins to foster interest in a greater range of individuals is this idea that humans can somehow connect with machines and that we can reflect each other. That invites further conversation, communication, and diverse input about what AI should be used for.
A good topic to cover next is the range of philosophical questions that get teased out in conversations about AI. Let's talk a bit about ethics. One of the biggest and most pondered questions about AI is what it can be used for and why. I think my major takeaway for the semester is that the future of AI is going to reflect the status quo in many ways. The increased use of AI in our society will not reset or rebalance existing power hierarchies in our society. However, those who do the work to shape the future of AI might.
We have an opportunity and a responsibility to increase the diversity of voices that are shaping the future of AI. That is perhaps the most important thing: AI should not be developed in a vacuum. If this field lacks input from artists and other creative individuals, things will continue in much the same way that they are now and may even get worse, as power and profit are concentrated into the hands of a few.
AI is not so far removed from other forms of technological progress, nor so unique. The ethical considerations surrounding it are remarkably similar to questions we have asked before. But let's get a little more specific, because there are some ethical considerations that are fairly unique to AI. On the whole, I think those mostly have to do with representations of people. This includes the co-option or creation of personas for people without their permission. Likewise, the gathering of data that is rooted in the experiences of a few can cause significant problems.
As I mentioned before, the best solution in many cases is to account for the inherent biases and include a range of voices in the development of AI. We must also guard against the problematic and dangerous behavior that these systems can subtly conceal.
Another idea that I want to touch on briefly has to do with popular representations of AI. I have my own perceptions about this, but multiple authors and artists who have observed similar things corroborate my experiences. Sougwen Chung, for example, talks about how the dominant narratives about AI are misleadingly apocalyptic. Part of the hysteria is rooted in some of the origins of AI. The idea of the singularity, for example, has substantially influenced how society approaches ideas about AI.
I'm also thinking about Zylinska’s description of how incredibly wealthy and influential individuals spend a great deal of time and effort preparing for a future with hostile AI. Meanwhile, we really should be focusing on other more likely and pressing issues. This brings me to the topic of climate change, which I think also ties back to our discussion of ethics.
In general, the use of AI consumes a great deal of energy and has a significant carbon footprint. This raises the question of whether we should be participating in this activity to begin with. As with any art form, this activity has benefits and downsides. The chief benefit is that we need diverse voices to balance out predominant narratives about AI. We have to consider the future creatively in order to combat entirely unnecessary and inflated ideas about AI that allow certain people to fantasize about a future where they save the world through their ingenuity, rather than through cooperation and careful consideration. Many people want to play the part of the hero without regard for how they are creating the need for one in the first place.
When considering the downsides of creating AI art, we must acknowledge that the way we are currently dealing with climate change is insufficient, so even small contributions to the problem matter. However, I have seen many artists and authors state that, in this instance, the benefits of involvement and advocacy can outweigh the downsides.
If you consider the overall environmental impact of artists’ work, it is minimal compared to other entities and organizations. Meanwhile, the voices of artists can have a substantial impact and influence on society and the world. I think this is something that each artist needs to sit with, figure out where they stand, decide on what path they think is the best course for them, and work from the heart.
Moving to an entirely different point, I want to discuss where I am at with the application of this knowledge and what I have learned this semester. The first thought I've had is that what I am trying to do does not feel very unique anymore. That is both a great and difficult thing. It's great because there are legacies and conversations that I can build off of. This helps prove the legitimacy of what we are trying to do, but it also makes the process feel less exciting. It's harder to stay motivated and to feel urgency when the thing you are doing does not seem novel.
However, there is something deeply personal about how I want to be in collaboration with AI and that is what makes this project unique. I have not seen many instances where artists successfully worked with generative neural networks that were trained solely off of an artist’s own work. It's challenging. We haven’t found quick success in our project, in part because our training data is limited. But there is something intimate about sharing this knowledge with a neural network. There's a great deal of trust that goes into that exchange.
There is also a sense of continuity between the processes I was doing before and the ones that I'm doing now. It seems like a natural progression, somewhat like incorporating photography into a painting practice. We do not think much of it now, but it felt revolutionary and strange when we first started doing it.
I have thought a great deal about feedback and audience participation, and I am not sure how to effectively address that aspect of this project. I found it curious that Audrey described how they considered setting up a neural network that was rewarded or punished based on feedback from an online audience. This is a thought that I also had independently. Audrey decided against doing this because they thought it might simply be an attempt to optimize art.
I disagree a little bit with this, because one critical question I have is about the pressures and forces artists feel from society. What happens when you set up a quasi-perfect entity with all the tools and resources that an artist might have and tell it to try to deal with the pressures of social media? You would have the chance to see whether it could survive them. The thing is, I think it would be stuck in a cycle where it would not progress. I would love to try to recreate that and see if my hunch is correct. I would also love to compare it to an example of an AI functioning with itself as its only judge.
All said and done, I was pleasantly surprised at the breadth of differing views about the ways we humanize machines. I was encouraged by the examples of other artists and what they are thinking about and doing through the use of AI. Lastly, I have been challenged to push beyond my initial ideas and question the motives behind my decisions. As a next step, I need to ponder the ethics of what I am doing and truly question whether it is worth it. I think the answer is yes, but I never want to stop asking.