
Intelligence & Singularity

Running parallel to concerns about automation is a twin theory concerning a concept known as the singularity: a theoretical point after which machines exceed human intelligence and become uncontrollable. To make sense of this, I would first like to touch on human versus machine intelligence. Harold Cohen draws a clear line between the two, saying that “[m]achine intelligence is not the same sort of thing as human intelligence” [25]. Although machines can outperform human experts on certain tasks, there are limits to what they can do, since they must function within a strict system of rules beyond which they cannot successfully intuit decisions. Simply put, machines remain highly specialized.

Many popular representations of AI depict something that does not yet exist. While current forms of AI can perform a number of complex tasks, communicate effectively, and adapt to their environments, “Artificial General Intelligence” has yet to be achieved. Theoretically speaking, AGI would be a “computer system that is similar to human intelligence in principles, mechanisms, and functions, but not necessarily in internal structure, external behaviors, or problem-solving capabilities” and would possess approximately the “same level of competence as human intelligence, neither higher nor lower” [26].

Past the theoretical point of singularity, AGI would supposedly become “completely incomprehensible and uncontrollable,” threatening to usurp humanity’s power over machines [27]. The machines of The Terminator, I, Robot, The Matrix, and so on are all examples of AGI that, having reached the singularity, can compete with and, consequently, conquer humanity. While representations of the singularity make for thrilling stories, they are fictional explorations of philosophical dilemmas rather than realistic predictions of the future.

Ultimately, the plausibility of the singularity relies on a particular and arguably faulty conception of intelligence [28]. As Wang, Liu, and Dougherty conclude, “[a]lthough ‘intelligence’ is surely a matter of degree, there is no evidence that the ‘level of intelligence’ is an endless ladder with many steps above the ‘human-level’” [29]. They go on to explain that this metric rests on the assumption that there are lesser forms of intelligence observable here on Earth, an ultimately anthropocentric view of intelligence in which difference is frequently categorized as inferiority. If the scale of intelligence is inherently flawed, then there is no proof that AGI would exactly follow, surpass, and then compete with our own intelligence, since it would necessarily function differently.

One main source of difference between human and artificial intelligence continues to be experience, since human intelligence predominantly arises from, and continues to develop through, physical limitations and external stimuli. Quite simply, AGI is unlikely ever to undergo the exact same bodily experiences as humans; instead, its human-directed training and development “will naturally cause its behavior to be somewhat predictable” [30]. As a final note, Wang, Liu, and Dougherty conclude that even once AGI is developed, the singularity is still highly unlikely to occur, since “the essence of intelligence” will have “been captured by humans, which will further guide the use of AGI to meet human values and needs” [31]. With this brief explanation in mind, I would like to turn to the question of creativity and how artists’ work pertains to AI.

2. MISNOMERS & ASSUMPTIONS
