This discussion warrants a foundation of clear terminology, since a number of overlapping and abstract terms are used to designate what is considered to be AI. In general, “Artificial Intelligence” functions as a catchall term for cases in which a computer reflects human intelligence, characterized by the ability to learn. In 1955, John McCarthy described AI as “the science and engineering of making intelligent machines” [1].


Machine Learning constitutes a subset of AI that focuses on the ways in which computers can “improve their perception, knowledge, thinking, or actions based on experience or data” [2]. It is a category of techniques that allow a machine to teach itself rather than having every behavior explicitly programmed. While ‘AI’ is an exceptionally loose label, ‘machine learning’ names a concrete set of processes used to mimic human intelligence.


Deep Learning is a method that makes use of neural networks, a type of machine learning algorithm that roughly simulates the structure of the human brain. Deep Learning systems can perform highly complex tasks and account for variations that a human programmer may be incapable of predicting. Neural networks pass data through immense structures of interconnected nodes, allowing the machine to recognize patterns, understand data relationally, and adapt so that it can process data more efficiently. Backpropagation allows the system to update its network weights to account for previous errors, making this a dynamic and constantly evolving form of machine learning.
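To make the idea of backpropagation concrete, here is a minimal sketch (my own illustration, not drawn from the sources above) in which a one-weight “network” learns to double its input by repeatedly correcting its own errors:

```python
# A toy network with a single weight w learns the mapping x -> 2*x.
# Each pass measures the error and nudges w in the direction that
# reduces it: the essence of learning by backpropagation.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0              # the network's single adjustable weight
learning_rate = 0.05

for epoch in range(200):
    for x, target in data:
        prediction = w * x
        error = prediction - target     # how wrong the network is
        gradient = 2 * error * x        # d(error^2)/dw
        w -= learning_rate * gradient   # the update step

print(round(w, 3))  # → 2.0
```

Each pass measures the error, computes how the weight contributed to it, and adjusts the weight accordingly; a deep learning system applies the same principle across millions of weights arranged in layers.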

A Convolutional Neural Network (CNN) is a subset of neural networks. Its name originates from a “mathematical linear operation between matrixes called convolution” [3]. Within the CNN are multiple layers of functionality that allow the neural network to identify features in data, process them, and apply information in a variety of ways. CNNs are widely used, particularly for machine vision and image processing. Deep Dream Generator, an AI tool that I have used extensively, is built on a Convolutional Neural Network.
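The convolution operation named in [3] can be sketched in a few lines: a small kernel (filter) slides across an image matrix, and at each position the overlapping entries are multiplied and summed. The image and kernel values below are illustrative, not taken from the cited source:

```python
# Minimal 2D convolution over plain Python lists (no padding, stride 1).
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products of kernel and image patch.
            total = 0
            for a in range(kh):
                for b in range(kw):
                    total += image[i + a][j + b] * kernel[a][b]
            row.append(total)
        output.append(row)
    return output

# A vertical-edge-detecting kernel applied to a tiny "image":
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

The kernel responds strongly where pixel values change from 0 to 1, which is why the output peaks at the image’s vertical edge; a CNN learns many such kernels from data rather than having them specified by hand.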


A Generative Adversarial Network (GAN) makes use of a pair of competing networks that are able to learn from more abstract training data [4]. A GAN consists of a discriminator and a generator, two distinct neural networks that compete against one another. The discriminator attempts to assess whether an image is authentic, while the generator seeks to fool the discriminator by producing increasingly convincing images, refining itself via backpropagation [5].
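The adversarial dynamic can be sketched on numbers rather than images. In this toy setup (my own illustration, not from [4] or [5]), the “authentic” data are samples near 4.0, the generator is a single shift parameter, and the discriminator is a one-neuron classifier:

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

b = 0.0          # generator parameter: fake = noise + b
w, c = 0.0, 0.0  # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    real = random.gauss(4.0, 1.0)      # a sample of "authentic" data
    fake = random.gauss(0.0, 1.0) + b  # the generator's forgery

    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)

    # Discriminator step: rate real samples high and fakes low.
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: shift b so the forgery fools the discriminator.
    d_fake = sigmoid(w * fake + c)
    b += lr * (1 - d_fake) * w

print(round(b, 2))  # b drifts from 0.0 toward the real data's mean of 4.0
```

As the generator’s output approaches the real distribution, the discriminator can no longer tell the two apart; with images in place of scalars, the same contest drives a GAN to produce increasingly convincing pictures.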

Algorithms are somewhat loosely defined as mechanisms for achieving these processes. As Christopher Manning explains, “[a]n algorithm lists the precise steps to take, such as a person writes in a computer program. AI systems contain algorithms, but often just for a few parts like a learning or reward calculation method” [6]. Algorithms function in a variety of ways, but in effect they allow a programmer to specify the essential steps needed to complete a task successfully.
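Euclid’s greatest-common-divisor procedure, a standard textbook example rather than one drawn from the sources above, illustrates Manning’s point about precise steps:

```python
def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm."""
    while b != 0:        # Step 1: repeat until the remainder is zero
        a, b = b, a % b  # Step 2: replace (a, b) with (b, a mod b)
    return a             # Step 3: the last nonzero value is the GCD

print(gcd(48, 18))  # → 6
```

Every step is fully specified in advance; nothing is learned from data. A machine learning system, by contrast, contains algorithms like this only in parts, such as its update rule, while its overall behavior emerges from training.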