One of the first text-to-image outputs, from the alignDRAW paper published in 2015. I consider this a milestone moment in AI art. While not originally intended as artwork, it marks a significant moment: models becoming able to "understand" our world and generate coherent representations of it, making "decisions" (even if driven by the pseudo-randomness in the model's sampling) about form, composition, and color.
The model becomes the artist
alignDRAW - the first text-to-image artwork from 2015.
By early 2015, neural networks had mastered the art of 'image-to-text' and could create natural-language captions for images. Flipping this process and turning text into images was a much more complex challenge, solved by the alignDRAW model of 19-year-old prodigy Elman Mansimov.
Fellowship is pleased to present a special release of both print editions and fully on-chain NFTs of this historic artwork, containing all the original images created in 2015.
"alignDRAW images can be compared to the first fixed photographs taken by Niepce in 1826-1827." Dr Lev Manovich
"These images represent the birth of text prompt AI generated imagery." Darius Himes, Christie's
What 2015 provenance is there?
The 'Paper Prompts' can be found in the published academic paper, 'Generating Images from Captions with Attention', submitted in November 2015. You can view the paper here:
https://arxiv.org/abs/1511.02793
The lead author of the paper was Elman Mansimov; the co-authors were Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov.
The 'Process Prompts' can be found on the University of Toronto website. Using archive.org, you can also see these images as they were displayed in November 2015:
One of the earliest examples of AI art minted on the blockchain. 300 of these unique AI-generated nude portrait "frames" were given away for free at a Christie's event in 2018. Of the 300, most are presumed lost, with fewer than 50 known to exist in circulation. Robbie is a true pioneer of AI art, having created, among other things: AI-generated rap music, AI-generated Balenciaga, and rule-based generative sketches by GPT-2 inspired by Sol LeWitt.
Possibly one of the earliest examples of GAN-based art and of the exploration of the concept of "latent space" in art.
This 2015 work is the result of an early experiment in generative AI, in which a neural network was trained to understand and recreate the essence of handwritten Chinese characters. Drawing on a database of nearly a million handwritten characters, the model learned to distill their patterns, forms, and textures, ultimately producing entirely new characters that never existed in the original dataset.
The title is a reference to the 1988 book by Xu Bing, who created thousands of fictitious glyphs in the style of ancient Chinese prints from the Song and Ming dynasties. In a similar way, this AI-driven exploration blurs the line between authentic tradition and imaginative creation, inviting us to reflect on the boundaries of written language.
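For the technically curious, the setup described above can be sketched as a small generative adversarial network trained on glyph images. Below is a minimal sketch in PyTorch, assuming 64 × 64 grayscale glyphs and a 100-dimensional latent vector; the sizes, names, and code are illustrative assumptions, not the artist's implementation:

```python
# A minimal GAN sketch for the setup described above: a generator
# learns to produce new 64x64 grayscale glyphs from a dataset of real
# handwritten characters. All hyperparameters are assumed values.
import torch
import torch.nn as nn

Z = 100  # assumed latent dimension

# Generator: latent vector -> 64x64 grayscale glyph in [-1, 1]
G = nn.Sequential(
    nn.ConvTranspose2d(Z, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
)

# Discriminator: glyph -> real/fake logit
D = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 8, 1, 0),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real):  # real: (B, 1, 64, 64) batch of character images
    b = real.size(0)
    fake = G(torch.randn(b, Z, 1, 1))
    # Discriminator: push real glyphs toward 1, generated ones toward 0
    d_loss = bce(D(real).view(-1), torch.ones(b)) + \
             bce(D(fake.detach()).view(-1), torch.zeros(b))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make the discriminator call its glyphs real
    g_loss = bce(D(fake).view(-1), torch.ones(b))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Once trained on the real handwriting, sampling G(torch.randn(1, Z, 1, 1)) yields glyphs that share the dataset's strokes and textures without reproducing any single character, which is the effect the work describes.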
Among early AI artists, David Young created some of the most aesthetically pleasing outputs. This particular one is special because it is accompanied by a physical painting (also part of the collection) inspired by the AI-generated image, a very early example of human art inspired by AI art. See blog post below.
AI artwork generated by a GAN trained on images of flowers from rural upstate New York.
This work was exhibited at:
Automat und Mensch, 2019, Zurich, Kate Vass Galerie
Telephone Paintings, 2021, Montreal, Anteism
Automat und Mensch 2.0, 2024, Zurich, Kate Vass Galerie
Botto is a decentralized autonomous AI artist whose taste is influenced by public votes. This piece is from the "genesis" set of 52 works created during the first year of its existence.
Genesis Period. 2022 August #042. Somewhere lost in the distance, an image of you appears. Contemplating what could have been, if we had only communicated. But now it's too late. The damage is done. We can only Displace the memories of what once was. Botto in conversation with GPT-3.
Two CycleGAN models trained on face transformations pass their outputs back and forth, creating an infinite loop of ever-changing Turing patterns (sketched in code below).
Created June 25th 2017
Song: "Pachabelly" by Huma-Huma
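The feedback mechanism is easy to state in code. Here is a minimal sketch, assuming two pretrained CycleGAN generators; the names g_ab and g_ba are placeholders, not the artist's actual models:

```python
# Sketch of the feedback loop described above: two image-to-image
# generators feed each other's outputs forever, so the frames drift
# into self-reinforcing Turing-like patterns. `g_ab` and `g_ba` are
# placeholders for pretrained CycleGAN generators (A->B and B->A).
import torch

def feedback_loop(g_ab, g_ba, seed, steps=10_000):
    """Yield frames from the closed loop seed -> g_ab -> g_ba -> g_ab -> ..."""
    x = seed  # (1, 3, H, W) tensor scaled to [-1, 1]
    with torch.no_grad():
        for _ in range(steps):
            x = g_ab(x)   # map into the other face domain
            yield x       # each intermediate image is one video frame
            x = g_ba(x)   # map back, but never quite to where it started
            yield x
```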
This piece is particularly important to me as it signals the transition from code-based generative art to model-based generative art. One of three minted outputs from the gen2gan project. This piece was previously part of the Starry Night collection.
"A great thing about a joint project is that you become super mindful about your process"
gen2GAN is a collaboration between artists Helena Sarin and Dmitri Cherniak.
First, a generative algorithm written by Dmitri Cherniak generates thousands of outputs without any curation whatsoever. Those outputs then serve as the training dataset for a pipeline of neural networks (GANs) lovingly hand-tuned and constructed by Helena Sarin (a toy version of this pipeline is sketched below).
This output was curated by Sarin and Cherniak.
Generated on July 18, 2020 (PNG, 2200 × 2200 px)
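As a rough illustration of that two-stage pipeline, here is a toy version in Python: a simple deterministic drawing routine stands in for Cherniak's generative algorithm, and its uncurated outputs are written out as a GAN training set. Every name and parameter here is assumed for illustration:

```python
# Toy illustration of the gen2GAN pipeline: stage 1 dumps thousands of
# uncurated algorithmic outputs; stage 2 would train a GAN on them.
# The drawing routine below is a stand-in, not Cherniak's algorithm.
import random
from pathlib import Path
from PIL import Image, ImageDraw

def toy_generative_algorithm(seed, size=256):
    """A deterministic generative drawing, keyed only by its seed."""
    rng = random.Random(seed)
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for _ in range(rng.randint(5, 40)):
        x0, y0 = rng.randrange(size), rng.randrange(size)
        x1, y1 = rng.randrange(size), rng.randrange(size)
        v = rng.randrange(256)
        draw.line((x0, y0, x1, y1), fill=(v, v, v), width=2)
    return img

out = Path("gen2gan_dataset")
out.mkdir(exist_ok=True)
for i in range(10_000):  # thousands of outputs, no curation at all
    toy_generative_algorithm(i).save(out / f"{i:05d}.png")
# Stage 2: this folder becomes the training set for a hand-tuned GAN
# (see the glyph-GAN sketch earlier for what that training loop looks like).
```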
An early output from the DeepDream model generated shortly after it was made public. Similar images were included in the original blog post linked below.
An early piece of NFT art, Bloemenveiling was an online auction of short GAN-generated videos of tulips; it used smart contracts on the Ethereum network to sell the work, and bots to help drive speculative prices. The piece was made in collaboration with David Pfau, an artificial intelligence researcher, and is the third in a series of works that looks at the relationship of the tulip to speculation, hype, and value. Through the use of these kinds of technologies, which are increasingly embedded across different types of markets, it interrogates the way technology drives human desire and economic dynamics through artificial scarcity. It was created as an artwork for a gallery show, but was a fully functioning distributed app (dApp), existing as a working example of what it also critiques.
One of my favorite pieces of early AI art. An AI model trained on a particular dataset (stars, flowers, clouds, etc.) is fed a real-time video feed, and its perception is colored by its existing biases. In this case, it interprets the artist's face as galaxies and stardust (a sketch of this loop follows after this entry).
HD Video made with custom software using Artificial Intelligence / Machine Learning / Deep Learning / Generative Adversarial Networks.
Originally made in 2017, minted in Sep 2024.
An artificial neural network looks out onto the world, and tries to make sense of what it is seeing. But it can only see through the filter of what it already knows.
Just like us.
Because we too, see things not as they are, but as we are.
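For the technically curious, the perception loop described above reduces to a few lines: each incoming video frame is pushed through a generative model that knows only its training domain, so it can only answer in stars, flowers, or clouds. In this sketch, model is a placeholder for such a pretrained image-to-image network; the code is an illustration, not the artist's software:

```python
# Sketch of the real-time pipeline described above: webcam frames are
# reinterpreted by a model that has only ever seen one kind of imagery.
# `model` is a placeholder for a pretrained image-to-image network
# (e.g. a GAN generator trained on galaxies); loading it is omitted.
import cv2
import torch

def perceive(model, size=256):
    cap = cv2.VideoCapture(0)                  # live camera feed
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        x = cv2.resize(frame, (size, size))
        x = torch.from_numpy(x).permute(2, 0, 1).float() / 127.5 - 1.0
        with torch.no_grad():
            y = model(x.unsqueeze(0))[0]       # the model "sees" the frame
        y = ((y.permute(1, 2, 0).numpy() + 1.0) * 127.5).clip(0, 255)
        cv2.imshow("learning to see", y.astype("uint8"))
        if cv2.waitKey(1) == 27:               # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```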
Created and exhibited at Art Basel Miami in 2017, this image was made by a GAN trained on images from Dante's Purgatory. Paglen's Evolved Hallucinations is a fantastic early example of the power and beauty of GANs.
I've long been frustrated by the literalness with which fields like computer science and computer vision approach the topic of seeing. It's a neat trick to point a camera at a picture and have the caption "hot dog" or "not hot dog" appear on the screen. But if you show René Magritte's iconic "The Treachery of Images" to such an object recognition model, the classifier will invariably return the result: "This is a pipe." Something is wrong here.

Visual perception is squishy and slippery, formed by each of our unique biological makeups, our memories, history, culture, and our own subjectivities. Just as a monarch butterfly sees a flower entirely differently than a field mouse, a medieval Spanish farmer sees a comet in an entirely different way than a contemporary architect. An early 20th-century psychoanalyst or semiotician might understand images from a dream quite differently than a present-day cognitive neuroscientist. "Seeing" is a deeply historical, cultural, subjective, and even political affair, profoundly shaped by our sensory and social environs.

With the "evolved hallucinations" project, I wanted to see what would happen if we tried to build computer vision models based on a wide range of historical, cultural, and notional worldviews. I began training models on allegorical art, symbolism, and metaphor, using image-vocabularies drawn from literature, philosophy, poetry, folklore, and spiritual traditions. Could I build models that embraced the slipperiness and squishiness of visual perception? What would it mean to build a model designed to "see" the world through the extended allegory of Dante? What might the world look like through the "eyes" of future seaweed on a post-human earth? Or the worldview of a Cassandra-like being, fated to see the future but helpless to change it? The Evolved Hallucinations is my partial answer to that question.
Cello, from the Perception Engines 1 Series, 2018.
Digital master created by neural networks based on the ImageNet category "cello".
Mike Tyka, 2017, 4096 × 4096, mtyka@, https://www.miketyka.com
The series, titled "Portraits of Imaginary People", explores the latent space of human faces by training a generative adversarial neural network (GAN) to imagine and then depict portraits of people who don't exist. The work is inspired by and named after Russian troll accounts spreading disinformation during the 2016 US elections.