How I Taught My Creative Process to Robotic Arms


By Pindar Van Arman

Pindar Van Arman is an AI Artist exploring the intersection of human and artificial creativity. Winner of the Robot Art Prize in 2018, his robots use a broad array of deep learning, generative algorithms, and feedback loops to bring his AI creations into the material world one brush stroke at a time.

banner_highres.jpg

As anyone who follows my art already knows, I have been teaching my creative process to painting robots for more than fifteen years. I never named the machines, as I considered them artistic tools. But this year that changed. One of the robots became creatively independent enough for me to give it a name: artonomous. This is the story of how I taught it a creative process, and how I continue to teach creative processes to my other machines.

While I have had multiple painting robots over the years, I built and programmed my first one around 2005. I had just started a family and suddenly did not have time to paint anymore. Still feeling a strong need to create, the thought occurred to me that all I needed to do was direct a plotter fitted with a brush while changing diapers and doing all the other things new parents do. Here are the first two.

And they worked. They looked horrible and broke constantly, but they helped out while I was busy with other things. They were not fancy machines, being made mostly of wood and homemade electromagnets, but they could paint by numbers and connect dots, as in these early pieces.

000.jpg

It was good enough to begin a painting that I could then finish by hand late at night after the kids were asleep. Over the next five years I would give them images, and they would paint them with me, as cobots. The art from this period looked a lot like this early work.

001_550.jpg

But as my kids grew and responsibilities multiplied, I had even less time. I still needed to paint, so I made improvements, and soon the robots were completing entire paintings for me. I did this by teaching them basic AI, like k-means clustering to arrange palettes, and custom routines to dream up unique compositions.
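The palette step can be illustrated in a few lines. To be clear, this is only a minimal sketch of the idea, not the code my robots actually run: plain k-means grouping a toy image's pixels into a small palette.

```python
import random

def kmeans_palette(pixels, k=5, iters=20):
    """Group RGB pixels into k palette colors with plain k-means."""
    # deterministic init: spread the starting centers across the pixel list
    centers = [pixels[i * len(pixels) // k] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (squared RGB distance)
            nearest = min(range(k), key=lambda i: sum(
                (p[c] - centers[i][c]) ** 2 for c in range(3)))
            buckets[nearest].append(p)
        for i, b in enumerate(buckets):
            if b:  # recenter on the mean color of the bucket
                centers[i] = tuple(sum(p[c] for p in b) / len(b)
                                   for c in range(3))
    return centers

# toy "image": 50 noisy red pixels followed by 50 noisy blue ones
rng = random.Random(1)
pixels = [(200 + rng.randint(-10, 10), 30, 30) for _ in range(50)] \
       + [(30, 30, 200 + rng.randint(-10, 10)) for _ in range(50)]
palette = kmeans_palette(pixels, k=2)
```

With k=2 the two centers settle on a red and a blue, which is exactly the kind of reduced palette a robot can mix and load onto brushes.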

Some of these are available as NFTs that I minted in the early days of SuperRare. Big thanks to @vk_crypto, @roses, and @prometheus for the early support, as well as @colborn and @artwhale for appreciating their value and getting them on the secondary market.

Then one day a brush malfunctioned and the robot finished a painting with a broken brush. The painting actually turned out to be OK, but you can see how the black in this portrait commission did not fully cover the painting it was being painted over.

002_550.jpg

It occurred to me that the robot needed to be smarter than that. Unless it could see what it was doing and make adjustments accordingly, it was little more than a slow printer. So around 2008, I added a camera to track progress, and this was the beginning of its most important algorithm. I know that GANs and style transfer are at the heart of AI art these days, but this one improvement was far more important and meaningful. Once I gave my robots the ability to look at their own work, reflect on what was happening, then make adjustments to complete the work, some truly remarkable things began to happen. The following was the first portrait to use this method, and it was not made from pre-determined instructions. It was made by looking at the canvas and asking how the canvas could be made more like the image in the robot's memory, then adding one brush stroke at a time until it looked more like what the robot was thinking.

003_550.jpg

That was a portrait of my wife, Bonnie. In the portrait below, of her and my daughter, I captured a time lapse to show what the robot was thinking as it painted. The image on the right of the time lapse is a difference map showing hot spots where the canvas did not look like the image being painted. Watch as it tries to remove the differences in real time on the canvas.

Artist Paul Klee described the creative process as an artistic feedback loop: the artist makes strokes, then steps back to evaluate them before making the next stroke. This is exactly what the robots were doing, painting with feedback loops.
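In code, that feedback loop is surprisingly simple. The sketch below is a toy illustration, not cloudpainter's actual routine: it treats the canvas as a grayscale grid, builds a difference map against the target image, and places one "stroke" per iteration at the hottest spot.

```python
def paint_with_feedback(target, canvas, steps=200):
    """Look at the canvas, find where it least resembles the target,
    and place one 'brush stroke' there; repeat."""
    h, w = len(target), len(target[0])
    for _ in range(steps):
        # difference map: hot spots where canvas and target disagree
        diffs = [(abs(target[y][x] - canvas[y][x]), y, x)
                 for y in range(h) for x in range(w)]
        d, y, x = max(diffs)
        if d == 0:  # canvas matches the image in memory: done
            break
        # one stroke: nudge the hottest spot halfway toward the target tone
        canvas[y][x] += (target[y][x] - canvas[y][x]) * 0.5
    return canvas

target = [[0.0, 1.0],
          [1.0, 0.0]]   # the image in memory (grayscale, 0..1)
canvas = [[0.0, 0.0],
          [0.0, 0.0]]   # blank canvas
paint_with_feedback(target, canvas)
```

Each pass is Klee's loop in miniature: look, compare, stroke, look again. A real robot does the "look" with a camera instead of reading the canvas array directly.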

Around this time a major headline broke in AI that got my attention. Lee Sedol was defeated by AlphaGo at the ancient game of Go. Reading the sensationalized headlines, there were reports that AlphaGo made moves so baffling that they were considered creative. How was this possible? I had to investigate, and in my investigation I discovered deep learning. At first I was skeptical, but found that style transfer could be an interesting tool.

I set out to learn how it worked, and this led to the single most challenging task of my life: learning to write neural networks. Today there are lots of tools to help artists with neural networks, but back then you had to write your own, and I intended to learn how. It was hard for an artist like me. I failed two courses, one after another. But I persisted because I just had to have that tool in my artistic repertoire. I needed to understand how it worked. After more than a year of failure I finally found a book called Grokking Deep Learning, and on page 127, it clicked! I had written my first neural network and understood it. Shortly thereafter I wrote my own style transfer network and began applying it to my artwork. This was when a work emerged that got some public attention. It was a portrait of my son based on a style transfer of Picasso’s Les Demoiselles d’Avignon.

005_550.jpg
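For readers curious what a style transfer network is actually matching: the heart of it is the Gram matrix of a layer's feature maps, which captures style as correlations between feature channels. The toy sketch below assumes the features have already been extracted (a real network pulls them from a convolutional net's layers) and only shows the statistic being optimized.

```python
def gram_matrix(features):
    """features: C channels, each flattened to N activations.
    Entry (i, j) is the average product of channels i and j --
    the channel correlations that encode 'style'."""
    C, N = len(features), len(features[0])
    return [[sum(features[i][n] * features[j][n] for n in range(N)) / N
             for j in range(C)] for i in range(C)]

def style_loss(feats_a, feats_b):
    """Mean squared difference between two Gram matrices; style
    transfer optimizes the output image to drive this toward zero."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    C = len(ga)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(C) for j in range(C)) / C ** 2

# two toy 2-channel feature maps
style = [[1.0, 2.0], [3.0, 4.0]]
output = [[1.0, 2.0], [0.0, 0.0]]
```

Matching these correlations while separately matching the content image's raw activations is what lets a portrait keep its subject but take on Picasso's brushwork.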

As I made more of these, I began getting more recognition, and soon my art was featured on NPR, which led to a TEDx Talk, which led to even more exposure and a feature of my work on HBO’s Vice. As part of the feature I demonstrated how the robot could paint a portrait of the journalist, Peabody Award-winning Elle Reeve, in the style of Picasso, as can be seen below.

This HBO piece led to an AI art review by Jerry Saltz in which he trashed my work, saying, “It doesn’t look like a robot made it, but that doesn’t make it any good.” Not the best review, but good in two respects. The first is that he was even more brutal with Mario Klingemann’s The Butcher’s Son, a work which went on to win the Lumen Prize. So I was in good company. But more importantly, I was now able to take his review out of context and simply quote him with the following…

jerry_saltz_vice_006_800.jpg

This was when AI art began to grow rapidly in popularity and I learned about Generative Adversarial Networks (GANs). While style transfer was doing a great job of applying style, I quickly realized it was similar to a filter and not really creating anything novel. But I heard that GANs were different: two competing neural networks, one of which, the generator, could actually imagine something from nothing. As with style transfer, I had to try it out, and I did so by creating my imagined portrait series. I trained my GAN on the celeb_a dataset, had my robot generate faces, then painted the faces right at the moment they emerged from the generator. This series of paintings that my robots produced won first place in Robot Art 2018.

To commemorate this series I minted a couple of these on SuperRare, and want to thank early supporters @hackatao, @xcopy, @artonymousartifact, and @zaza for seeing their value and grabbing them. While this was one of my most successful series (touring New York, London, Berlin, and Seoul), I will not be minting any more NFTs from it, so reach out directly to those collectors if interested. Woah, I just checked and @hackatao has listed his for 150 ETH, a steal for this piece of AI art history!

speciism600.jpg

Another favorite project from this time came from GumGum, an AI company that sponsored an artistic Turing test. They commissioned my robots and five human artists to each complete a painting. The only direction they gave was a small set of famous contemporary paintings that they asked us to use as inspiration. Once again I turned to a portrait of my wife and created the following using feedback loops and more than two dozen competing AI algorithms. The major advance here was that I set all of the creative algorithms I had developed over the years into competition with one another. From this competition, where each was fighting for control of the brush and the direction the painting was going in, the following abstract portrait emerged. In the time lapse below, you can see what the robot was thinking, and how it executed its design over the course of three days.
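The competition itself can be sketched in a few lines. This is a hypothetical, heavily simplified version of the idea, not my actual system: each algorithm proposes one stroke per turn, and whichever proposal brings the canvas closest to the goal wins the brush for that turn.

```python
import random

def error(target, canvas):
    """Total mismatch between the canvas and the target composition."""
    return sum(abs(t - v) for tr, cr in zip(target, canvas)
               for t, v in zip(tr, cr))

def compete_for_brush(target, canvas, proposers, steps=300, seed=0):
    """Each turn, every algorithm proposes one stroke (y, x, tone);
    the proposal that leaves the smallest error wins the brush."""
    rng = random.Random(seed)
    for _ in range(steps):
        best = None
        for propose in proposers:
            y, x, tone = propose(target, canvas, rng)
            trial = [row[:] for row in canvas]  # preview the stroke
            trial[y][x] = tone
            e = error(target, trial)
            if best is None or e < best[0]:
                best = (e, y, x, tone)
        _, y, x, tone = best
        canvas[y][x] = tone  # the winner paints its stroke
    return canvas

def copier(target, canvas, rng):
    """Representational algorithm: copy the target at a random spot."""
    y, x = rng.randrange(len(target)), rng.randrange(len(target[0]))
    return y, x, target[y][x]

def dreamer(target, canvas, rng):
    """Generative algorithm: propose a random tone at a random spot."""
    y, x = rng.randrange(len(target)), rng.randrange(len(target[0]))
    return y, x, rng.random()

target = [[0.2, 0.8], [0.6, 0.4]]
canvas = [[0.0, 0.0], [0.0, 0.0]]
before = error(target, canvas)
compete_for_brush(target, canvas, [copier, dreamer])
after = error(target, canvas)
```

With more proposers, and a goal that itself keeps evolving, the winning strokes stop looking like any single algorithm's work, which is where the abstraction comes from.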

GumGum’s artistic Turing test was a success. Put into a lineup with the five human works, people rarely guessed that our painting was made by a machine. This project toured internationally and passed the test in audience after audience. Even though you already know which is the robot’s, here are the other artworks.

Which one would you have picked?

gumgum.jpg

Using this new competitive generative AI system, I began making artwork where its own paintings were used as references, and this is when its artistic style developed into what can be seen in my work today. I call this system cloudpainter, and you can see more at cloudpainter.com. It is important to note that I call the software cloudpainter, not the robot. This is because cloudpainter is a collaborative tool that I use to make art with robots. Here are some examples of our current collaborations that you can find at https://superrare.co/pindar.


These last couple of years have been about further refining my art. Having learned several dozen techniques to enhance the creativity of my machines, and having put them into competition with one another, one of them finally became independent enough to let out on its own.


A couple of months ago marked the beginning of artonomous, a completely independent artistic machine. To help develop it further into a fine art robot, I have teamed up with photographer Kitty Simpson to provide it with a highly curated set of portraits. artonomous practices painting representational images of these photos, then every so often creates a unique portrait based on both the images it practiced on and the actual paintings it completed. It does this with a variety of procedural AI, feedback loops, and neural networks. Like a human artist, it is continually practicing and developing its own style and aesthetic. Kitty and I provide curation and critique, but leave every brush stroke and aesthetic decision to artonomous.

banner.gif

As I mentioned previously, some of its artwork is a study of Kitty’s photography, and some of it is imagined. We are not sure where it is going, but we are looking forward to its continued development. You can find its work at https://superrare.co/artonomous.



I am currently in the middle of several major art projects, which include both the NFTs I am releasing in collaboration with my robots as @pindar, and the NFTs documenting @artonomous’ artistic education. I have always found SuperRare to be an excellent outlet for my AI work and look forward to continued support from the community.

Pindar Van Arman
twitter: @vanarman

cloudpainter.com
superrare.co/pindar

artonomo.us
superrare.co/artonomous
