This entry is part of the 5-part series AI as a Creator (of AI?)

Can AI create visual art, compose music and cook up new dishes?

In this series, we examine how AI has been “creating” novel things.
Spoiler alert: it’s getting really good…

By VICTOR ANJOS

Welcome to my multi-part series on AI as a creator. In this series, we aim to look at whether today’s artificial intelligence is capable of devising new works of art.

We realize that art is in the eye of the beholder, so what counts as art to some may not to others. Our definition of art will be loose, and we will use it to mean the following:

"Art is often considered the process or product of deliberately arranging elements in a way that appeals to the senses or emotions. It encompasses a diverse range of human activities, creations and ways of expression, including music, literature, film, sculpture, paintings, and also such things as math, physics, biology, chemistry and general inventions. "

In today’s part of the series, we are examining:

AI as a creator of Visual Art

Creativity may be the ultimate moonshot for artificial intelligence. Already AI has helped write pop ballads, mimicked the styles of great painters and informed creative decisions in filmmaking. Experts wonder, however, how far AI can or should go in the creative process.

When we recently spoke to AI experts and thought leaders, their opinions varied as to whether AI has the potential to become a true creative partner or even the creator of solo works of art. While this debate will likely continue for some time, it’s clear that as digital content and delivery platforms continue infiltrating all forms of media and expression, the role of AI will undoubtedly expand.

Making machines creative?

Examples of these applications are becoming more numerous, ranging from generating still images, to videos (e.g. deepfakes), to animating videos from still images. The good news is that you can now see the Mona Lisa nodding and laughing.

Machines are developing the capacity to create rather than just learn. And here’s where it gets interesting—what if we can teach machines to be creative? If they can create an image, then why not a painting? And if they can create a sound, then why not a pleasing sonata? And if they can generate a logical sequence of words, then why not poems, tales, and novels?

We’re in the age of machine evolution. Have you ever looked at a cubist or avant-garde painting and said to yourself, “this could just as well have been created by a machine”? The sharp lines, the hazy features: all characteristics of these works. We humans of the postmodern era are more inclined towards abstraction. So why not let machines take the lead? They love abstraction!

 

Why Generative Adversarial Networks?

One of the major advancements in the use of deep learning methods in domains such as computer vision is a technique called data augmentation.

Data augmentation results in better-performing models, both increasing model skill and providing a regularizing effect that reduces generalization error. It works by creating new, artificial but plausible examples from the problem domain on which the model is trained.


The techniques are primitive in the case of image data, involving crops, flips, zooms, and other simple transforms of existing images in the training dataset.
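
To make this concrete, here is a minimal sketch of that kind of classic augmentation, assuming a PyTorch/torchvision pipeline; the specific transforms and magnitudes are illustrative choices, not a prescription.

```python
# A minimal image augmentation pipeline; transform choices and magnitudes are illustrative.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop + zoom
    transforms.RandomHorizontalFlip(p=0.5),                # random flip
    transforms.RandomRotation(degrees=10),                 # small rotation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # mild photometric jitter
    transforms.ToTensor(),
])

# Each call on a PIL image yields a new, slightly different training example:
# tensor = augment(pil_image)
```

Applied inside a training data loader, the same image produces a slightly different tensor every epoch, which is exactly the regularizing effect described above.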

Successful generative modeling provides an alternative and potentially more domain-specific approach for data augmentation. In fact, data augmentation is a simplified version of generative modeling, although it is rarely described this way.

"… enlarging the sample with latent (unobserved) data. This is called data augmentation. […] In other problems, the latent data are actual data that should have been observed but are missing."

In complex domains, or domains with a limited amount of data, generative modeling provides a path towards generating more training data. GANs have seen much success in this use case, in domains such as deep reinforcement learning.

There are many research reasons why GANs are interesting, important, and require further study. Ian Goodfellow outlines a number of these in his 2016 conference keynote and associated technical report titled “NIPS 2016 Tutorial: Generative Adversarial Networks.”

Among these reasons, he highlights GANs’ ability to model high-dimensional data, handle missing data, and provide multi-modal outputs, that is, multiple plausible answers.

Perhaps the most compelling application of GANs is in conditional GANs, for tasks that require the generation of new examples. Here, Goodfellow highlights three main examples:

  • Image Super-Resolution. The ability to generate high-resolution versions of input images.
  • Creating Art. The ability to create new and artistic images, sketches, paintings, and more.
  • Image-to-Image Translation. The ability to translate photographs across domains, such as day to night, summer to winter, and more.
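
As a rough illustration of what “conditional” means here, below is a minimal sketch of a label-conditioned generator, assuming PyTorch; the layer sizes, the 10-class label space, and the 100-dimensional noise vector are hypothetical choices for illustration, not the architecture behind any of the examples above.

```python
# A minimal conditional generator sketch; sizes and the label space are illustrative.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, labels):
        # Conditioning: concatenate the noise vector with an embedding of the label,
        # so the same network can be asked for a particular kind of output.
        x = torch.cat([noise, self.label_embed(labels)], dim=1)
        return self.net(x)

# e.g. ask for 16 samples of class 3
z = torch.randn(16, 100)
labels = torch.full((16,), 3, dtype=torch.long)
fake_images = ConditionalGenerator()(z, labels)  # shape: (16, 784)
```

The design choice that makes the model conditional is simply feeding the label alongside the noise, so the caller can request a specific kind of output instead of an arbitrary sample.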

Perhaps the most compelling reason that GANs are widely studied, developed, and used is because of their success. GANs have been able to generate photos so realistic that humans are unable to tell that they are of objects, scenes, and people that do not exist in real life.

Astonishing is not a sufficient adjective for their capability and success.

Example of the Progression in the Capabilities of GANs from 2014 to 2017.

GANs creating visual arts

In 2018, Christie’s, a British auction house, sold a GAN-generated painting, “Portrait of Edmond Belamy”, for $432,500. In place of a name, the canvas bears the following artist’s signature:

Do you recognize the artist? The signature belongs to our AI painter, the GAN itself: it is the GAN objective function, playing the min-max game between generator and discriminator.
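
For reference, the formula on the canvas is the standard GAN value function, in which the generator G tries to minimize exactly what the discriminator D tries to maximize:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$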

Edmond Belamy, the subject of this portrait, is part of the Belamy family, every member of which was created with the GAN model.

The generated portraits are stunning! It’s as if artificial intelligence has its own Van Gogh. Well, it actually does: Kenny Jones and Derrick Bonafilia developed a fascinating GAN-based project, GANGogh, trained on a huge dataset of artworks in different styles. The network then learned how to create paintings that mix those styles.

 

The project is based on a variation of GANs called DCGANs. DCGANs (deep convolutional GANs) build both the generator and the discriminator from convolutional neural networks, an architecture most commonly used for image classification.
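
As a sketch of that idea (not the GANGogh code itself), here is what a small DCGAN-style generator and discriminator pair might look like in PyTorch; the filter counts and the 64x64 output size are assumptions made purely for illustration.

```python
# A minimal DCGAN-style sketch; layer widths and image size are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples a 100-dim noise vector into a 3x64x64 image with transposed convolutions."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """A convolutional classifier that scores images as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),     # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),   # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 8x8
            nn.Conv2d(256, 1, 8), nn.Sigmoid(),                             # real/fake score
        )

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(8, 100)
fake = Generator()(z)           # (8, 3, 64, 64)
scores = Discriminator()(fake)  # (8,) values in (0, 1)
```

The core DCGAN pattern is visible here: the generator upsamples with transposed convolutions while the discriminator downsamples with ordinary convolutions, and the two are trained against each other.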

The generated images are surrealistic, with pleasing figures and color mixtures. I personally find them beautiful, with an artistic quality that some might find expressive and relatable.

However, of all the art-generation algorithms, I find AICAN the most interesting. AICAN is an AI application based on creative adversarial networks, developed by Professor Ahmed Elgammal, director of Rutgers University’s Art and Artificial Intelligence Lab.

These paintings are revolutionary! The fanciful artistic style, the dream-like mood, the swaying lines and shapes, and the harmonious mixture of colors make them indistinguishable from contemporary human-created art. Elgammal has presented the works of his little creative artist, AICAN, in many art exhibitions.

Most notably, the “Faceless Portraits Transcending Time” exhibition showcased portraits generated by the algorithm with no detail given to the faces. You can check out a demo of the exhibition here. These promising results make the AI community a very exciting one, with new adventures every once in a while, and they give us hope for more interesting machine-generated art in the future.

So what do you do when all signs point to having to go to university to gain any sort of advantage? Unfortunately, in the current state of affairs, most employers will not hire you without a degree, even for junior or entry-level jobs. Once you have that degree, joining my Mentor Program at 1000ml, with our patent-pending training system, the only such system in the world, is the way to gain the practical knowledge and experience that will jump-start your career.

Check out the dates below for our upcoming seminars, labs and programs; we’d love to have you there.

Be a friend, spread the word!
