AI generated art based on human emotions.
For the final project I'm going to ask you to start with one of the two broad areas of application of machine learning we have discussed: interactivity (using Teachable Machine and ml5.js) or image-making (using RunwayML or ML4A Colabs). We'll take two weeks to complete this project. This is not a lot of time, so make sure your scope is feasible to accomplish within about a week.
I'm using machine learning to create AI-generated images. I'm curious how AI sees human emotions such as happiness, anger, sadness, and love.
I started looking at different platforms that create AI-generated art from prompts and came across Dream by Wombo. It produces stunning graphics from a text prompt and a chosen art style, so I decided to use it to create my images.
The first emotion I played with was happiness. I entered the same prompt for each of these art styles: Smile, Happy, and Happiness. As you can see, there is a similar theme throughout: they all use bright oranges and yellows, and both the Psychic and Synthwave styles have faces that appear to be smiling.
The next emotion was sadness, and the results are clearly different from the previous ones. The lines and shapes appear to move downward, almost like a slouching posture, and some of the figures look like they could be crying.
The emotion anger produced really fascinating art. Each of these styles includes faces that clearly express anger, particularly in the eyes and mouths.
I was surprised by how the AI interpreted love. Each of these images is very similar to the others, especially the Psychic and Dark Fantasy styles: it looks like two entities embracing or intertwining with one another. There is an abundance of red in these images, suggesting that the AI associates red with love.
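That impression about red could be checked numerically. A minimal sketch, assuming the downloaded images are saved locally (the filename below is hypothetical), is to measure what fraction of the total pixel intensity sits in the red channel:

```python
import numpy as np
from PIL import Image

def red_share(path):
    """Fraction of total RGB intensity carried by the red channel.
    Values well above 1/3 mean red dominates the image."""
    px = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    totals = px.sum(axis=(0, 1))  # summed R, G, B across all pixels
    return totals[0] / totals.sum()

# Hypothetical filename for one of the Dream by Wombo downloads:
# print(red_share("love_psychic.png"))
```

Comparing this number across the four emotions would show whether the love images really are redder than, say, the yellow-and-orange happiness ones.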
How can I take this further?
I went on RunwayML and looked into image analysis models, mainly curious about how machine learning would interpret these images. DenseCap detects objects in images, and im2txt generates captions from images.
A challenge I ran into with DenseCap was that it picked up everything inside the image, including the decorative border, so I had to crop the border and watermark out. It was interesting to see which objects it detected in the happiness image: among them were the head of a girl and the back of a chair.
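That cropping step can be scripted instead of done by hand. A minimal sketch with Pillow, where the border and watermark sizes are assumptions you would tune to the actual downloads:

```python
from PIL import Image

def strip_frame(path, out_path, border=40, watermark_pad=40):
    """Crop the decorative frame off a downloaded image, plus extra
    space at the bottom where the watermark sits, so DenseCap only
    sees the artwork itself.

    border and watermark_pad are assumed pixel sizes; adjust them
    to match the actual frame on your downloads."""
    img = Image.open(path)
    w, h = img.size
    cropped = img.crop((border, border, w - border, h - border - watermark_pad))
    cropped.save(out_path)
    return cropped.size
```

Running this over a folder of downloads would prepare all of the images for DenseCap in one pass.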
im2txt created a caption for this image, and in some ways it does look like a flock of birds flying over a body of water. Overall, these models were not trained to detect emotions, so I was not surprised that they focused only on detecting objects. The next thing I'd like to explore in this project is training a machine to detect emotions in generated art.
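A first sketch of that next step, assuming the generated images were sorted into folders named after each emotion (which I haven't done yet), could be a nearest-mean-color baseline: compute the average color of each emotion's images and label a new image with the closest one. It would only capture color patterns like the red/love association, not composition, but it is a starting point:

```python
import numpy as np
from pathlib import Path
from PIL import Image

EMOTIONS = ["happiness", "sadness", "anger", "love"]

def mean_color(path):
    """Average RGB of one image, scaled to 0-1."""
    img = Image.open(path).convert("RGB")
    return np.asarray(img, dtype=np.float64).reshape(-1, 3).mean(axis=0) / 255.0

def fit_centroids(root):
    """Assumes images live in folders named after each emotion,
    e.g. dataset/love/*.png. Returns one mean color per emotion."""
    return {
        emotion: np.mean([mean_color(p) for p in Path(root, emotion).glob("*.png")], axis=0)
        for emotion in EMOTIONS
    }

def predict(centroids, path):
    """Label a new image with the emotion whose mean color is closest."""
    c = mean_color(path)
    return min(centroids, key=lambda e: np.linalg.norm(centroids[e] - c))
```

A real emotion detector would need a trained image classifier (e.g. transfer learning, or Teachable Machine from earlier in the course), but this toy version makes the idea concrete.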