"...By decreasing production time and expediting the iterative process, generative design breeds more time for the creative process."
The creative process, however, is found in the iterative process.
During the course of this project, I assisted Professor Carol Hermann's Design One Architecture studio (D1) at TJU.
Actively observing and partaking in the nurturing of beginner designers offered a unique perspective to the argument of creativity in technology.
First-years are not unlike an untrained generative model: they have to learn the principles of creativity.
To do this, D1 starts with more abstract and conceptual projects rather than jumping right into building design. One of the projects the students are assigned is to visually analyze a 2D painting in an effort to create a 3D form.
To test how generative models aid in the iterative process, I utilized a Style Transfer model in RunwayML.
Style Transfer takes two images (a content image and a style image) and applies the style to the content image.
To diagram, I applied various “diagram” and “3D study” styles to the paintings, then selected the outputs that I felt best analyzed the paintings.
Here, AI aided in the creative process of visual analogy.
RunwayML's Style Transfer offers parametric tools to adjust aspects of each image to optimize the result.
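RunwayML does not expose its internals, but the core idea behind neural style transfer can be sketched independently. A common approach (from Gatys et al.) represents an image's "style" as the Gram matrix of its feature maps, i.e. the correlations between feature channels, and measures how far apart two styles are. The minimal NumPy sketch below uses random arrays as stand-ins for a network's activations; the shapes and function names are illustrative assumptions, not RunwayML's API.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; a common proxy for 'style'.

    features: array of shape (channels, height, width) -- here a toy
    stand-in for a convolutional network's activations.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2))

# Toy feature maps standing in for activations of a content and style image
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))

print(style_loss(content, style))  # positive: the two styles differ
print(style_loss(style, style))    # 0.0: identical style
```

In a full style-transfer pipeline, an optimizer (or a trained feed-forward network) adjusts the output image to drive this style loss down while a separate content loss keeps it close to the content image; the parametric sliders in a tool like RunwayML effectively reweight those competing losses.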
My hand in the diagramming lay in the decision process of which output best matched my own analysis of the paintings.
The outcomes varied in comparison with the students’ work. It was curious to observe their iterative analysis process: what decisions did the students make from their observations to produce such analyses? Consequently, I wondered: what decisions did the generative model make in its analysis?
As stated, the only hand I played in making the above diagrams was fine-tune adjustments. I did not analyze the paintings; the machine did. And, as quoted, visual analogy is a part of the creative process. So I ask: is the machine creative?
That is exactly the question artist Pindar Van Arman asked with his project Cloud Painter, which integrates artificial intelligence, image data, and painting to generate unique portraits. Cloud Painter started as a program that generated iterations of portraits. The project then developed, adding neural style transfer to the end results and adding robotics to the machine to physically paint the iterations.
Eventually, Van Arman gave autonomy to the machine. He wrote code that let the machine choose which image it wanted to paint, the style, and the colors.
Additionally, the robotic arm gained sight: it compared each brush stroke to the digital image and stopped when it believed the physical painting matched the digital image “to the best of its abilities.”
Van Arman isn’t the only pioneer in the AI art world. Refik Anadol has gained huge traction for what he describes as “Machine Hallucinations.”
His projects utilize generative AI, image data, and 3D projection mapping to create experiential installations.
His projects began by treating data as a pigment: compiling hundreds of thousands of images, sounds, and videos, and running them through generative programs to find the latent space between the images, an alternate reality drawn from the data used.
His work investigates not only machines as creatives, but the line of reality between machines and humanity.
Van Arman and Anadol’s work furthered my speculations: if a machine could become a creative, could a machine become a designer?
And, if so, would machines replace or aid humans as designers?
Regardless, how do we not lose sight of the human touch in design?
To answer my questions, I needed to conduct my own experiments. Looking at both artists' work as a foundation, I decided to study elements from their processes.
Both artists use image data and a class of generative AI models called Generative Adversarial Networks (GANs).
Experimenting with GANs, however, would not be as easy as using the Style Transfer model: Style Transfer is a form of AI in machine learning; GANs are a form of AI in deep learning.
And quite literally,
I needed to do deeper learning.
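The adversarial structure behind a GAN can be sketched compactly: a generator maps random noise to a fake sample, a discriminator scores each sample with a probability of being real, and the two are trained against each other. The toy NumPy sketch below shows only that structure, with assumed dimensions and no training loop; it is not Van Arman's or Anadol's actual pipeline, which operates on images at far larger scale.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Generator:
    """Maps a noise vector to a fake 'sample' (here, a toy linear map)."""
    def __init__(self, noise_dim=4, out_dim=2):
        self.w = rng.standard_normal((noise_dim, out_dim)) * 0.1

    def forward(self, z):
        return z @ self.w

class Discriminator:
    """Scores a sample with a probability in (0, 1) of being 'real'."""
    def __init__(self, in_dim=2):
        self.w = rng.standard_normal((in_dim, 1)) * 0.1

    def forward(self, x):
        return sigmoid(x @ self.w)

gen = Generator()
disc = Discriminator()

z = rng.standard_normal((5, 4))   # batch of 5 noise vectors
fake = gen.forward(z)             # the generator's forgeries
p_real = disc.forward(fake)       # the discriminator's verdicts

# Adversarial objectives: the discriminator pushes p_real toward 0 on
# fakes, while the generator pushes it toward 1. Training alternates
# gradient steps on these two losses until the forgeries become
# indistinguishable from real data.
d_loss = -np.mean(np.log(1.0 - p_real))
g_loss = -np.mean(np.log(p_real))
print(fake.shape, p_real.shape)
```

The tension between those two losses is the "adversarial" part: neither network can win outright, and the contest is what drives the generator toward convincing output.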