we are experience designers
ai and visualising our imagination
the potential to transform imagination into a concept
AI has entered the creative sector: AI artists hold prestigious exhibitions, many architecture firms now use AI in their design processes, and artists are suing AI companies. This creates a mix of fear and potential. So, as an AI creative, I explored AI to understand its role and possible applications for us.
Over the past few months, I have explored many aspects of AI and researched its potential for our design process as well as for exhibit design. AI's capabilities are developing rapidly; every other week there is a new tool or model that surpasses the previous one. Generative AI in particular is gaining a lot of traction across media such as images, video, audio, and text. Among the many applications AI has for us, I want to focus on one in this story: its potential to transform imagination into a concept.
An example
A large part of our design work involves concretising our imagination, a process that can be tedious and time-consuming before we can even share an idea with others for feedback. One of the media we use for this is 2D images. Now, AI can generate an image from your text in 10 seconds. Can these tools bring our imagination to life faster? Various image-generation tools exist, and they differ in quality and user-friendliness, but the essence is the same: you type a sentence (a prompt) and it returns an image. Let’s try. “A circular room with a large shell in the middle where children can climb on. The ground is scattered with plush sea creatures.” Not bad!
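For the technically curious: under the hood, such a prompt-to-image call can look roughly like the sketch below. This is a minimal example assuming the OpenAI Python SDK and its image-generation endpoint, which is just one of many options and not necessarily the tool behind the images in this story.

```python
# Minimal text-to-image sketch (assumes the OpenAI Python SDK and an
# OPENAI_API_KEY environment variable; other tools work along similar lines).
from openai import OpenAI

client = OpenAI()

prompt = (
    "A circular room with a large shell in the middle where children can climb on. "
    "The ground is scattered with plush sea creatures."
)

# Ask the model for a single 1024x1024 image of the prompt.
result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# The response contains a URL to the generated image.
print(result.data[0].url)
```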


I do want to change the backdrop, so let’s see if the AI can generate one for me. “A playful imagination of the Cambrian explosion with friendly looking sea creatures.”


Hmm, a bit more imaginative than I had in mind, but it gets the idea across. Now I would also like a close-up of the plush animals. “Plush animals of imaginative, soft, happy sea creatures.” Nice, I can even choose!


We now have three elements to convey a general impression to colleagues or clients. However, they are not perfect. In my imagination, the shell was a climbing object; the AI didn’t grasp this concept and refused to render it. This happens often. The tools work well for visualising the main elements of an idea but require adjustments to represent our imagination accurately. Moreover, we still need to rely on our own imagination and design insight to write prompts, judge images, and envision how elements can come together. The current AI capabilities do change the way you design, but they don’t replace you.
Limited representation of the world
While you might have seen impressive examples of AI-generated images on the internet, many of them are cherry-picked. For the plush animals, I needed only two tries. For the shell, I spent half an hour generating images (and using electricity) to get a good one. Sometimes I don’t get anything useful out of the AI, because it becomes utterly clear that the model doesn’t understand the world or has a narrow representation of a concept.
One of my favourite examples is AI's representation of the concept “museum”. It feels the need to put a showcase somewhere as soon as you mention that word in your prompt.
Here the prompt was “An ancient ship museum exhibit in a children’s science museum. Large room with glass walls. The ship is a playground with exhibits featuring pulleys and weight interactive elements.”


Since the prompt includes “museum” and “glass walls”, the AI put the ship inside a showcase; but since the prompt also mentions “playground”, it put the people inside the showcase as well. This example shows that writing prompts involves understanding the AI’s mental concept of a word and finding ways around it when your concept is different.
This also brings us to biases, something that worries me about the current state of AI. The models are often trained on biased data, and those biases are then expressed in their creations. As designers, we need to be aware of this in our work and find ways to circumvent it. Don’t just accept the image the AI returns; consider whether it represents your beliefs. This can be time-consuming, but it is important.
Here, the AI made a decent image of a playful theatre setup for kids. The only change that was asked for was to the children’s ethnic features. It did change this, but it also inserted stereotypes: now the kids are kneeling, wearing glasses, and reading books. In this case, I believe you have to keep adjusting the prompt until you find a way that represents your intention.


What’s next?
For us, AI has become an important tool for concept design. We can iterate on ideas much faster, both within our team and with clients. Visual language is more universal and leaves less room for interpretation. However, this also poses challenges. The generated images can feel more like a final design than a rough concept, limiting free ideation and creating unrealistic client expectations. Moreover, we are limited to what the AI returns, which gives us less control. While AI is developing rapidly, it will remain a different way of working from what we are used to.
In this story, I've only talked about visualising our imagination in concept design, but the application of AI is much broader. It can be applied to many media, and AI is more than generative AI alone. One opportunity that also excites me is the potential to apply AI in exhibits themselves, to bring the imagination of visitors to life. The first applications of this are emerging, and we are currently prototyping our first ideas. That is a journey for another story. But let me conclude with an invitation to you as a reader to let your imagination come to life. How would you use AI in an exhibit? Perhaps you could ask AI to create a visualisation of your idea?