With just a few words, stunning visual worlds can be conjured from the ether. Through an emerging and fast-evolving genre of artificial intelligence known as text-to-image generation, tools including DALL-E 2 and Midjourney have opened the doors to a cornucopia of visual creations. With neural networks trained on billions of images and their text descriptions, these tools can take simple phrases or word jumbles and offer up their interpretation in the form of highly detailed and surprisingly beautiful visuals, within seconds.
Among those field-testing these tools are architects and designers. Some are beginning to use them as powerful new avenues for visualizing early-stage concepts, testing out approaches to projects in the works, and even giving their non-designer clients a way to play a more active role in the design process.
“It’s almost like you’re speaking a building into existence,” says Paul Howard Harrison, computational design lead at the global architecture firm HDR, which has more than 200 offices around the world.
That might seem like a troubling development for an industry based on bringing buildings into existence with teams of highly trained humans, but Harrison says artificial intelligence will likely augment the work that architects do.
That’s already starting to happen. He’s been exploring the use of AI and specifically trained neural networks as design tools for the past few years, applying them to some of HDR’s projects, including a new hospital. For another project, involving a heritage building in Ontario, the firm trained a neural network on every listed heritage building in the city and turned the algorithm loose on generating new concepts that fit the local historical style.
Midjourney and DALL-E 2 broaden the applicability of this approach, Harrison says, and he’s used the tools to quickly generate ideas for actual projects the firm is working on, including a mixed-use community center in Toronto. He’s also been exploring more playful concepts, including housing designed in the style of children’s book illustrator Maurice Sendak. Another visualization created buildings inspired by noted architects Robert Venturi and Denise Scott Brown, with the word “duck” on the façade — a reference to their influential book “Learning From Las Vegas.”
Foster + Partners, the largest architecture firm in the U.K., has also been exploring AI and machine learning tools in the early conceptual design phase of projects. “In addition to using reference images or studying precedents online, we can now use these tools to quickly illustrate an idea or a feeling that we want a particular space to evoke,” a firm spokesperson tells Fast Company. “This side of machine learning is shaping up to be a useful tool for drawing inspiration.”
Some of the outputs are clearly more realistic than others, which could be a concern. Harrison says the wide accessibility of these AI tools and the detail of the imagery they produce could create far-fetched expectations about what can actually be turned from a concept into a building.
“The danger is that we’re now suggesting a photorealistic level of resolution,” Harrison says. “It’s successful in establishing the aesthetic of a building, but really that’s only a small part of what we do as architects.”
Taking ideas beyond traditional drawings
Architect Andrew Kudless has generated thousands of images with these new tools in recent weeks. A solo practitioner who also teaches at the University of Houston, he says these tools are a fancy version of the old school sketchbook, where vague ideas and building concepts can take rough form. “The advantage of Midjourney and other text-to-image generation tools is they serve a real purpose at the beginning of a design project when you’re conceptualizing and dreaming about what a project could be,” Kudless says.
These tools could also speed up the process of turning those rough ideas and napkin sketches into workable designs. “One of the difficulties that we have often in architecture is that it’s actually quite difficult to make images that capture the mood or the atmosphere of a project without a tremendous amount of work rendering them,” he says. Like many architects, he’s lost hours toiling on tiny details in computer-aided design programs and then more hours of waiting for images to be rendered in high detail, only to have to go back and revise or rework them again and again. “We’re not paid enough to spend the number of hours we actually spend on projects,” he says. “We need tools to help automate the tediousness of a lot of the work that we do.”
Kudless has been using Midjourney to visualize the use of fabric in architectural designs, such as curtains and shades that become integrated with the facades of homes. It’s a concept he’s been interested in since the late 1990s, but one he has struggled to simulate accurately with the computational design tools that have become mainstream in the architecture industry.
“It was beyond my expertise and something that was very difficult to deal with. I just didn’t have the time to go down there,” he says. “All of a sudden in the last six weeks of starting to use these AI tools, the AI doesn’t know how to simulate fabric either, but it’s looked at millions of images of dresses and table cloths draped on a table. It knows what it looks like without knowing the physics of it.”
Images he’s created through Midjourney show imagined buildings that appear covered in soft pastel curtains, as if inside a kaleidoscope. “And it can do it in seconds,” Kudless adds. “I can explore some of these things in a way that I was never able to using traditional drawings.”
Homes only seen in dreams
That ease is also creating opportunities for non-architects. Harrison says AI can help make it easier for clients and the public to show architects what they think a project can or should look like. They already do this, Harrison says, but the tools they have are often rudimentary. “I’ve seen everything from Excel spreadsheets that have been colored in to be floor plans to sketches, and more often now we’re getting things like Pinterest boards,” he says. “There’s a constant desire on the client side to express what they see, or what they would like to see.”
In a way, that could also be opening up more opportunities for architects. “I’ve had more people contact me about jobs in the last month than I have probably in the whole last year. People have so many ideas about houses they want to do or art installations, or music festival pavilions,” Kudless says. He envisions clients coming to him with all kinds of interesting conceptual images, whether based on places they’ve been or homes they’ve seen only in their dreams. “It’s been kind of overwhelming for me, but also you see how inspired these potential clients are.”
There are limitations to these tools, which their makers acknowledge. AI based on a set of images, even billions of them, is ultimately limited by what is in those images. That can result in AI systems that, for example, underrepresent women and people of color. In architecture, the bias is less discriminatory but still impactful. Harrison notes that the history of architectural photography has largely favored symmetrical views of buildings and rooms, and spaces devoid of people. These factors effectively narrow the range of architectural imagery these tools can produce. Harrison says those limitations underscore the argument for architects taking artificial intelligence into their own hands, and developing their own neural networks based specifically on the types of projects or buildings they’re working on at any given moment.
Both Harrison and Kudless see these tools as part of the evolution of the architecture profession, not a replacement of it. They join a long line of technological advances that have changed the way architects work, such as the shift from hand drafting to computer-aided design.
“All of these things have changed the profession, but I’m not really worried for people’s jobs,” Kudless says. “There will be jobs coming out of this that we don’t even know about.”