Generative design vs generative AI: a guide to the basics

Architect and AI expert Keir Regan-Alexander explains how practices including Stirling Prize winner Mae Architects are beginning to transform their design processes through powerful new computing methods

While generative design and generative AI have existed for a while, it is only now that we are seeing them wrapped up and packaged in such a way that makes the technology readily accessible to the ‘no-code’ majority – which includes most architects.

Take the British Museum, for example. An ever-so-slightly crooked stone court built in the 19th century and later housed under a precision-engineered pillow of metal and glass by Foster + Partners in 2000. This emphatic solution was a mainstream moment for parametricism, made possible by early generative design software running a ‘dynamic relaxation’ algorithm developed by Chris Williams. The algorithm settled on an optimum form and sized the length of each individual mullion accordingly.
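For readers who want to see the principle, here is a minimal sketch of dynamic relaxation on a toy cable net. It assumes zero-rest-length springs, a pinned boundary and a uniform upward load, and is an illustration of the general technique only, not the algorithm Williams actually wrote for the Great Court roof:

```python
# A minimal sketch of dynamic relaxation on a toy cable net.
# Zero-rest-length springs, pinned boundary, uniform upward load:
# the grid 'relaxes' into an equilibrium form, node by node.
# Illustrative only - not the Great Court algorithm itself.
import numpy as np

N = 11                       # nodes per side of the grid
pos = np.zeros((N, N, 3))    # x, y, z coordinates of each node
pos[..., 0], pos[..., 1] = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
vel = np.zeros_like(pos)

fixed = np.zeros((N, N), dtype=bool)   # pin the boundary nodes
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True

stiffness, damping, dt = 5.0, 0.9, 0.1
load = np.array([0.0, 0.0, 0.05])      # small upward load per node

for _ in range(2000):
    force = np.tile(load, (N, N, 1))
    # spring forces from the four grid neighbours
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        delta = np.roll(pos, shift, axis=axis) - pos
        edge = 0 if shift == 1 else -1  # discard np.roll's wrap-around edge
        if axis == 0:
            delta[edge, :, :] = 0
        else:
            delta[:, edge, :] = 0
        force += stiffness * delta
    vel = damping * (vel + dt * force)  # damped update so the net settles
    vel[fixed] = 0                      # boundary nodes never move
    pos += dt * vel

print("apex height after relaxation:", round(float(pos[N // 2, N // 2, 2]), 3))
```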

Increasingly, we are seeing the terms generative ‘design’ and generative ‘AI’ used interchangeably, and a wave of products is being rebranded under the heading ‘AI’ because it’s all the rage. Branding aside, the new toolset is so powerful that it may fundamentally change our design methods in practice. We therefore need to be clear in our minds about what we mean when we use these terms, and decide which tools will work for us in practice.


In architecture and design, we work at two extremes simultaneously: first, we deal with the abstract idea – taking words from our clients and synthesising them into materials and spaces; at the same time, we also deal with the finite, with absolute requirements for measurable real-world instructions – we annotate drawings with dimensions and exhaustive performance specs.

To create architecture, we need to balance precision and nuance – like two complementary hemispheres of the same brain. An architect who is only good in one hemisphere is unlikely to consistently succeed in the profession; you must be able to do both.

On the precision side, the technology broadly referred to as ‘generative design’ is relevant. Software in the generative design realm focuses on the quantifiable. These programs ruthlessly execute procedures based on deterministic parameters and controls defined at the outset. They give predictable results: vectors with coordinates that can be measured in millimetres – think of the British Museum roof.
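A trivial sketch makes the point about determinism. The function and its parameters below are hypothetical, not any real product’s API, but identical inputs will always return identical, measurable geometry:

```python
# A toy deterministic 'generative design' procedure. The function and
# its parameters are hypothetical - not any real product's API - but
# the point stands: identical inputs always return identical geometry.

def mullion_lengths(span_mm: float, rise_mm: float, bays: int) -> list[float]:
    """Size each vertical mullion under a parabolic roof profile."""
    spacing = span_mm / bays              # mullion spacing along the span
    lengths = []
    for i in range(bays + 1):
        x = i / bays                      # normalised position across the span
        z = rise_mm * 4 * x * (1 - x)     # parabolic rise, zero at both ends
        lengths.append(round(z, 1))       # lengths in mm, fully reproducible
    print(f"{bays} bays at {spacing:.0f} mm centres")
    return lengths

print(mullion_lengths(span_mm=30_000, rise_mm=6_000, bays=8))
```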

On the other hand, software that uses generative AI applies to the qualitative dimensions of practice and tends to sit in the fuzzier creative realm of pixels and copywriting. As professor Neil Leach describes it, machine learning made computers very good at correctly identifying the subject of an image; Ian Goodfellow then made a giant leap forward in 2014 when he managed to fundamentally reverse this idea, generating an entirely novel image rather than labelling an existing one. This was a computer ‘imagining’ something from nothing – an image born in a synthetic imagination and previously unseen by human eyes. That breakthrough, the generative adversarial network (GAN), ultimately led to the blockbuster platforms of ChatGPT, Midjourney et al that we see in 2023.
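For the curious, here is a minimal sketch of Goodfellow’s 2014 idea in PyTorch – a toy GAN that learns to imitate a simple one-dimensional distribution rather than images, so the adversarial mechanism is visible in a few lines. All names and hyperparameters are illustrative:

```python
# A toy generative adversarial network (GAN) - Goodfellow's 2014 idea.
# A generator learns to produce samples a discriminator cannot tell
# apart from real data. One-dimensional numbers stand in for images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # 'real' data: normal(3.0, 0.5)
    fake = G(torch.randn(64, 8))             # generator's attempt

    # train the discriminator: label real samples 1, fakes 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # train the generator: try to make the discriminator call fakes real
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
# should drift towards the real mean (~3.0) and spread (~0.5)
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")
```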

Here is a quick ‘cheat sheet’ for thinking about generative design versus generative AI – in many ways they are opposite phenomena:

Generative design: quantitative, deterministic, predictable – vectors measured in millimetres.
Generative AI: qualitative, fuzzy, novel – pixels and copywriting.


What do we mean when we use the terms generative design and generative AI?

I have begun to think that generative design was waiting for generative AI to come along to really find a place in the mainstream. I recently founded a company called Arka Works, which helps early-adopter practices with practical experimentation on live projects and practice challenges. This has led to collaborations with AI-curious practices looking to see what can be done today.

Everyone has a different problem to solve. For example, I am speaking with practices about applying LLMs (like GPT-4) to bid-writing assistance, using fine-tuned models trained on previous submission material. We can read and summarise lengthy planning and regulatory reports, even while they are being discussed live during meetings. Midjourney is proving incredibly powerful for early-stage material palette testing and mood boards derived from a client’s written brief.
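As a flavour of the summarisation piece, here is a minimal sketch using the OpenAI Python client as it stood in 2023. The model choice, prompt and file name are illustrative assumptions, and the fine-tuning on past bid material is a separate workflow not shown here:

```python
# A minimal sketch of LLM-assisted report summarisation, using the
# OpenAI Python client as it stood in 2023. The model, prompt and
# file name are illustrative assumptions, not a recommendation.
import openai

openai.api_key = "sk-..."  # your own API key

def summarise_report(report_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You assist an architecture practice. Summarise "
                        "planning and regulatory reports into key points, "
                        "risks and required actions."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

# with open("planning_report.txt") as f:
#     print(summarise_report(f.read()))
```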

On a much larger project scale, one example of generative design and generative AI techniques being used in combination is an early-stage masterplanning project we have been working on with Mae Architects. Here we are testing massing concepts for a strategic housing-led masterplan to define block types and align with an area and unit brief. We decided to beta-test Spacio, a new tool from an Oslo start-up that is a bit like a parametric SketchUp.

Source: Arka Works & Mae Architects

A live example of generative design and AI in practice: a hybrid workflow combining the generative design software Spacio (beta) with a hand sketch exploring urban form, public realm and character zoning

In this platform, you work by pre-programming building characteristics such as depth, façade grids, core access and windows, and then you start drawing or auto-generating whole masses very quickly with single-line inputs. You are in control: you set the constraints, then adapt and refine your concept blocks by pushing and pulling whole façades and roofs to arrive at your target composition.

Working in this way allows you to test the merit of a good or a bad idea very quickly, and it provides an immaculate record of areas at the same time - something I have always found problematic with more traditional methods.
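Spacio’s internals are not public, so the following is a hypothetical sketch of the underlying ‘single line in, massing and areas out’ idea: preset block parameters turn a drawn centre-line into floor plates plus a live area schedule. All names and numbers are illustrative:

```python
# A hypothetical sketch of the 'single line in, massing and areas out'
# idea behind tools like Spacio, whose actual internals are not public.
# Preset block parameters turn a drawn centre-line into floor plates
# plus a live area schedule. All names and numbers are illustrative.
from dataclasses import dataclass
import math

@dataclass
class BlockType:
    depth_m: float         # building depth across the centre-line
    floors: int
    floor_height_m: float
    net_to_gross: float    # efficiency after cores and circulation

def massing_from_line(x0, y0, x1, y1, block: BlockType) -> dict:
    length = math.hypot(x1 - x0, y1 - y0)    # centre-line length in metres
    gea_per_floor = length * block.depth_m   # gross area per floor plate
    gea = gea_per_floor * block.floors
    return {
        "length_m": round(length, 2),
        "height_m": block.floors * block.floor_height_m,
        "GEA_m2": round(gea),
        "NIA_m2": round(gea * block.net_to_gross),  # lettable/saleable area
    }

mansion_block = BlockType(depth_m=14, floors=6, floor_height_m=3.1, net_to_gross=0.8)
print(massing_from_line(0, 0, 62, 0, mansion_block))
# Push/pull the line or swap the block type and the schedule updates instantly.
```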

We modelled three options for a 1,500-home scheme in two days and then worked over the basic massing by hand with new ideas. We sketched ideas for character zones, defensible space, public realm, varied façade grids and roof forms directly on top. Then we took this sketch straight to a rendering using Stable Diffusion with a feature called ControlNet, which allows the AI to experiment with light and material while being firmly constrained by your input design. The end point is a striking rendered view, produced at sprint pace.
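This is not our exact pipeline, but a representative ControlNet set-up using the open-source diffusers library as it stood in 2023 looks something like the sketch below; the file names and prompt are illustrative:

```python
# A representative sketch-to-render set-up with Stable Diffusion and
# ControlNet, using the open-source diffusers library (2023-era API).
# Not our exact pipeline; file names and the prompt are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# the scribble ControlNet constrains the render to follow the linework
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("masterplan_sketch.png").convert("RGB")  # white lines on black
image = pipe(
    "aerial view of a brick housing masterplan, warm evening light, "
    "landscaped public realm, varied roof forms",
    image=sketch,
    num_inference_steps=30,
).images[0]
image.save("render.png")
```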

Source: Arka Works & Mae Architects

Image rendered in Stable Diffusion using ControlNet, directly from the hand sketch - no traditional 3D modelling involved.

After the render, we tested daylight and sunlight potential, wind comfort and embodied carbon, all using technology from this new toolset.

This single worked example demonstrates a striking new method for strategic design that feels very different. It is ‘hybrid’, in that it combines digital and analogue methods. It is ‘parametric’, in that we are leveraging each click to produce many more procedures. And it is made ‘vivid’, because we can take sketchy linework and, in combination with written prompts, explore light and materials immediately.

We came away with powerful conclusions about how to progress the design: what worked and what didn’t. The ideas we discarded were indulged for the minimum amount of time; they ‘failed fast’. Compare this with a design idea that is kept alive for several months, only for a fundamental technical issue to reveal that it was flawed from the start.

My conclusion from such experiments is that we will soon move away from linear decision-making processes, where ideas can only be validated by passing through a series of traditional gateways – quantity surveyor, fire engineer, structures, LCA assessor and similar reporting methods that take months to conclude. Instead, you work out whether an idea is good very fast and adapt in a more agile way, designing and testing your ideas quickly. Then, when you come to run the full engineering analysis, you are simply validating the wise decisions you have already made upstream.

We can bring the team into a huddle and do shorter energetic sprints that are focused on one key learning at a time.  This new approach puts so much knowledge and insight in the hands of architects that we should feel empowered by it and excited about a new mode of practice in the future.

Keir Regan-Alexander, principal at Arka Works, is speaking at a free-to-attend AJ webinar on AI in architecture this Wednesday. For more details, click here
