Stefan Suwelack, Sep 16 2020
In October 2018, the first ML-generated painting “Edmond de Belamy” sold for $432,500 at an auction at Christie’s. In order to create the painting, the art collective Obvious used an open source implementation of a generative adversarial network and trained it on 15,000 portrait images from various periods. Naturally, the question arises whether generative neural networks can be successful not only in creating art, but also in creating new designs for media, architecture and engineering.
It is pretty obvious that creating “Edmond de Belamy” is much simpler than designing a building or an engineering part. However, the basic approach sounds feasible even for more complex tasks: Train a neural network with many old design variants along with the associated requirements, then let the network generate a design from new requirements. To compensate for unsuitable designs produced by the algorithm, an AI-assisted Design system could create many variants and let the engineer select the best one.
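To make this idea concrete, here is a minimal sketch of the approach. Everything in it is hypothetical: the requirement and design parameters are made up, and a linear least-squares surrogate stands in for the neural network mentioned above — the point is only the workflow of training on historic requirement/design pairs and then proposing several variants for the engineer to choose from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historic data: each row pairs requirements (e.g. load,
# span) with the design parameters (e.g. thickness, rib height) an
# engineer chose for them.
requirements = rng.uniform(1.0, 10.0, size=(200, 2))
true_map = np.array([[0.5, 0.1],
                     [0.2, 0.8]])
designs = requirements @ true_map + rng.normal(0.0, 0.05, size=(200, 2))

# "Training": fit a linear surrogate mapping requirements to design
# parameters (a stand-in for the neural network in the text).
W, *_ = np.linalg.lstsq(requirements, designs, rcond=None)

def propose_variants(new_req, n_variants=5, noise=0.1):
    """Generate several design variants for new requirements by
    perturbing the surrogate's prediction; the engineer then
    selects the best one."""
    base = new_req @ W
    return base + rng.normal(0.0, noise, size=(n_variants, base.size))

variants = propose_variants(np.array([4.0, 7.0]))
```

A real system would of course replace the linear map with a learned generative model and the two toy parameters with a full geometry representation.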
Because this approach seems pretty straightforward and because AI-based methods received a lot of attention, the term AI-assisted Design quickly caught on in the engineering community. However, there is currently no clear definition of the term. In particular, it is often used for processes and algorithms that do not use machine learning at all. In this blog post, we try to sort out the relevant methods in order to attempt a definition for AI-assisted design.
Engineers have used computer-based methods to generate designs from scratch well before the advent of deep learning. Starting in the late 1990s, companies like FE-Design (now part of Dassault Systèmes) and Altair pioneered an optimization technique that was able to automatically create a new shape from a given design space along with functional variables (e.g. loads, constraints). In order to achieve this, the behavior of the part is simulated repeatedly and its topology is changed with a suitable optimization technique such as gradient-based methods or bionic algorithms. While the approach is most commonly used in conjunction with an FEA-based stress analysis, it can also be applied to CFD simulations as well as modal or thermal analyses.
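The simulate-then-update loop can be illustrated on a deliberately tiny toy problem. The sketch below is not a real FEA-based topology optimization: it replaces the finite-element model with independently loaded springs whose stiffness is penalized in the SIMP style, and uses a damped optimality-criteria update with a bisection on the Lagrange multiplier to enforce the material budget. All parameter values are illustrative.

```python
import numpy as np

def toy_topology_optimization(forces, vol_frac=0.5, penal=3.0,
                              eta=0.3, n_iter=50):
    """Distribute material over independently loaded springs.

    Each spring i gets a density x_i in (0, 1], has stiffness
    x_i**penal (SIMP penalization) and carries force f_i, so the
    compliance to minimize is C = sum(f_i**2 / x_i**penal).  The loop
    mirrors the optimality-criteria method: compute sensitivities
    ("simulate"), then update densities under a volume constraint.
    """
    f = np.asarray(forces, dtype=float)
    n = f.size
    x = np.full(n, vol_frac)                  # uniform initial design
    for _ in range(n_iter):
        # Compliance sensitivity w.r.t. each density (negative:
        # adding material always stiffens the part).
        dc = -penal * f**2 * x**(-penal - 1.0)
        # Bisection on the multiplier so that sum(x) = vol_frac * n.
        lo, hi = 1e-9, 1e9
        while (hi - lo) / (hi + lo) > 1e-6:
            lam = 0.5 * (lo + hi)
            x_new = np.clip(x * (-dc / lam)**eta, 1e-3, 1.0)
            if x_new.sum() > vol_frac * n:
                lo = lam      # too much material -> raise the "price"
            else:
                hi = lam
        x = x_new
    return x

# Material concentrates on the highly loaded springs; the analytic
# optimum here is x_i proportional to f_i**(2 / (penal + 1)).
x_opt = toy_topology_optimization([1.0, 2.0, 4.0, 8.0])
```

In a real tool, the sensitivity line is replaced by a full finite-element solve per iteration, which is why commercial topology optimization is computationally expensive.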
The result of the topology optimization is typically a bionic lattice structure with a discrete checkerboard-like appearance. In this form, such an optimized part is neither editable in a CAD system nor manufacturable. This is why topology optimization was initially only used to inspire engineers to come up with new designs. In the subsequent design process, the engineer would adopt some features of the optimized geometry while also taking into account other design objectives such as aesthetics or manufacturability. Used this way, topology optimization was thus a tool for the conceptual phase of the design process.
Designs that were inspired by topology optimization quickly became popular, especially in lightweight applications. However, the process of deriving the final design involved a lot of manual effort by the design engineer. That’s why current software tools use topology optimization as part of a larger generative design process. Within this process, engineers first establish a definition of their design intent in terms of goals and constraints. New design variants are then generated using topology optimization along with some post-processing such as smoothing. The engineer can compare these suggestions and select the most promising designs. In a final step, these variants are converted into manufacturable and editable designs and the best one is chosen.
While the concrete steps may vary, there are many different software tools available that support similar types of generative design workflows. The crucial steps that set these workflows apart from vanilla topology optimization are functionalities that ensure the manufacturability of the design. For additive manufacturing, this step can be as simple as applying a smoothing filter in a post-processing step. However, typically this involves much more complex design changes. It is quite difficult to establish manufacturability criteria or even enforce manufacturability of a general design. This is still an active area of research and a promising application for machine learning.
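The simplest variant of such a post-processing step — smoothing a raw density field to suppress the checkerboard artifacts mentioned earlier — can be sketched in a few lines. This is only an illustrative box filter on a toy 2D grid; production tools use considerably more sophisticated filtering and geometry reconstruction.

```python
import numpy as np

def box_smooth(density, radius=1):
    """Average each cell of a 2D density field over a
    (2*radius+1)**2 neighborhood - a minimal post-processing filter
    that suppresses checkerboard patterns in a raw
    topology-optimization result."""
    padded = np.pad(density, radius, mode="edge")
    k = 2 * radius + 1
    out = np.empty_like(density, dtype=float)
    for i in range(density.shape[0]):
        for j in range(density.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A pure checkerboard (the classic artifact) flattens toward 0.5;
# thresholding the smoothed field then yields a cleaner shape.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
smoothed = box_smooth(checker)
```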
In practice, it is often not apparent which algorithms are used behind the scenes in commercially available generative design software. In particular, it is typically unclear if any data-driven or machine learning based methods are used at all. Nonetheless, these solutions are often touted as AI-assisted design workflows. In reality, this always means that the approach is based on topology optimization, and only sometimes that the workflow is additionally supported by machine learning.
Assessing the manufacturability of a design is rather straightforward routine work for experienced engineers. However, it is difficult to encode the necessary geometric intuition into rule-based software. In contrast, machine learning can capture this implicit knowledge. Many considerations in the design process rely on this kind of geometric intuition: Is the design aesthetically pleasing? Does the placement of ribs or beads make sense? Is the part easy to grip for a robot? Will the tooling for the part be complicated? These questions arise not only for parts that are created within generative design workflows, but also within classical design processes.
Machine learning tools that are able to automatically assess such criteria can both speed up the design process and boost quality. They can in particular provide a safety net that checks the design and highlights potential errors for the engineer. Along with the warning, the ML-based system can also display similar examples from past designs. Based on this information, the engineer can then choose to ignore the findings or to correct the design. Through the feedback from the engineer, the system can continuously learn to identify not only general design patterns, but also application-specific design guidelines.
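A minimal sketch of such a safety net is a similarity search over past designs: flag the new design if most of its nearest historic neighbors had a known problem, and return those neighbors so the engineer can inspect similar past examples. The two-dimensional feature vectors and the clustering below are entirely made up for illustration; a real system would use learned geometric features.

```python
import numpy as np

def check_design(features, past_features, past_labels, k=3):
    """Warn about a new design if most of its k nearest historic
    designs had a known problem; also return those neighbors so the
    engineer can inspect similar past examples and decide whether
    to correct the design or ignore the finding."""
    dists = np.linalg.norm(past_features - features, axis=1)
    nearest = np.argsort(dists)[:k]
    n_flagged = int(np.sum(past_labels[nearest]))
    return n_flagged > k // 2, nearest

# Hypothetical feature vectors (e.g. wall thickness, draft angle):
# past problem cases cluster near (1, 1), sound designs near (5, 5).
rng = np.random.default_rng(1)
past = np.vstack([rng.normal(1.0, 0.2, size=(20, 2)),
                  rng.normal(5.0, 0.2, size=(20, 2))])
labels = np.array([1] * 20 + [0] * 20)   # 1 = past manufacturing issue

warn, similar = check_design(np.array([1.1, 0.9]), past, labels)
```

The engineer's accept/ignore decisions on such warnings are exactly the feedback that lets the system learn application-specific guidelines over time.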
We have seen that current generative design workflows in engineering work very differently from the process that was used to create “Edmond de Belamy”. In engineering applications, new designs are discovered by relying on physics-based models and optimization techniques. Machine learning is not used to create new concepts, but rather to infuse general knowledge about manufacturability or other design objectives. In contrast, “Edmond de Belamy” was created in a purely data-driven way. This raises the question of whether this kind of generative AI will play a role in engineering in the near future.
Although generative ML methods such as Generative Adversarial Networks already achieve impressive results in image, audio and video applications, they do not yet work well for 3D engineering designs. However, given the tremendous progress over the last years, it is highly likely that this will change in the near future. This means that machine learning will not only be able to detect possible errors in a design, it might also generate proposals for a correct design. Such algorithms can probably be trained on the very same data that is collected when using ML as an error-checking tool.
Even under the assumption that the current generation of generative ML methods will continue to improve very quickly, purely data-driven approaches will probably be limited by two factors. First, such a process will essentially only be able to generate sophisticated variants of known designs. Second, these algorithms work by learning distributions and correlations within historic data; they do not directly capture design intent and causal relationships. This means that while these algorithms will be able to fix small design errors by comparing local patterns over a comparable set of historic designs, they will not be able to globally adapt a design to fundamental changes in the requirements.
In order to overcome these two limitations, the continuous evolution of currently available ML methods will most likely not be enough. Instead, general breakthroughs in AI technology will be necessary. In the meantime (and probably for a very long time), the current approach makes absolute sense: Human engineers use their contextual knowledge and imagination to create a basic design layout. Physics-based methods are used to generate optimized part shapes within this layout. ML-based methods provide manufacturing constraints and quality control for automated workflows.
We have seen in our previous blog post that quality checking and small task automation are great use cases for ML-based methods. In this post, we established that the creative nature of generative design workflows is based on topology optimization and not on generative ML algorithms. This raises the question of whether AI technology is useful at all as a creativity tool for engineers.
I would answer this with a resounding “yes”. While it is true that the technology can only detect similarities and correlations and not causal relationships, these functionalities can be very powerful when combined with human knowledge, intuition and imagination. In a future blog post, we will detail how ML-based similarity measures enable engineers to generate new ideas based on historic design data as well as experimental data.