Use Cases for AI-assisted Engineering

· 8 min read

In a previous post we concluded that "AI-assisted Engineering" essentially means data-driven process automation. And while a true "Augmented Intelligence" for engineers will most certainly go hand in hand with significant changes in the product development process, first incremental steps towards the data-driven automation of manual routine tasks can be implemented within existing processes. In this post we analyze categories and typical properties of use cases for this approach. It should serve as a first step towards developing an understanding of the potential and the limits of current AI-assisted engineering technology.

Function Approximation and Task Automation

From a mathematical standpoint, deep learning provides a mechanism to learn highly non-linear relationships between input and output variables. That is why the technology can serve as a black-box modeling tool for nearly any problem. The real question, however, is whether it outperforms existing rule-based, statistical or physical models. As of today, the answer is very often a resounding "no". As the technology progresses this will definitely change: in particular, the possibility to pre-train neural networks opens up a path towards more data-efficient and robust, yet flexible and fast models.
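
To make the idea concrete, here is a minimal sketch (assuming PyTorch is installed) of deep learning as a black-box function approximator: a small fully connected network fitting a non-linear 1-D relationship from samples.

```python
import torch
import torch.nn as nn

# Synthetic "ground truth": the non-linear relationship to be learned.
def f(x):
    return torch.sin(3.0 * x) + 0.5 * x ** 2

x = torch.linspace(-2.0, 2.0, 256).unsqueeze(1)  # inputs, shape (256, 1)
y = f(x)                                         # targets

# A small multilayer perceptron as a generic black-box approximator.
model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.6f}")
```

Whether such a model is preferable to a rule-based, statistical or physical model is exactly the question raised above; the sketch only shows how little problem-specific structure the approach requires.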

One interesting application is the creation of material models for finite element simulations by training neural networks with experimental data. Even more impactful is the use of the technology as a surrogate for computationally expensive physical models. This would not only allow engineers to perform design analysis and optimization a lot faster, but also to fuse statistical information with physics-based approaches, which is especially useful in the context of digital twins. Currently, deep learning reaches neither the accuracy of physics-based models nor the data-efficiency of classical surrogate modeling techniques such as reduced-order models or response surface methods. However, it might provide a powerful framework for new engineering tools that can be used early in the design process or over the whole lifetime of a complex product.
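
The surrogate workflow itself is independent of the model class. The sketch below (assuming scikit-learn is installed; expensive_simulation is a placeholder for a real solver) uses a Gaussian process, one of the classical data-efficient techniques mentioned above, but a neural network could be trained on the same input/output pairs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(design_parameter):
    """Placeholder for a solver run that would normally take minutes."""
    return np.sin(5.0 * design_parameter) * np.exp(-design_parameter)

# Sample the expensive model at a handful of design points ...
X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# ... and fit a cheap surrogate on those input/output pairs.
surrogate = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.3), normalize_y=True
)
surrogate.fit(X_train, y_train)

# Design analysis and optimization now query the surrogate instead
# of the solver, at a tiny fraction of the cost.
X_query = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = surrogate.predict(X_query, return_std=True)
print(f"max predictive std dev: {std.max():.4f}")
```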

There is one special case of general function approximation where deep learning most often outperforms classical computer algorithms: automating tasks that are currently done by humans in a routine way. As discussed in the previous blog post, this especially applies to tasks that require little mental effort and instead rely on a kind of intuitive understanding of a scenario. In engineering, this description fits many tasks in the realm of process monitoring, quality control or geometry processing. Automation via deep learning is especially interesting for processes that consist of a long chain of small tasks which are easy for humans to do, but difficult to automate with classical algorithms.

General vs. Application-specific Solutions

Physics-based models based on partial differential equations have become an indispensable tool in modern engineering. Due to efficient numerical solvers, these kinds of models are used for the design of everything from the smallest printed circuit boards to the largest buildings. Apart from the need for suitable material models, the technology offers a very general solution for many problems due to its roots in universal physical principles.

In contrast, ML-based solutions are typically application-specific: the models only work well if they are trained on historic data for the specific use case. The effort needed for this step in terms of data management and compute resources can be quite significant. It thus makes sense for established software vendors and startups to focus on solutions that generalize reasonably well across specific use cases, or at least across company-specific processes. However, there are probably only a few such use cases in engineering (e.g. surrogate modeling, visual quality control, geometry processing).

The key to realizing the tremendous automation potential of all other use cases is to make the adaptation process for ML-based solutions extremely efficient. This is not an easy task, and creating easy-to-use authoring tools will not be enough: the education of engineering teams in ML methods and the availability of a modular IT infrastructure will be just as important. Making the creation of robust AI-powered engineering apps easy and efficient is thus a task that cannot be done by a single entity, but has to be a community effort of software vendors, academic institutions, service providers and end users.

Intuitive Data Understanding and Handling Big Data

Manual routine tasks in engineering typically require an intuitive understanding of complex, unstructured data. Good examples are geometry processing tasks such as reverse engineering, meshing and modeling. Experienced engineers develop a feeling for when something "looks right" and act accordingly. These tasks still have to be performed manually, because this kind of feeling is very difficult to capture with classical rule-based software.

In contrast, a data-driven approach is much better suited to automating such tasks. Two main factors influence its performance: the availability and quality of historic data, and the contextual knowledge required to complete the task. With enough experience and the right methodology, the first aspect can be assessed relatively quickly for a given use case. The second factor is much harder to determine. A good example is the development of self-driving cars: apparently, humans use much more contextual knowledge when driving than initially anticipated. ML-based solutions have to approximate this knowledge by digesting enormous amounts of data that have to cover all possible edge cases. In theory, a purely data-driven approach might solve the problem; in practice, we are currently seeing that it is extremely expensive and might not work out at all.

So, let's assume you have identified a suitable task for data-driven automation: somebody has to manually assess the quality of a simulation result based on time series and geometric data. The task involves checking for known errors and general abnormal behavior, it doesn't require a lot of contextual knowledge, and a human needs a couple of minutes to complete it. Is it worth automating with ML-based methods? This obviously depends heavily on how often the task is performed and, as a consequence, on how much business value the automation solution creates.

Often, the most business value can be found in applications that have to deal with lots of data. Taking the previous example, it is probably not worth automating the quality control step if it is only executed once a week. However, if hundreds of simulations are run every day in the context of optimization procedures, automation becomes very valuable. Manual routine tasks that don't take much time individually, but are embedded in applications that create or use large amounts of data, are good use cases for data-driven automation.
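
For the hypothetical quality check above, a minimal sketch could look as follows (assuming scikit-learn; the signals and features are illustrative stand-ins): each simulation run is reduced to a few summary features, and an isolation forest flags runs that deviate from the historic distribution for human review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def extract_features(time_series):
    """Reduce one simulation output signal to simple summary statistics."""
    return [time_series.mean(), time_series.std(),
            np.abs(np.diff(time_series)).max()]

# Stand-in for historic results: 200 normal runs (smooth decaying signals).
t = np.linspace(0.0, 1.0, 500)
normal_runs = [np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)
               for _ in range(200)]
X_train = np.array([extract_features(run) for run in normal_runs])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

# A new run with a sudden jump, as a solver instability might produce.
bad_run = np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)
bad_run[250:] += 0.5
verdict = detector.predict(np.array([extract_features(bad_run)]))
print("flag for human review" if verdict[0] == -1 else "looks normal")
```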

Pattern Recognition vs. Pattern Generation

So far, we have mainly talked about the possibility to detect certain patterns (or their absence) in complex data. A much more time-consuming task for humans is the creation of such patterns. For example, it is much more work to design a part that can be manufactured with injection molding than to assess whether a given part can be manufactured with this process.

Generative machine learning methods have achieved impressive results in natural language as well as image and video processing. Currently, however, these methods do not work robustly in the context of geometric pattern generation and are thus not ready for real-world engineering applications. It is worth pointing out that for many scenarios the training data for pattern recognition use cases is essentially the same as for pattern generation use cases. That is why companies that have already established ML-friendly processes and IT infrastructure will benefit the most from generative technology once the algorithms are ready.

It should be noted that in the engineering community ML-based generative methods are sometimes confused with physics-based topology optimization techniques, as both approaches can be lumped together under the term "generative design". Given the hype around ML-based methods and the blurry definition of "AI", this confusion is sometimes intentional. Topology optimization is definitely an amazing tool, and in the future ML-based methods will probably help to make this approach more manufacturing-friendly. Currently, however, the two approaches are very different from an algorithmic point of view.

Human in the Loop

Humans do make mistakes. Engineers are sometimes hesitant to admit it, but mistakes are unavoidable in engineering processes. Thus, it is important to have mechanisms in place that detect and correct these errors. Here, ML can provide an additional safety net in the form of process monitoring and anomaly detection.

Like humans, ML-based methods do not achieve 100% accuracy. The bad news is that when the AI is wrong, it can make absurd errors. The good news is that it is possible to assign a confidence to each result. This makes it possible to bring in a human reviewer whenever the confidence is low.
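
As an illustration, here is a minimal sketch of such confidence-based routing (the threshold and the classifier outputs are illustrative assumptions, not an existing API): predictions below a confidence threshold are escalated to a human reviewer instead of being accepted automatically.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # tuned per use case and acceptable error rate

def route_prediction(class_probabilities):
    """Accept a prediction automatically or escalate it to a human."""
    confidence = float(np.max(class_probabilities))
    label = int(np.argmax(class_probabilities))
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label, confidence)
    return ("human_review", label, confidence)

# Example: softmax outputs of a hypothetical defect classifier.
print(route_prediction(np.array([0.97, 0.02, 0.01])))  # accepted
print(route_prediction(np.array([0.55, 0.40, 0.05])))  # escalated
```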

When thinking about possible use cases for AI-assisted engineering, one of the most important aspects is how errors of the ML algorithm are handled. What error rate is acceptable? Is a large number of false positives or false negatives a problem? How can a human efficiently monitor and interact with the AI system?
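
These questions map directly onto standard error analysis. A short sketch (assuming scikit-learn; the labels are made up for illustration) of counting false positives and false negatives for a hypothetical defect detector against human-labeled ground truth:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # 1 = actual defect
y_pred = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0]  # 1 = flagged by the model

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives: {fp} (extra review effort)")
print(f"false negatives: {fn} (defects slipping through)")
```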

Having good answers to these questions early on will make it a lot easier to build successful use cases for AI-assisted engineering. If the human-machine interaction is designed efficiently, ML-based methods can be very valuable even if only limited data is available at the time of deployment. From there, the performance of the AI system will continuously improve, as the human input serves as additional training data.
