Designing the Future - Engineering.com

New Tech Unlocks Circuit Design Potential
Siemens' Thomas Yip on how software speeds circuit design.


This episode of Designing the Future is brought to you by Siemens Capital Electra X.

Circuit development for electronic devices has never been easy. The schematic is key, and the path from ideation to an understandable, buildable circuit is essential for project success. Today, there are multiple software tools that free the circuit designer from the traditional constraints of conventional circuit layout, opening the door to faster iteration and better, more reliable devices.

Joining engineering.com’s Jim Anderton to discuss how high technology can free circuit designers to innovate faster and at a lower cost is Thomas Yip, Software Development Director, Integrated Electrical Systems, Siemens.

Learn more about the speed, convenience and efficiency of Capital Electra X, and sign up for a 30-day free trial.

* * *

Episode transcript:

Jim Anderton
Hello, everyone, and welcome to Designing the Future. Circuit development for electrical and electronic devices has always focused on the schematic. Circuits are generally complex, but a well-developed block diagram and schematic make current and signal flow accessible and understandable, and facilitate further circuit development as well as service. As part of an intelligent engineering development program, the process from ideation to manufacturing should not only facilitate understanding of device operation, but act as a launching pad for design iteration and both educate and inform the engineering design team. Today there are sophisticated tools that take the grunt work out of circuit layout and design and can help bridge the gap between concept and production release. Joining me to explain how design engineers and circuit designers can leverage these technologies is Thomas Yip, Software Development Director, Integrated Electrical Systems at Siemens. Tom, welcome to the show.


Thomas Yip

Thank you Jim. Thank you for having me here.


Jim Anderton

Tom, this is a fascinating subject. When I began in this industry over 30 years ago, circuit layout was done on the back of a beer coaster or a napkin or a scratch pad, and it was transferred to a paper schematic, which was then formalized with decals, with Letraset. We would literally use a metal stylus and emboss these things in, and it created a very rigid kind of workflow, a system where making changes on the fly was quite difficult, quite expensive, and created a large paper trail. How do the modern systems that we have today compare in terms of efficiency to the way that we used to do circuit design?

Thomas Yip
Well, there's a huge difference nowadays, mainly because the industry is driven by more and more complexity, more and more necessity for collaboration. Teams are spread all over the world, most of them working remotely. So there is a need to have these things done collaboratively, with good workflow, version tracking and so forth. That is a huge difference nowadays. The demand for such tools, especially in the last couple of years, has really accelerated.

Jim Anderton
Historically, there was a development process where we'd have an idea and then perhaps a block diagram, and we'd tend to compartmentalize or modularize the design. Perhaps you'd have a radio frequency expert on your staff, or someone who was the power supply man or woman, and you would sublet the design to those people and integrate it later. How has that design methodology changed? We live in a world now where your radio frequency expert might be a continent away or halfway around the world. How does that change the way we design circuits these days?

Thomas Yip
Well, this is very common. In engineering, we tend to break down a big problem into smaller, compartmentalized problems. With conventional tools that is going to be more difficult, because everyone has their own PC and their own software installed on it, so in terms of collaboration and sharing it's going to be harder. But now we have these cloud-native tools, which essentially make all this collaboration superbly easy. We also have real-time collaborative diagramming. That means that at the same time, Jim, you can work on your part of the circuit and I can work on my part of the circuit.

These kinds of tools are based on the cloud. They are built with the cloud in mind, enhanced with every layer of security to ensure that third parties don't get access to the circuit that you and I are working on. So right now, especially at Siemens, we are really pushing this sort of collaborative, cloud-native software out there so that users get to benefit from it without the need to set up their own IT team, their own servers and so forth.

Jim Anderton
Tom, there's an old expression that too many cooks spoil the broth, and with collaboration, of course, we now have the ability to bring a very large number of people into the design process. Historically, with traditional systems, it's been very difficult to corral and control the revision process, to make sure that we didn't go through the alphabet and have many dozens of different revisions before we locked the program down. It sounds like you're talking about a way to create some order from that chaos.

Thomas Yip
Yes, absolutely. We have the ability to, say, limit users to view permissions or comment permissions, so certain people can comment but not work on the diagram. And even for people who can work on the diagram, there is a version history. We know exactly who made changes at what time, whether in the middle of the night or in the morning. We have the ability to compare different versions, compare yesterday's version with today's version, and if we don't like it, we can roll back. Every single one of these changes is tracked and can be rolled back in one huge jump or step by step. These capabilities are automatically built into the software.
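To make the mechanics concrete, here is a minimal Python sketch of the snapshot, diff and rollback pattern Yip describes. It is purely illustrative; the class and field names are invented for this example and are not Capital Electra X's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Version:
    author: str
    timestamp: datetime
    components: dict  # reference designator -> part number, e.g. {"C1": "CAP-0805-100N"}

class SchematicHistory:
    """Every edit is committed as a snapshot; any two snapshots can be diffed."""
    def __init__(self):
        self.versions = []

    def commit(self, author, components):
        self.versions.append(Version(author, datetime.now(timezone.utc), dict(components)))
        return len(self.versions) - 1  # version index

    def diff(self, old, new):
        a, b = self.versions[old].components, self.versions[new].components
        return {
            "added":   {k: b[k] for k in b.keys() - a.keys()},
            "removed": {k: a[k] for k in a.keys() - b.keys()},
            "changed": {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]},
        }

    def rollback(self, to):
        # "Rolled back in a huge jump": simply restore the chosen snapshot.
        return dict(self.versions[to].components)

history = SchematicHistory()
v0 = history.commit("jim", {"Q1": "IRF540N", "C1": "CAP-100N"})
v1 = history.commit("thomas", {"Q1": "IRF540N", "C1": "CAP-47N"})
print(history.diff(v0, v1))  # shows exactly what changed between the two commits
```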

Jim Anderton
Now, I come from a manufacturing, mass-production world, and consumer product design in particular has some interesting complications. It could be as simple as, for example, a purchasing manager approaching the engineering department and saying, I need to change the source for a power semiconductor. It's a functionally identical part, but from a different vendor, which means it has to have a different part number. That different part number has to be reflected in the bill of materials and in the documentation, all the way backwards and forwards. Suddenly we have a different schematic, a different bill of materials, a different part, all of which still has to function in a production environment. How can systems like the ones you're talking about help smooth over that change? Those sorts of changes used to create chaos.

Thomas Yip
Well, it used to be that if you and I needed to share these sorts of drawings, I needed to have them in multiple places. We used to do it by sending emails and then renaming the file to V1 and V10 and V20, or "final version." Because of the nature of the software that is built today, it lives on the cloud, and hence you can be on a plane and ask me to update something. The moment you land and open your laptop, you have the latest version. You can be assured that it is the latest version.

So this saves us a lot of time and a lot of problems in terms of disseminating all these things. We could give the production shop floor view permission, and I could be anywhere in the world editing that drawing. The next morning, if there's a production run, when they open that drawing it's going to be updated, confirmed. This sort of collaborative cloud software brings a lot of automatic syncing that is extremely beneficial to customers.
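The single-source-of-truth idea behind that workflow can be shown in a few lines. In this hypothetical sketch (the part numbers and data structure are invented, not from any Siemens API), the schematic view and the BOM both reference one shared part record, so a vendor substitution is immediately visible in every view:

```python
class PartRecord:
    """One authoritative record per component, referenced by every view."""
    def __init__(self, part_number, description):
        self.part_number = part_number
        self.description = description

mosfet = PartRecord("IRF540N-VENDOR-A", "Power MOSFET")

schematic_view = {"Q1": mosfet}   # reference designator -> shared record
bom_view = [("Q1", mosfet)]       # the BOM points at the same object

mosfet.part_number = "IRF540N-VENDOR-B"   # purchasing swaps the vendor

# Both views see the change at once; there is no stale emailed copy to chase.
assert schematic_view["Q1"].part_number == bom_view[0][1].part_number == "IRF540N-VENDOR-B"
```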

Jim Anderton
Yes. Tom, historically, in my experience there's been a little natural, and actually beneficial, tension between the production engineering staff and the pure design side. It was quite common for the physical constraints imposed by the system (heat sinks, the physical size of components, for example) to cause issues. I've seen circumstances where the substitution of a cheaper ceramic capacitor for an expensive tantalum capacitor meant a physical problem with the printed circuit design, that kind of thing. It sounds like we're talking about a way we can perhaps get the design engineers to work more closely with the manufacturing engineers.

Thomas Yip
Yes, absolutely. We have done a ton of studies and we find that lately, not only has project complexity increased, but a lot of companies are facing time-to-market pressure. It used to be, for example, that you might be able to design a new car in three years. That is not competitive anymore. You need a new car probably every two years, or even every year.

So the number of people needed to collaborate on all these things, including the production floor, is going to increase multiple fold, and hence there needs to be a way for all these people to edit and communicate. Whether a project succeeds or fails relies heavily on whether the stakeholders are able to collaborate and communicate. So we are extremely happy that the platform and architecture, especially for our Capital Electra X, has all this collaboration, syncing and communication built in. It easily allows all these parties to work together: the shop floor can make a comment, okay, this capacitor is causing me a lot of problems, and someone remote or at HQ can look at that comment, make changes, and rest assured that those changes are propagated back to the shop floor, really easily.

Jim Anderton
Tom, there is an increasing trend in all of manufacturing to push some of the design responsibility out to the vendors, so we see a decentralization of design responsibility, and in some ways that makes obvious sense. If you have a semiconductor supplier, for example, they will have expertise in their product. It used to be that we would ask for a spec sheet, we'd get the specifications, and then we would keep the entire design in-house. Occasionally you might telephone an engineer at the vendor who might give some advice. Now it's going a bit beyond that: we're actually asking them to come into the design to a certain extent and say, show us how to use your product or your component here. Do you think that's going to change the way we do it? Is that the kind of thing this software can be used for?

Thomas Yip
Yes, absolutely. The way we do it is that everything is based on teams. You can look at it as a team or as a project. We have folders; in fact, everything is folders and files that you can share with anybody you like. So for example, if you're working on a project, you can say this project is to be shared with a couple of vendors and a couple of customers. You can bring the expertise in, for them to help you, look at your circuit design and vet it, or even suggest components that could be used in it. Traditionally this would be really difficult, because you're worried about security and you don't want your file to go out and be passed around without control.

But with such a system, all you need is a browser and a secure sign-in. And as you know full well, with cloud-based applications there's a huge focus on security. The onus is on the maker of the software to put in all the security layers to ensure that the person who signed in is really who they say they are, and not some third party. So you can rest assured that you're sharing with the right people, people who you really want to have that customized permission.
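Here is a hypothetical sketch of the permission model described above (the role names are illustrative, not Siemens' actual access-control scheme): each collaborator gets a role, and every action is checked against that role before it is allowed.

```python
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"        # e.g. the production shop floor: read-only
    COMMENTER = "commenter"  # e.g. a vendor expert: can annotate, not edit
    EDITOR = "editor"        # the design team: full edit rights

ALLOWED_ACTIONS = {
    Role.VIEWER:    {"view"},
    Role.COMMENTER: {"view", "comment"},
    Role.EDITOR:    {"view", "comment", "edit"},
}

def can(role: Role, action: str) -> bool:
    """Gate every request against the sharer's chosen permission level."""
    return action in ALLOWED_ACTIONS[role]

assert can(Role.COMMENTER, "comment")
assert not can(Role.COMMENTER, "edit")   # vendors can advise without changing the design
```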

Jim Anderton
Tom, I'm glad you brought up the issue of security, because we also live in a world of ITAR restrictions on things which are not just military, but an increasingly broad category of dual-use products. I have met, for example, on the radio frequency side, some RF experts working in, say, millimeter-wave microwaves, who are not permitted to even publish their university thesis because it's been classified. So we live in a world where you need to get your product out there, sell it and collaborate globally, but at the same time you've got to ensure that if a customer or a military or government agency approaches you, you can demonstrate that you have the security in place. Is that demonstrable level of security the kind of thing built into these packages now?

Thomas Yip
Absolutely. Siemens takes all these things extremely seriously. We have many protocols in place, and we go through a lot of vetting, including ISO standards, to ensure all these security layers and all this software are compliant on multiple levels: in terms of data privacy, in terms of security, in terms of sharing. Even internally within Siemens itself, we have processes in place to ensure that data stays in a particular geographical location. So Siemens takes security extremely seriously, export control and so forth.

Jim Anderton
A common concern many SMEs have about advanced software packages is that their engineering resources, which are expert at designing circuits, need to design circuits, not become experts in operating software. There's the difficulty of complex software: it's a useful tool, but it also absorbs time and expertise to learn. How much of a factor is simplicity of use in the implementation of these packages?

Thomas Yip
It is extremely, extremely high. I was in Silicon Valley and I met Barry Katz of IDEO, the firm behind the Apple mouse. The fact of the matter is that right now, with the proliferation of mobile phones and so forth, the time we spend on our screens has increased so much. The way I look at this problem is that if we require the user to spend time going to classes or reading a technical manual to learn how to use the software, then we have failed. I'm very sure that most new applications that go onto your phone or your laptop nowadays do not require that anymore.

Unfortunately, in terms of computer-aided design software, because of legacy problems there's still a bit of a steep curve to get up to par. But for Capital Electra X especially, when we first developed the software, one of the main criteria was that we wanted people to be able to use it without needing to read a book or go through any classes. Hence everything is very intuitive: web standards, drag and drop, that kind of thing. We have made this one of the most important points because we understand, as electrical engineers ourselves, that we really don't have time to learn software. We are needed to design circuits, and we are needed at the production site to commission the machines. Electrical engineers are really busy people, hence the focus on the ease-of-use part of the software.

Jim Anderton
Now, traditionally, many of these packages were originally derived from mechanical engineering software, so I always find it quite ironic that someone who's designing a motherboard for a personal computer might be using software similar to someone who's designing a bridge. As this software specializes and deviates from that, will the things a young engineer learns using basic, general-purpose design software translate to this new realm?

Thomas Yip
Somewhat. What we try to do is ensure a good user experience, and if the user has experience with CAD software, that stands them in good stead. But we look at it this way: we want to focus on areas where we have domain expertise, and Siemens has a ton of domain expertise, especially in electronics and electrical, and of course also in mechanical CAD. Siemens has a complete portfolio of all this software. On our side, we are focused on the electrical part of the solution, so we want to help the engineer do the stuff that is important to them. That is, we want the software to automatically handle a lot of the nitty-gritty details and let them focus on safety and great design.

The design also needs to be cost-effective. So you need to focus on innovative design, cost-effectiveness and safety rather than the nitty-gritty details. As you mentioned earlier, you need to make sure the bill of materials, the part numbers, the components and all these things are correct. The software tries to help you with all of that so you can focus on the design. That's what we are trying to do.

Jim Anderton
Tom, in the last few decades a new generation of designers emerged, what we used to call cowboy designers. They'd use the dead-bug prototyping technique, for example, and they would build things and blow them up, build things and then say, we'll put a trap in here to take out that erroneous signal, we'll clamp the voltage here. They would just throw components at designs, iterate their way forward, and rework the schematic on a continuous basis to get the outcome they wanted.

That is the opposite, of course, of the way traditional, formally trained designers would do it, grounded in mathematics and physics, where they would carefully think their way forward in a block-wise fashion, from the front end of the circuit to the back end, to try to get to the finished schematic in as few steps as possible. Now the current thinking is that if you can get there by iterating multiple times, but do it quickly, it's cheaper and better. Is software like yours the kind of thing that's going to allow that quote-unquote cowboy, that fast-thinking young designer, to basically try things faster?

Thomas Yip
Absolutely. At Siemens, we’re working really hard, especially so in the areas of AI and generative AI. And the fact that the software can allow you to modify all these circuits and all these diagrams relatively quickly. We don’t allow problems because there’s always the possibility of rolling back to your original design if something doesn’t work out. So hence, this sort of software plays a huge role in the user’s ability to be able to shorten their design time, right, to be able to iterate all these things really quickly and get their product to the market. 

Jim Anderton
What about the "what if" scenario? In most engineering meetings I've attended, there are all sorts of speculative questions. What if we reduce the size of that inductor? What if we switch to a switching power supply, what would the consequences be? This sounds like a way you can experiment far more cheaply than by building and blowing things up.

Thomas Yip
Absolutely, absolutely. At Siemens there is a huge collaborative effort with many big players in the industry to build digital twins. You would absolutely want to do these things digitally, without a lot of cost. At this current moment, some of the models in the CAD drawing are actually accurate in terms of physics: they are not just drawings, they conform to physical performance. So you can change something and see almost immediately whether it affects your performance.
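Jim's "what if" question has a simple analog that shows why a physics-accurate model makes experimentation cheap. In this illustrative Python sketch, swapping a capacitor value in a first-order RC low-pass filter shifts the cutoff frequency, and the model reports the consequence instantly:

```python
import math

def rc_cutoff_hz(resistance_ohms: float, capacitance_farads: float) -> float:
    """Cutoff frequency of a first-order RC low-pass filter: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

print(rc_cutoff_hz(10_000, 100e-9))  # 10 kOhm with 100 nF -> about 159 Hz
print(rc_cutoff_hz(10_000, 47e-9))   # swap in 47 nF       -> about 339 Hz
```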

Jim Anderton
It's a fascinating subject; we could talk about this for hours. You mentioned AI, and naturally it has to come up, because it's such a popular subject right now and it's evolving so rapidly. We've talked about the ability to simulate our way to success virtually, using digital twins, rather than having to go through the old, awkward process of prototyping and bench testing. Is AI going to replace the conventional human circuit designer in the future? Can you see a future when software like yours becomes a universal designer, essentially, and a non-expert simply asks for a new circuit and gets one?

Thomas Yip
I think that’s possible, but I think at the current moment, I don’t believe it will replace the human designer. Right. I think the human has a lot of ability to put things together. You can ask the software to generate one part of the circuit for you, absolutely, but in terms of understanding the bigger picture and so forth, sometimes it is extremely difficult to articulate. So we have customers who design small machines, but we also have customers who design an entire factory. So there’s a lot of moving parts everywhere from the front of the conveyor to the machines to the interaction and all these things. I think it is possible that AI will generate some circuits for you, but I think you still need a designer to sort of put them together and ensure they work well together.

Jim Anderton
A brilliant future. Thomas Yip, Software Development Director, Integrated Electrical Systems at Siemens, thanks for joining me on the program.

Thomas Yip
Thank you, Jim. Pleasure.

Jim Anderton
And thank you for watching. See you next time on Designing the Future.

* * *

Learn more about the speed, convenience and efficiency of Capital Electra X, and sign up for a 30-day free trial.

AI-Powered Engineering, Beyond Simulation
AI and HPC allow new, powerful systems that help engineers make sense of data and complexity.


This episode of Designing the Future is brought to you by Altair.

Engineering is applied science, the practical application of fundamental laws in disciplines like physics and chemistry, and is rooted in the mother of all sciences, mathematics. 

Today, engineers think not so much about numbers, but data, and today data encompasses much more in engineering than renderings of parts and assemblies. High-performance computing and artificial intelligence are taking the design process beyond the simple simulation of "what if" scenarios, and are creating new and unique, multidirectional data streams, data which must be analyzed, understood and acted upon to optimize engineering designs with speed and at low cost. The possibilities are limitless, and they extend to every aspect of engineering, in all disciplines.
Joining engineering.com's Jim Anderton to explore this important subject is Fatma Koçer, Vice President for Engineering Data Science with Altair.

Learn more about design generation, design exploration and design optimization, powered by AI.

The transcript below has been edited for clarity:

Jim Anderton
Engineering is applied science: the practical application of fundamental laws in disciplines like physics and chemistry, rooted in the mother of all sciences, mathematics. Today, engineers think not so much about numbers, but data. And today, data encompasses much more in engineering than renderings of parts and assemblies. High-performance computing and artificial intelligence are taking the design process beyond the simple simulation of "what if" scenarios and are creating new and unique, multidirectional data streams.

Data which must be analyzed, understood and acted upon to optimize engineering designs with speed at low cost. The possibilities are limitless, and they extend to every aspect of engineering in all disciplines. Joining me to explore this important subject is Fatma Koçer, Vice President for Engineering Data Science with Altair, where she and her team work on engineering data science strategy development and execution, investigating and applying the latest technologies in the field, providing feedback to Altair software development, and supporting customer projects.

She’s received a B.S. degree in civil engineering from the Middle East Technical University, Ankara, Turkey, and both MSC and Ph.D. degrees from the University of Iowa in Structural Optimization. Dr. Koçer has been recognized in Crain’s Detroit Business 2019. Notable Women in STEM Report. Fatma, welcome to the program.

Fatma Koçer
Thank you so much, Jim.

Jim Anderton
Can we start with a generic concept, to get everyone up to speed on optimization? Optimization used to be a very simple, iterative process when I was doing this: we designed something, built a prototype, broke it, and went back and redesigned it.

And we did that again and again, in an iterative way. What does optimization mean today?

Fatma Koçer
It’s been a while that somebody asked me that question, so thank you for that. Optimization is actually the use of mathematical procedures to automate the objective of finding the design parameter values that meets a set of design requirements while minimizing or maximize using an objective function. I am familiar with the way you describe design optimization, but the goal in all in I should say true design optimization is to automate that process to find the values that minimizes an objective function such as cost, while meeting all the design requirements that the application needs to meet.

Jim Anderton
Now, in many design processes the design matrix grows, almost exponentially, as you work your way through: problems crop up which must be solved, or there's scope creep, where the parameters are expanded by the demands of the customer or other downstream parties. How much of design optimization is based on a careful definition of the requirements at the beginning of the process? How important is that?

Fatma Koçer
It's the most important step. And one of the advantages, I think, that design optimization brings to the table is that the engineer has to think through the entire product lifecycle and formulate those design requirements upfront, so that they don't creep in later in the process, when the design has matured. As a design optimization engineer, for example, I was tasked with analyzing the lifecycle of a product and making sure that all the design requirements, all the operational requirements, were included in the design optimization process so that we didn't run into an issue later on. It's one of the most important steps of the design optimization process: formulating the problem correctly and as a whole.

Jim Anderton
You mentioned the overall lifecycle, and today the overall lifecycle of a product includes what happens to it at the end of its life. We're all very environmentally sensitive now, and I know there's a large German manufacturer of luxury cars that actually takes the vehicles back at the end of their life and disassembles them. This has had interesting engineering implications, like minimizing the number of different grades of plastics in use to simplify the process at the back end.

Now, if that ripples back to the front end of the process, to design, suddenly the engineer doesn't have the luxury of selecting an engineering resin to get the modulus they need. They may be forced to use a commodity resin, because only a handful of resins are available due to the recyclability factor.

Is this the kind of thing that complicates design optimization these days, this end-of-life issue?

Fatma Koçer
It’s one of the issues. And as an alternative, we actually do more of MOCA objective optimization in design optimization, meaning that you look at multiple objectives. In the past, we were more inclined to look at, you know, improving the strength of the design while minimizing the cost all the way. But now, more and more, we’re using multiple objectives in search of the optimal design, including the ones that you mentioned in terms of recycling of the product or the comfort of the, you know, the products, the cost, the manufacturing of the product is also one of the biggest considerations nowadays.

Jim Anderton
Yeah. We have a new world now, where computer-aided engineering and computer-aided design have allowed us to produce highly optimized designs that previously had to be greatly simplified. You know, with a triangulation of forces, a Warren truss, you could always reach into the handbook and find a relatively simple structure that would handle those loads.

Now we have things like new materials with variable moduli, and multiple options that are complicated. Is it still possible for engineers to start the optimization process with a handbook solution? Do they start simple and iterate toward complexity, or do they have to start in the middle someplace?

Fatma Koçer
That’s a good question and I agree which computer aided engineering? We are being able to tackle much complex problems, both in terms of geometry and in terms of physics, just like we used to do single object optimization. Now we’re doing more multivariate optimization. We used to do single physics because that’s all we could handle, both in terms of our solution processes and in terms of our computational resources. But now we’re doing more and more multi physics problem like field structure interactions, solid mechanics and electromagnets. So yes, the problems are getting complex, but we are engineers are our job is to find simple solutions to complex problems.

Jim Anderton
How much of this is cost driven? In the automotive industry that I come from, cost is really a primary consideration, and there are many situations where it would be useful to use an exotic material, for example a high-strength alloy, but it's simply cost-prohibitive to do so. So more weight must be engineered into a structure to make up for the strength that could be had easily with, say, a higher-modulus material or a more sophisticated design.
Is cost still king in most of the engineering disciplines that you study?

Fatma Koçer
Cost is king, although some industries, maybe aerospace, are more risk driven. The task for the engineer is, within the limitations of the cost, to still find acceptable, feasible designs: a design that meets all the requirements. And I think that's where we excel as engineers: within cost limitations, how do you find top-performing designs?

Of course, there are some industries where cost is not as much of a driving factor, I would say, such as electronics and some consumer products. But yes, it's of course an important piece of the puzzle.

Jim Anderton
You mentioned the aerospace industry. The aerospace industry has been revolutionized by things like composite materials, a paradigm shift from metallic structures, where we knew how strong something had to be and we knew what the safety factors were, so we simply engineered to a given safety factor, perhaps tested a representative example as a verification step, and then moved on. With things like composites, we have structures that operate by statistical laws: we bend something X million times and there is a resultant Y probability of failure, so we think differently about cycles to failure, or the cycles that are reasonable within a design life expectancy. This is not something you can test just by making something and breaking it. Where do design optimization and simulation fit into that new world, where things operate with unique materials in unique ways?

Fatma Koçer
Actually, when we talk about new materials, such as composites or additive manufacturing, that's a real example of a field where simulation and data science converge, because for some of those new materials we do not have enough understanding of the physics to use simulation fully, but we don't have enough data either to derive properties from data alone.

But we can use both of them combined, right? Our limited understanding of the physics, combined with the limited amount of data that we have, and still be able to design safe, reliable products. So I think that's a good example of the convergence of simulation and data. For some of these materials, you'd be surprised that we actually have much more data than we think there is.

That's because there are very high-end testing facilities in the national labs around the country that are testing these materials, whether it's composites or additive layered manufacturing, and providing the data to OEMs in civil, aerospace or automotive. And there, we're making data-driven decisions in picking the right design parameters, like design dimensions.

And that’s where data science is complementing the simulation, the computer engineering world, as we know more and more about the properties of these materials, of course we’ll be developing the physics that solves them and we may rely less on data. But this is a good point where more data and simulation helps with data. Of course, we can also understand the uncertainty and the confidence, confidence in the models, and it becomes more important with these new materials.

New physics combinations are in for especially new complex applications

Jim Anderton
So much of engineering is empirical; many of its functions were derived from empirical observation. It sounds like you're talking about a world which operates the other way around, where the mathematics informs the design rather than the experience of the designer informing the mathematics. Is that a shift in the way design engineers should think?

Fatma Koçer
With design optimization, yes, the mathematics drives the design decision-making process, but it's the engineer that formulates the problem, and it's the engineer that's going to interpret the results, making sure that they're meaningful and feasible. So it's a combination of the engineer's expert knowledge and the data science. One of the things, for example, that we're working on in my team, we call it expert emulation, expertAI.

That’s one of the AML solutions we’re integrating into hyper works. And on the objective in expertise, AI is to emulate, to mimic experts decision making process. In automated design optimization, there are a number of design requirements that are not quantitative like cost but good quality to like the look and feel of the product. Whether the the behavior is, you know, favorable or unfavorable.

That means that during the design optimization process we would have to have a human in the loop, which breaks the automation and makes the process much longer. So, using machine learning, in particular classification models, we take the data from the engineer on how they judged previous designs, and we train an ML model that can be used in our optimization products so that the design can be optimized in one iteration.

So again, once the engineer formulates the problem, it's the math that takes over, but really it's the engineer's problem formulation that determines what the outcome will be.
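A hedged sketch of that expert-emulation idea, not Altair's implementation: train a classifier on designs the expert has already judged, then let it screen candidates inside the automated loop instead of pausing for a human review. The data here is synthetic and the pass/fail rule is a stand-in for real expert labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Past designs (4 design parameters each) and the expert's pass/fail judgments.
X_past = rng.uniform(size=(200, 4))
y_past = (X_past[:, 0] + X_past[:, 1] > 0.9)   # stand-in for the expert's labels

expert_model = RandomForestClassifier(n_estimators=100).fit(X_past, y_past)

# Inside an optimization loop, new candidates are screened without a human.
candidates = rng.uniform(size=(5000, 4))
acceptable = candidates[expert_model.predict(candidates)]
print(f"{len(acceptable)} of 5000 candidates pass the emulated expert check")
```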

Jim Anderton
Now, that’s interesting. You mentioned that the of course, the traditional way to iterate to design success was often constrained by cost and by time. I recall many circumstances when perhaps we had four months to to formulate a finished design that was locked in for production and that distilled down to perhaps 18 or 20 testing cycles and whatever the product was.

At the end of that process, that was the design that was locked down for production. And many times you watch things go to production and think, Boy, we could have made that better. We could have pulled more cost out of it, we could have made it stronger, we could have made it more durable. But we ran out of time.

So in some cases, the more experienced engineer, the quote-unquote better engineer in the design office, was the one who could start at the middle, who could optimize the design on paper or as a rendering before the testing cycles began, so that fewer cycles were required to optimize the design. You're talking about a mathematically driven world.

Is that over now? Are we in a world in which, basically, the more experienced engineer has no specific advantage over the mathematically driven, data-driven newbie?

Fatma Koçer
No, actually, it’s almost the opposite. All the expert engineers knowledge of the products is very important in the in engineering in terms of problem formation, what better designs can be find form better problem formulations. So if you’re an expert, an engineer, your problem formulation mimics your design requirements much better than if you were a new engineer. But so this is another point where simulation converges with machine only because we can learn from the historical data the expert engineer has created and trained machine learning models and deploy these models to meet the younger generation of engineers so that they don’t always have to start from the beginning.

It’s almost like they would be inheriting some of the knowledge that the expert engineer has in the form of these machine learning models. One of our other projects of integrating machine learning and AI to our simulation software is this exact notion of using historical simulation data. So being able to recycle the past experiences and expertize that the company has.

And the goal for that is, again, what you mentioned: designing better products faster. If you can train ML models using your historical simulations, then, because ML models run really quickly, you can explore many more designs in a much shorter amount of time. This larger exploration gives you more alternative designs. Then, once you find a few that look promising, you can do the old-school physics-based simulation, which usually requires more computational and time resources, and also do maybe one or two physical tests.

But that reduces the expense of physical tests and also, you know, of computationally intensive physics-based simulations. So this is another example of how Altair is converging data science, machine learning and AI with simulation.
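A minimal sketch of that surrogate workflow, using scikit-learn as a stand-in for whatever modeling stack a team actually has; the simulation data is synthetic and the KPI is a toy function:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Historical simulation results: design parameters -> simulated performance.
X_sim = rng.uniform(size=(300, 3))
y_sim = np.sin(X_sim[:, 0]) + X_sim[:, 1] ** 2   # stand-in for an expensive solver's output

surrogate = GradientBoostingRegressor().fit(X_sim, y_sim)

# The surrogate is cheap, so sweep a huge design space...
X_explore = rng.uniform(size=(100_000, 3))
predicted = surrogate.predict(X_explore)

# ...and send only the most promising candidates (here, assuming a lower KPI is
# better) back to full physics-based simulation and maybe one or two physical tests.
shortlist = X_explore[np.argsort(predicted)[:5]]
print(shortlist)
```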

Jim Anderton
FEA and CFD, for example, were historically constrained to a certain extent by available computing power, and we've all seen circumstances where engineers designed things, perhaps not optimally, but designed things that could be calculated. Now we're looking at a world of high-performance computing; I mean, we're talking about exascale computing, petaflops, incredible speeds. Have those shackles been removed? Are we at a point now where you can design the complexity and go ahead and use your physics?

Fatma Koçer
I mean, to say it’s much better than when I started in the industry. We were at a point that if you run five simulations, the six simulation would be questioned because we were you could be using the resources. Right. But it’s not anymore like that. I mean, people submit hundreds of simulations and be able to get the results within a really, you know, within acceptable time ranges like a day or maybe even a week.

So HPC is a very essential part of this process. Many of the barriers of the past have been removed, both in terms of the ability to access these resources, especially with cloud services, where the resources become expandable. For example, in our offering, Altair One, both in terms of data storage and in terms of job submission and resources like CPUs and GPUs, it's all expandable, right? It's pay-per-usage. The more you need, the more you have access to. You're not limited to fixed boundaries as you were before.

So yes, people are simulating much more; these solutions are much more accessible, much more affordable. And of course, another aspect is that with all the data you're generating and then using for machine learning models, simulation data management is also becoming even more important now than before, because we're going back and tapping the data that we generated a year or two ago; that's what we can use to train ML models and deploy them for quick design exploration, quick design decision-making.

Jim Anderton
And the experience and knowledge that we're talking about, that very valuable experience, used to be contained in the cranium of a good engineer. Then it expanded, perhaps, to the cumulative efforts of a team of engineers in a design office. Now we have a world which is cloud connected, and we're talking about high-performance computing, which naturally suggests cloud connectivity.

What is the difference now, in a world in which engineers can be connected instantly and in real time to other groups or other machines or other technologies around the world? Do you think this is going to change the way that we simulate to success?

Fatma Koçer
I think so, because cloud means collaboration, more collaboration. Cloud means easier access to the information, to the data, to the physics-based simulations. Cloud means expandability in terms of your usage. And to me, of the three, collaboration is the most important aspect, because then we can merge the expertise that's in different parts of the world, in different parts of the teams, and make design decisions considering the product as a whole.

So we’re not limited to our silos anymore. We’re not just working, for example, to meet the New Age objectives. We are cooperative with the team that’s working on disability or the team that’s working on fuel efficiency and doing a true multi objective design optimization.

Jim Anderton
It’s it’s interesting, we there was a time when some production technologies were invented and the existence of new things like C and C machine tools for example, then gave engineers ideas for designing products that could be made with this new technology. So does this have the way around? Sometimes it’s the design requires a new technology to be developed simply to make it an exemplar.

So, for example, sophisticated forms of welding, or adhesive bonding. And then we have something that is not new, but we think of it as new: additive manufacturing. We now have the ability to make shapes that are not constrained by the traditional triangle of forces. Engineers are trained to think in rectilinear terms, in Cartesian coordinates, where organic shapes are nothing more than line segments and planes in a mesh, shrunk down to an infinitesimal level. Are technologies like additive manufacturing going to change the way we design and the way we simulate to success, given that we can make any shape anywhere?

Fatma Koçer
Yeah, it actually is changing. And talking about engineers liking rectangular things: before topology optimization, before topography optimization, all the stiffening members that you saw in things like brackets, or the ribs that you see in stamped parts, were these perpendicular members, nicely spaced, equally spaced, I should say. But then topology optimization and topography optimization came along.

And now we're getting used to these organic structures that make better use of the material, better use of the design space. Just like that, with additive manufacturing we are now seeing, for example, these lattice structures, structures with a repetitive pattern, that couldn't be manufactured in any other way.

Yeah, it looks unique, it looks different, but it's also expanding our design space, which means that we are able to design even lighter, even better-performing structures.

Jim Anderton
Recursion is something you see frequently in nature: patterns that repeat from the macro scale down to the micro scale, over and over again. Is that something that lends itself to this mathematical, physics-based approach to designing lightweight structures? You're talking about lightening holes within lightening holes within lightening holes.

Fatma Koçer
That’s a good question. The pattern repetition is, you know, sometimes because these technologies are new and there are some limitations that they convert so that we would have to repeat the same pattern. But just the fact that these patterns are very light. You know, we’re literally cutting holes in very much smaller scale than we did before.

It allows us to achieve lighter designs. And we're seeing, for example, that with machine learning combined with this pattern creation, we don't have to be limited to a repetitive pattern. We can change the pattern and remove even more weight, create even more lightweight structures, while still meeting all the design requirements.

Jim Anderton
There’s so much to talk about. We could go on we could do this for hours, I’m sure. But, you know, the clock must intrude. A final question for you. And this this is sort of it’s near and dear to my heart, to many engineers. It’s in the design process. It’s because of the limitations traditionally and the number of times you can iterate toward success, you have to start with a good design and then use you simulate or test to success to a very good design or a great one if you’re lucky at the ability to simulate quickly and in, in vast number of runs over and over, thousands, millions, perhaps at this point, does this mean now that it’s possible to try things that you would not dare try from a design standpoint before? Can you throw something against the wall and see if it sticks? Can you try something crazy and just press the button there and stand back and see what miracle evolves?

Fatma Koçer
That’s exactly where the convergence of simulation and machine learning is for us. Again, we can use historical simulation data to train models, and these animal models gives you the performance of a new design instantaneously, right? So that means we can try many more designs than before we can go crazy. Of course, we would have some indication of whether that that that prediction is reliable or not.

But as you have more data, you'll be training with more variety, and your ML model will be able to have confidence even in, you know, crazier designs. So this is the power of simulation and machine learning: the models can be trained much more quickly. One of the limiting factors of relying entirely on physics-based simulations was the amount of design exploration, but with this convergence, that barrier is also being lifted.
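One simple way to get the reliability indication she mentions, sketched here with an ensemble and synthetic data (illustrative only): the spread among an ensemble's individual predictions tends to grow for designs far from the training data, flagging them for a real physics-based check.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Train on "sane" designs only: all parameters in [0, 1].
X_train = rng.uniform(0.0, 1.0, size=(300, 3))
y_train = X_train.sum(axis=1)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

x_sane = np.array([[0.5, 0.5, 0.5]])
x_crazy = np.array([[2.5, 2.5, 2.5]])  # far outside anything the model has seen
for x in (x_sane, x_crazy):
    per_tree = np.array([tree.predict(x)[0] for tree in forest.estimators_])
    # A wide spread across trees is a cheap "low confidence" flag:
    print(f"prediction {per_tree.mean():.2f} +/- {per_tree.std():.2f}")
```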

Jim Anderton
Fatma Koçer of Altair, thanks for joining me on the show.

Fatma Koçer
Thank you so much. Thanks for having me.

Jim Anderton
And thank you for watching. See you next time on Designing the Future.

Design Speed with Control: The Executable Digital Twin
The executable digital twin leverages advanced tools like simulation for faster, better results.


This episode of Designing the Future is brought to you by Siemens Digital Industries Software.

In engineering, moving a concept from idea to eventual hardware has always been a challenge. And when those engineering projects are large and complex, the need to establish a single source of truth, to keep multiple engineers and development teams working together, is essential. Configuration control of conventional CAD processes helps, but today's computer-aided engineering includes development tools that go far beyond rendering, such as computational fluid dynamics and simulation. The ability to iterate quickly and virtually puts a premium on project management.

The solution is the digital twin, which promises to maintain order within a rapidly accelerating design iteration process. And the executable digital twin is the key to real-world, cost-effective application of the digital twin concept.

Joining engineering.com's Jim Anderton to describe the executable digital twin are three experts from Siemens Digital Industries Software: Ian McGann, Director of Innovation for Smart Technologies; Dr. Leoluca Scurria, Product Manager, Executable Digital Twin; and Dr. Durrell Rittenberg, Director, Simcenter Experience Product Management.

Learn more about the executable digital twin (or xDT), the next evolutionary phase of the digital twin.

The transcript below has been edited for clarity:

Jim Anderton: In engineering, moving a concept from idea to eventual hardware has always been a challenge. And when those engineering projects are large and complex, the need to establish a single source of truth to keep multiple engineers and development teams working together is essential. Configuration control of conventional CAD-based processes helps. But today's computer-aided engineering includes development tools that go far beyond rendering, such as computational fluid dynamics and simulation. Now, the ability to iterate quickly and virtually puts a premium on project management.

The solution is the digital twin, which promises to maintain order within a rapidly accelerating design iteration process. And the executable digital twin is the key to the real-world use of the digital twin concept. Joining me to describe the executable digital twin are three experts from Siemens Digital Industries Software: Ian McGann, Director of Innovation for Smart Technologies; Dr. Leoluca Scurria, Product Manager for Executable Digital Twin; and Dr. Durrell Rittenberg, Director, Simcenter Experience Product Management.

Jim Anderton: Can we kick off with a sort of level set and establish something basic? What is a digital twin?

Ian McGann: Well, for us it’s a virtual representation of a physical asset.

Jim Anderton: It's a very simple way of describing it, but isn't everything a digital twin in a sense? The concept or idea an engineer has for a design is, in a sense, a digital twin, and a rendering is, in a way, a digitized twin. But we're talking about something that's a little bit different from just a CAD file, aren't we?

Ian McGann: We are, yes. As you mentioned, documentation can be a form of a digital twin as well. What we're talking about is when you can capture the dynamics, the movements, the forces, the stresses. That's, for us in engineering, let's say a more functional digital twin, capturing all of that complexity of the structure. So we would say we have a complex digital twin, or a comprehensive digital twin: we're capturing the fluid-structure interactions, the mechanical, the electrical, everything in that digital twin model that represents, and is linked and connected to, the physical asset.

Jim Anderton: Leoluca, we're talking about the executable digital twin. What is executable? What makes a digital twin executable?

Leoluca Scurria: Yeah, so this links to the definition of the digital twin, because usually these digital twins are used for specific purposes. You have multiple representations of different, let's say, physics and behaviors of the asset, and the way we make it executable, basically, is that we package it, and we package it in a way that it can also be used for other purposes. And this is key, because our customers put a lot of effort and investment into the creation of digital twins, and our goal is to enable them to leverage these descriptions, these behavioral representations, outside of the usual, let's say, environment. So you can imagine a digital twin that gets created during the design process can then be reused on the machine to optimize the control or the maintenance, and basically extend the value creation over the entire product lifecycle.
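To make "packaging" concrete: in practice such models are often exported behind a standard co-simulation interface (the FMI standard's FMUs are one example) so the same twin can run in a design tool, on a test rig, or on a controller. The toy Python class below only mimics that idea; it is not Siemens' xDT format.

```python
class ExecutableTwin:
    """A self-contained behavioral model: state + parameters + a step() interface.

    Here the behavior is a first-order thermal model,
        dT/dt = (P_in - (T - T_ambient) / R) / C,
    integrated with explicit Euler."""

    def __init__(self, thermal_resistance=2.0, thermal_capacitance=50.0, temp0=20.0):
        self.R = thermal_resistance
        self.C = thermal_capacitance
        self.temp = temp0

    def step(self, power_in: float, ambient: float, dt: float) -> float:
        self.temp += dt * (power_in - (self.temp - ambient) / self.R) / self.C
        return self.temp

# The same object could be stepped by a simulation scheduler during design,
# or inside a controller's scan cycle on the machine itself:
twin = ExecutableTwin()
for _ in range(600):  # 60 s of simulated time
    temperature = twin.step(power_in=15.0, ambient=20.0, dt=0.1)
print(round(temperature, 1))  # heading toward the 20 + 15*2 = 50 degree steady state
```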

Jim Anderton: A rendering is a digitized version of what used to be an ink-on-paper hard asset, something that was a physical touchpoint you could hold in your hand and refer to. If there was a question, it would be, let's check the print. Now it's, let's check the rendering, let's check the CAD file. Does the digital twin have that same relationship to a physical asset? Is this something where, on the shop floor, if there's a question, a manufacturing engineer talking to a design engineer would say, let's check the digital twin?

Leoluca Scurria: Well, that's one possibility, and that links to the one source of truth that we always talk about. What you have with a rendering is really an image, a visual representation. We instead look at representing the physics of the system. So when we go into how this asset is actually meant to work, then we can look at the digital twin and say, hey, this is how we see the physics working and interacting with the external world.

Ian McGann: As Leoluca says, it's brilliant in that sense. And in the same reference you made to the CAD file: you'd look at the CAD file, but that's always a point in time. Once the machine is made, things change, and the CAD file still represents the perfect scenario, the way we wanted to build it, but it doesn't represent that moment in time. And it's the same if you just looked at a physics model: you'd see the way we intended it to work.

But what we've done with the digital twins is we've gone a step further: we've made a connection between the physical asset, the machine, and that physics-based model, or complex model, let's say, and the model updates to match the physical in real time. So if you're on the shop floor and you say, okay, let's take a look at the model, you're going to see exactly the state of the machine, the physics behind the machine, at that point in time. But the cool thing is you can go back in time as well. You can say, well, what was it last week? Did it change?
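The "model updates to match the physical" idea is essentially state estimation. A minimal sketch, assuming a fixed-gain observer for readability (a Kalman filter would compute the gain from the noise statistics); all values are illustrative:

```python
def predict(state: float) -> float:
    """Model prediction step (toy first-order dynamics: slow drift upward)."""
    return 0.99 * state + 1.0

def correct(predicted: float, measured: float, gain: float = 0.3) -> float:
    """Nudge the model's prediction toward what the sensors actually saw."""
    return predicted + gain * (measured - predicted)

state = 100.0  # the twin's estimate of, say, a bearing temperature
for measurement in [101.2, 102.8, 103.1]:  # streaming sensor samples from the asset
    state = correct(predict(state), measurement)
    print(round(state, 2))  # the twin tracks the physical machine, sample by sample
```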

Durrell Rittenberg: Well, one of the things also is to think about the types of challenges that an executable digital twin can really address. And Ian, one of my favorite examples is, we've done a fair amount of work in the automotive industry, but in other industries as well. We work with one of the larger (actually, we work with all of them, but this is one of the larger) German auto manufacturers. And they came to us with one of the challenges they have, which is: hey, you're doing such a great job with simulation that you've actually reduced the need for physical prototypes, except we still need physical prototypes. We need to be able to test them. We need to be able to validate them. What can you do within Siemens and your digital strategies that might help us improve what we get from a physical model? So we started by going back to the engineering groups and saying, okay, what kind of digital twins do we have that we might be able to leverage?

We actually chose vehicle dynamics; the specific application isn’t important. What is important is that, working with that team, we were able to show them how a digital twin could actually help them on the physical side. They went from setup times in the three- to four-day timeframe down to about four hours, which was fantastic. But more importantly, within those four hours, once they got it out on the track, they were getting 10 times more information because they were using the executable digital twin as part of the process.

As a result, they didn’t have to go through five days of testing; they could get it done in one day. Which means the amount of calendar time, wall-clock time, they can use this physical prototype for other types of analysis went way up. So it’s that kind of thing: extending the engineering that’s typically done in design into the physical prototype. And ultimately those executable digital twins can be deployed either in AV testing mode, so they can run inside a model used for AV testing, or they can actually be deployed on the vehicle itself, which means your car has an executable digital twin running in the background that helps the car improve tuning of the suspension system, the braking system, the safety systems. All of that is now leveraged based on the work that was done by the engineering teams back in the design process.

And it’s that kind of connectivity that the executable digital twin is bringing to the modern engineering context. This connectivity, this representation, is really changing how people think about what they do on the engineering side and how that engineering work can be transformed and used in different contexts. And I think that helps frame exactly the types of problems we’re trying to get after. Yes, the technology is based on lots of different types of engineering and model reduction, and these processes run in real time, but it’s important to take a step back and think about how this is helping modern engineering companies around the world tackle difficult problems and do things in a very different way. So it’s really quite exciting.

Jim Anderton: Durrell, you brought up a couple of things worth unpacking that feel like a paradigm shift. Traditionally, the product development process from an engineering perspective (it doesn’t have to be automotive, it could be almost anywhere) is design, iterate, test, break, redesign, iterate. My old engineering boss used to say, there is no design, there is only redesign. And so the process is about how fast we can iterate our way to an acceptable solution, and then we go.

And we iterate our way, historically in many cases, by building prototypes, real-world testing them and then breaking them. Now we’re looking at a world in which not only can we virtually test things and break them, but Durrell is talking about a world in which we can deploy product into the field, get real-time feedback about the performance of that product, and cycle that back into the redesign process. So are we blurring the lines between where design stops and production begins? Are we going to see a future where redesign is constant, for everything from the shoes on our feet to the cars we drive?

Ian McGann: So that’s a good point. We have to combine this with the shift-left strategy that we see in our customers: fewer prototypes. We need to achieve the same output in a shorter time while reducing prototype cost. And we have two ways of doing it. One way is to accelerate the design process by combining the digital twin with physical testing to get more information during the design phase. But we also need to connect the products that are out there in the field, creating very useful and meaningful data, back to the design phases. And this creates, let’s say, an ecosystem, a digital thread throughout the entire life cycle that allows quicker innovation, quicker improvement of the products themselves. So this is the ultimate goal: to really create an ecosystem where the information can flow across the enterprise and the product lifecycle.

One of the things I love about our customers is they always push us. They challenge us a little bit, and the latest challenge is: yes, I want to know what’s going on in the vehicle at this point in time, but the example they have is battery degradation. They want to be able to take the battery that’s in the vehicle and resell it at the end of its life, but obviously they don’t want the battery to be damaged when it gets resold. So there’s an optimal point where you say, “Okay, now the battery is at the perfect point to resell. Let’s take it out of the vehicle, put another one in, and give that battery a second life.” But for the analysis and the work that you have to do, you don’t want to put sensors all over the battery.

So we use digital twins to give us virtual sensors to do battery degradation analysis in real time, and then report back to either the tier one, if that’s the owner of the battery (say you’re leasing the vehicle), or the OEM themselves, or even the owner of the vehicle. So he could get that information and say, “Ah, you know what? Time to change my battery. I’m going to install it in my house as a backup generator and go get a new one.” It’s that type of thinking that I love. That’s what our customers are bringing to it that we didn’t have before.
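To make the virtual sensor idea concrete, here is a minimal Python sketch of one way such an estimate can work. The coulomb-counting model, signal names and numbers are illustrative assumptions, not Siemens’ implementation: capacity fade is inferred from the current and state-of-charge signals the vehicle already measures, with no extra hardware on the battery.

```python
import numpy as np

def estimate_capacity(current_a, dt_s, soc_start, soc_end, rated_ah):
    """Virtual sensor: infer usable capacity from one charge segment.

    Coulomb counting over a segment with a known start/end state of
    charge (SoC, already estimated by the BMS) gives the amp-hours
    that produced that SoC swing, and hence the implied capacity.
    """
    charge_ah = np.sum(current_a) * dt_s / 3600.0    # integrate current
    soc_swing = soc_end - soc_start                  # fraction of full charge
    capacity_ah = charge_ah / soc_swing              # implied pack capacity
    return capacity_ah, capacity_ah / rated_ah       # capacity, state of health

# Illustrative charging segment sampled at 1 Hz: roughly 20 A for 30 minutes
rng = np.random.default_rng(0)
current = 20.0 + rng.normal(0.0, 0.2, size=1800)
cap, soh = estimate_capacity(current, dt_s=1.0,
                             soc_start=0.40, soc_end=0.52, rated_ah=90.0)
print(f"estimated capacity {cap:.1f} Ah, state of health {soh:.1%}")
```

Tracking that state-of-health figure over many charge cycles is what lets an owner, OEM or tier one judge when the pack has reached its resale point.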

Jim Anderton: Durrell, that’s an interesting point. We live in an IoT age where we’re talking about a future in which we have sensors embedded in everything, many sensors embedded in a product. The feedback I hear from individuals working in this sector is a worry that we’re going to overwhelm engineers with data, and that processing that data is going to be a limiting factor. The simulation community turns around and says, “We’re not going to need 10,000 data points for a tennis shoe. We’re going to simulate the product up front, and we’re going to optimize it to the point where we can do three sensors and get the actionable information we need.” Is simulation going to wrestle that data overload problem to the ground, do you think?

Durrell Rittenberg: I think there are two pieces to that. One is, obviously, simulations are getting faster and more accurate, and you can get more information in that design phase, which can certainly increase confidence. But there’s another piece, which is that there are strategies with machine learning and AI that we’re using to help take that information and actually provide insight, not just data. The problem we have right now is that it’s easy to create so much data that you really find yourself drowning in it. My favorite example is Delta Airlines. The CEO spoke at an engineering conference I attended, and he said that for every flight they’re bringing back about, I think it was, four terabytes of test data for every engine and every system in the aircraft.

And he said, “We have all this information,” and someone asked, “Well, that’s great. What do you do with it?” He said, “Well, we send it back to the engine manufacturers, so they know we have this data.” But ultimately what they want to get to is how you get information back to the people who have to maintain that aircraft: you know what, engine number two is probably getting close to the end of its on-wing life and needs to be refreshed, and how do you predict that? That’s where the executable digital twin, which can be powered by IoT, by machine learning algorithms and other strategies, can help predict when you might need to do that. So that idea of predictive maintenance is where the goal of a lot of the work being done in engineering today really sits, because there’s such a business reason to do it.

It also comes back to that performance optimization. Ian brought up a great point, which is: how do you make those batteries last longer? Well, you do that by understanding how they’re performing. And you do that by understanding the information that’s coming out of them, using an encapsulated, executable digital twin strategy within that battery pack. It’s giving you more information, more insight about how the battery’s performing than you would get otherwise. And it’s that insight that allows you to say, you know what? It is time to slap that on the wall and make it a Tesla battery or whatever, the one you put in your garage that can power the house.

So that’s the kind of thing we need to be really thinking about. It’s a shift in the way we think about how information can be used, and the kinds of things these executable digital twins can provide in terms of information and insight into a complex engineering problem. It takes it out of the engineering domain and brings it back to the guy who’s got his new Lucid, or whatever the most recent EV is, and it’s telling him, “Hey, look, the way you drive is impacting the battery. You might want to consider changing it,” or maybe it makes a shift inside the actual drive mechanism to save battery life. Anyway, these are the kinds of things we need to think about. It’s a lot more than just the technology. It’s what you get out of it.

Ian McGann: My little addition: the machine learning aspect of this, I think, is quite interesting. We have one customer who is using digital twins combined with physical assets. So they have a physical asset and a physics-based digital twin combined with it, and what they want to do is generate massive, massive amounts of data for the purpose of machine learning. The problem is they don’t have a history. So in order to get that data, they’re using digital twin models with defects programmed into them, which allows them to annotate or label the information far more accurately. So when they’re creating their machine learning algorithms, they now have labeled information, thousands and thousands of data points that they wouldn’t have gotten until the systems were deployed. That’s the other use case: the digital twins are complementing the machine learning and actually generating more data for you. But it’s smart data, insightful data.
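As a toy illustration of that workflow (the vibration model and fault frequencies below are invented stand-ins, not the customer’s actual twin), a simulation with a defect “programmed in” can emit training data that arrives pre-labeled:

```python
import numpy as np

def twin_vibration(defect, n=2048, fs=2000.0, rng=None):
    """Crude stand-in for a physics-based twin of a rotating machine.

    The defect flag injects periodic impulses (think bearing fault)
    on top of normal shaft vibration, so every generated signal
    comes with a ground-truth label for free.
    """
    rng = rng or np.random.default_rng()
    t = np.arange(n) / fs
    signal = np.sin(2 * np.pi * 30.0 * t) + 0.1 * rng.standard_normal(n)
    if defect:
        period = int(fs / 90.0)                      # fault impact spacing
        hits = signal[::period].size
        signal[::period] += rng.uniform(1.0, 2.0, size=hits)
    return signal

# A labeled dataset that no field history could have provided yet
rng = np.random.default_rng(42)
X = np.stack([twin_vibration(i % 2 == 1, rng=rng) for i in range(1000)])
y = np.array([i % 2 for i in range(1000)])           # 0 = healthy, 1 = defect
print(X.shape, y.mean())                             # (1000, 2048), balanced
```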

Jim Anderton: Product design has always been constrained by manufacturability. You can design anything, but can you make it? We’ve got some technologies, like additive manufacturing, which have removed those constraints to a great extent. If you can imagine a shape and test that shape virtually, you can make it now. Will the executable digital twin work with technologies like additive or process automation, do you think, to alter the way engineers go about the design process? Are they freer now with this technology?

Leoluca Scurria: So when we look at innovative production systems like additive manufacturing, the knowledge that customers have of the production processes is limited. And often when we look at manufacturing, most of the decision making is based on prior knowledge about the production process. With the executable digital twin, basically, we can maximize and give smart data to our customers based on a few prototypes of the innovative production system. And that allows them to come up with an effective production process much quicker.

We actually had a talk with an aircraft manufacturer that was trying to optimize a manufacturing application for composites. They were saying, okay, we have these new production processes that we want to speed up, but we don’t know the right starting point for optimizing our parameters. So that’s where we started engaging and talking about how we can really reduce that lead time from initial prototypes to, let’s say, actionable production processes, through virtual sensors, performance optimization and performance prediction. And you can only do that if you combine the little information you have from initial prototypes with a digital twin, to maximize not only the information but really the insights about your production process. That’s where the real value of the executable digital twin is: it’s really to transform the data, or big data, that you have into insights and actionable information.

Jim Anderton: Durrell, we see in the business software world the movement towards software as a service, rather than selling a CD-ROM with a package on it and then mailing updates. An individual in the electric motor industry I was in conversation with recently mentioned that they felt the future for that ubiquitous product, which is used in engineering and manufacturing everywhere, was perhaps to move away from selling an electric motor to a customer and actually sell power by the hour, a model used by the jet engine industry before. Because this ability to feed real-time information back means the electric motor manufacturer could schedule preventive maintenance, or even swap the motor out, without the customer even being aware of the performance of that motor.

So, in a perfect world, imagine that you’re leasing the motor, and the motor manufacturer sends a technician who services it or even replaces it, perhaps without the customer even knowing that individual is coming. So it’s sort of worry-free, trouble-free. If you extrapolate that world, you’re talking about a potential future in which everything from the clothes on our back to the shoes on our feet to the robots that build our products no longer exists as an owned asset on the factory floor. Does the executable digital twin play into that future?

Durrell Rittenberg: It absolutely does. And actually, I think the example you just gave with the electric motor works just as well with a gas turbine or a wind turbine. Think about it this way: as an organization, we basically get paid for how much power we generate with our wind turbines. And we can start to figure out ways, through the executable digital twin, to improve outcomes by looking at not just one wind turbine, but maybe all the wind turbines we have in a wind farm. Within those wind turbines, we have executable digital twins running within the motor that give us information about the mechanics of the motor and whether it needs to be serviced. We can also look at wind performance. We can start to bring together all this information and use it as a decision-making platform to optimize how much power we’re generating, as sketched below. That’s going to help someone who’s in the business of selling that power maximize their return.
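A minimal sketch of that kind of fleet-level decision, under heavily simplified assumptions (the health scores, forecast and single service window are all invented): pick the weakest turbine from the twin-reported health values and schedule its downtime for the lowest-wind day, so the farm gives up the least energy.

```python
import numpy as np

# Each turbine's twin reports a health score; service the weakest
# turbine in the lowest-wind window so the farm forgoes the least
# energy. All numbers are invented for illustration.
rng = np.random.default_rng(3)
health = {"T1": 0.92, "T2": 0.61, "T3": 0.85}    # twin-estimated health
wind_forecast = rng.uniform(4.0, 14.0, size=7)   # mean wind m/s, next 7 days

turbine = min(health, key=health.get)            # weakest turbine first
day = int(np.argmin(wind_forecast))              # cheapest day to stop it
print(f"service {turbine} on day {day} "
      f"(forecast {wind_forecast[day]:.1f} m/s)")
```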

It also has an environmental impact that we don’t often think about. You can improve not only the efficiency, which has a direct impact on how green a particular type of energy might be; you can also start to think about how an executable digital twin strategy can be part of a sustainability effort within an organization. There’s another example, slightly tangential to wind power, which has to do with food manufacturing. We don’t think about food manufacturing the same way we think about building a jet aircraft, but food manufacturers generate a tremendous amount of product, and one of the manufacturers we’re working with right now makes cheese puffs. We all like cheese puffs. They’re delicious. But you’ve probably been in a situation where you took a cheese puff out, bit down on it, and it was super hard. You almost broke your teeth, and you’re wondering what the heck is going on. Well, it turns out that food science, the process by which they make those cheese puffs, is heavy engineering.

So if you can come up with an executable digital twin that can provide insight to the manufacturer (hey, my extrusion mixture is off just a little bit, and that’s producing product that’s just not sellable), that executable digital twin is going to save them a tremendous amount of energy, because they can actually optimize their output. It creates a better product for everyone, and it does it in a more sustainable way in terms of how much power the plant uses. So when we think about these digital twins, we really need to think about executable digital twins and the overall digital thread, from requirements through design, through the actual manufacturing, and ultimately into the bag of cheese puffs. There are digital twins across that entire workflow, and the more we can leverage that, the better the outcomes in any one piece of it.

But this is the kind of thing we should be thinking about as an industry, because we need to figure out how engineering is going to help us address some of the major challenges we have as a population. This is not just about making novel devices. This is about really looking at how we can impact the world in a positive way.

Jim Anderton: Ian, Durrell has just touched on sustainability, and there’s no way to have any talk about engineering in this day and age without talking about sustainability. How many times have I talked to manufacturers, engineering firms, who say, “Great, I’d love to reduce my carbon footprint, but I make valves. I’m not in business to reduce my carbon footprint.” Can you talk about how the executable digital twin can help them square that circle, which is to address sustainability issues without losing focus on the core business?

Ian McGann: Look at where the valves, pumps, et cetera, will be used in the end. We have one example: the customer is a water reservoir operator, and the network covers a country. So this is quite a setup: 16,000 pumping stations, an extremely complex system. And the problem is that you’re pumping water from one location to another, and by the time you get the real data back, it’s too late. You’ve already pumped too much, or you’ve already distributed too much water. The information doesn’t come in from the real sensors in time.

So we create a digital twin of the entire setup, all the pumps, all the stations, everything, and we then start optimizing it. Information leads to insight, leads to optimization. And I think that’s the point we’re getting to: if your valves, your pumps, have these digital twins connected to them already, you’re selling that information as well as the value of the pump.

And that’s connecting to a bigger system. And that bigger system is basically conserving energy by saying, “Well, you don’t need to pump so much, because we’ve predicted that it will be full. So stop now, before the real data gets there.” So it’s like an extremely advanced model predictive controller, but the model is of the entire system, at levels of complexity that you normally wouldn’t see in a standard MPC.

So yeah, we have examples of customers, and they don’t ask us for a digital twin. That’s the nice thing: they don’t say, “Oh, we need digital twin technology.” No. They come in and say, “We want to be more energy efficient. Can you help us? We have this target. How can we get there?” And in pretty much all of those cases, the combination of the executable digital twin and the physical sensors that we have, and can distribute throughout the systems, is what’s allowing them to get there.
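For a sense of what that “extremely advanced model predictive controller” boils down to at toy scale, here is a sketch with one reservoir instead of 16,000 stations (the rates, costs and level band are invented assumptions): simulate every pump on/off schedule over a short horizon against predicted demand, and keep the cheapest plan that holds the level in band, stopping the pump before the real sensor data would say the tank is full.

```python
import itertools
import numpy as np

HORIZON = 6                  # hours of look-ahead
PUMP_RATE = 120.0            # m^3/h added while the pump runs (assumed)
ENERGY_COST = 1.0            # arbitrary cost per pumped hour
TARGET, BAND = 500.0, 80.0   # desired level and allowed deviation (m^3)

def best_plan(level, demand_forecast):
    """Brute-force MPC step: simulate every on/off schedule over the
    horizon against the predicted demand, discard any plan that lets
    the level leave the band, and keep the cheapest survivor."""
    best, best_cost = None, float("inf")
    for plan in itertools.product((0, 1), repeat=HORIZON):
        lvl, cost, feasible = level, 0.0, True
        for on, demand in zip(plan, demand_forecast):
            lvl += on * PUMP_RATE - demand           # predicted level
            cost += on * ENERGY_COST + 0.01 * abs(lvl - TARGET)
            if abs(lvl - TARGET) > BAND:
                feasible = False
                break
        if feasible and cost < best_cost:
            best, best_cost = plan, cost
    return best

demand = np.array([60.0, 80.0, 100.0, 90.0, 70.0, 50.0])  # m^3/h forecast
plan = best_plan(480.0, demand)
print(plan)   # apply only the first step, then re-plan with fresh data
```

A real deployment would replace the brute-force search with a proper optimizer and the one-line reservoir model with the executable digital twin of the whole network, but the re-plan-every-step loop is the same idea.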

Leoluca Scurria: And another extremely important point is that, as Ian was mentioning, we have customers whose systems have been in place for 30 years. You don’t want to go there and say, “Look, I’ve got an amazing solution for you, but you have to go back to your system and put, I don’t know, 1,000 sensors on it so we can optimize it.” With executable digital twins, we can be what is called brownfield compatible. We can transform the information you already have from your system into meaningful insights.

And that’s what drives, in the end, the business value of it, because we can really reuse what is already in place. And then you can imagine there are also companies that use it to improve their business models. They can say, “Okay, are you a power user? Then you need this type of capability on your machine, this level of performance.” And as it is software based, they can easily switch it on and off depending on the customer’s needs, to optimize the usage and, in the end, their business model.

Jim Anderton: As a wrap-up question, can I just go around the horn quickly and ask about the executable digital twin as a starting place? How can an engineering organization that uses conventional software-driven design processes make that transition? What is the first step they need to take? Durrell.

Durrell Rittenberg: It’s a great question. Of course, it’s going to depend largely on the type of engineering they’re doing, but it goes back to what I talked about in the automotive sense. It’s really starting to look at where the challenges are that we want to address, and at the kind of information we have to start with. And that’s really where we would come in and work with an organization: to understand, okay, what is your ultimate outcome? What are you looking to do? Let’s take a look at the assets you have, in the context of the water system for example, and use that as a starting point, really trying to connect the dots.

Siemens, as an organization, as you know, is somewhat broad in terms of the kinds of things we do. We have motion control, we’ve got the factory automation bit, we’ve got the software, the part we fit into. But all of those pieces are part of this broader ecosystem of digitalization that organizations are going through. And it’s really taking a look at what the outcomes are, and starting from the engineering information that’s available.

Because what we find is that oftentimes there’s enough information within an existing design process to get started, as long as we know where we’re going. That’s the key thing. And we can help with that as well.

Jim Anderton: Leoluca, your recommendation, how do they start?

Leoluca Scurria: Yes, indeed. I want to link back a bit to what Durrell was saying, because what I always say when I talk to customers is: “Okay, we have this amazing technology that really solves business problems. We are the experts in digital twins and executable digital twins, but you, the customer, are the expert in your application, and only you know your problems.” So we always start by having a conversation, and understanding how we can find a compelling problem that we can solve with the executable digital twin.

And from there we can move in different directions, depending on the company’s stage of digitalization. You have some enterprises that already have broad adoption of digital twins; in that case it’s more about shifting the paradigm to also use them in service and for maintenance. And we have companies that, let’s say, use digital twins less; in that case we can go on a journey together, starting from digital twin creation and then leveraging that digital twin outside of its, let’s say, more conventional usage.

Jim Anderton: Ian, what’s the first step?

Ian McGann: Well, the first step is creating your digital twin. Right? That’s the first step. If you go back five years, it was an extremely difficult process, and we worked really hard to make that one button click and it’s done: an executable digital twin that you can now deploy and manage across your machines. And I think that’s the point. People might have looked at digital twin technology three or four years ago and said, “Ah, it’s too difficult. It’s not working,” or, “We’re not getting the connections between the physical assets and the virtual like we want.” I’d say, take a look at it again. What you’ll find is that the creation process, the deployment process and the management process that are in place now are a lot, lot easier than they were five years ago.

Jim Anderton: Durrell Rittenberg, Leoluca Scurria, Ian McGann, thanks for joining me on the show today. And thank you for watching. See you next time on Designing the Future.

The post Design Speed with Control: The Executable Digital Twin appeared first on Engineering.com.

]]>
Making the Digital Twin Work in Your Application https://www.engineering.com/making-the-digital-twin-work-in-your-application/ Tue, 29 Nov 2022 16:00:00 +0000 https://www.engineering.com/making-the-digital-twin-work-in-your-application/ Digital Twin is powerful across multiple applications. It can make your engineering project better and more efficient.

The post Making the Digital Twin Work in Your Application appeared first on Engineering.com.

]]>

This episode of Designing the Future is brought to you by Altair.

Engineering is more than applied science; it’s the application of creativity to solve real-world problems. Ideation is the cornerstone of design engineering, but a major difference between good engineering and great engineering is the ability to transfer ideas into renderings and workflows that generate products and processes that are true to the original concept. That concept itself is usually iterated, both in the mind of the designer and in the development process itself. For most of the history of engineering, the interface between ideation and rendering has been an informal process, buried in the mind of the designer.

Today, that’s changing with a new generation of engineering tools that simultaneously impose rigor on the design process while freeing the designer to explore novel solutions, many of which would be impossible to execute with simple computer-aided design. It’s the digital twin that makes this possible. Jim Anderton discusses the implications of the digital twin in real-world applications with Keshav Sundaresh, Global Director of Product Management – Digital Twin and Digital Thread at Altair.

Learn more on how digital twins help companies optimize product performance.

The transcript below has been edited for clarity:

Jim Anderton: Hello, everyone, and welcome to Designing the Future. Engineering is more than applied science. It’s the application of creativity to solve real world problems. Ideation is the cornerstone of design engineering, but a major difference between good engineering and great engineering is the ability to transfer ideas into renderings and workflows that generate products and processes that are true to the original concept. Now, that concept itself is usually iterated both in the mind of the designer and in the development process itself. 

For most of the history of engineering, the interface between ideation and rendering has been an informal process buried in the mind of the designers. Today, that’s changing with the new generation of engineering tools that simultaneously impose rigor on the design process while freeing the designer to explore novel solutions, many of which would be impossible to execute with simple computer-aided design. It’s the digital twin which makes this possible. 

Discussing the implications of the digital twin in real-world applications is Keshav Sundaresh, Global Director of Product Management – Digital Twin and Digital Thread at Altair. Keshav brings more than 17 years of customer success and engineering experience to his role for digital twin and model-based systems engineering at Altair, and is responsible for technical thought leadership, strategy, and driving the development of integrated software solution offerings that enable an open, traceable, collaborative and holistic digital twin and digital thread on Altair One.

Keshav has worked with customers across multiple industries globally on smart systems, mechatronics, robotics, and multi-body dynamics applications. Keshav, welcome to the show. 

Keshav Sundaresh: Thank you for having me, Jim. It’s a pleasure. 

Jim Anderton: Keshav, that’s quite a list, quite a resume: multiple applications, multiple industries. One of the interesting things about talking about the digital twin in engineering is that it feels like that legendary, mythical, universal solution. When computer-aided design was developed, it was really an aerospace application. It was driven by the aerospace industry, developed originally by one of the major aerospace companies and accepted there, and then it moved laterally to automotive, consumer goods and other areas. But when we talk about the digital twin, we don’t usually append it to a specific birthing industry, do we?

Keshav Sundaresh: No. No, we don’t. In fact, based on my experience working with a lot of different customers across different industries, I’ve come to observe that digital twins mean many things to many people. They take several different forms, and I think there are really three different contexts with which you can look at, I guess, creating a framework around digital twins.

I think the first context is what I call scope or scale. Depending on who you work with or what products you develop, digital twins can be made of a physical process, and by physical process I really mean it could be of a part, a subsystem, or an interconnection of subsystems into a product, or how the product interacts with an environment in terms of a process. But digital twins don’t get restricted to digitally mocking up a physical process in terms of capturing its elements and dynamics.

They also span biological processes: modeling the human anatomy or physiology, or having a library of virtual patients and a virtual test bench to optimize for health. But digital twins also extend further, to having a digital twin of a customer from a business process standpoint. To give you an example, you and I use credit cards every day, and there is actually a digital twin of each of us residing in our own banks, where our activities and transactions are being monitored, and based on our transaction history, anomalies get detected, fraudulent activities get detected, and so on. So the first context with which we’ve seen customers use, apply and benefit from digital twins is the pure scope or scale of it. The second context, in our experience, has been the purpose and the system life cycle.

We have seen a lot of our manufacturing customers, for instance, start from a product definition digital twin, which is more like an as-specified version of what the final product should actually do. So you would start with, let’s say, a voice-of-customer document that summarizes the key functional and non-functional requirements a product should possess, and you would have an abstract, first-principles understanding of how the total system should function. As the product matures, you move on from an as-specified version of the twin to an as-designed version, which is where you try to model the various elements and dynamics of your mechatronic system.

So it’s not just about simulating individual domains, but understanding how these different domains interact with each other and perform as a whole. Then, once you mitigate all your technical risks in the as-specified and as-designed phases of the digital twin, you can have an as-built configuration.

So you might have a physical prototype, and you might have some test data in an emulated form, for instance, that tracks a certain set of KPIs or behaviors. You would want to come back into your virtual systems model and tune the model to match the real-world behavior. Then, once you refine your prototype, you want to mass-produce it. So there is an as-manufactured variant of the twin, which is where you have applications around augmented reality or virtual reality: building training simulators to train operators, or helping optimize maintenance, for instance.

Last but not least, you have the as-sustained version of the twin. As you release these products and customers start using them, there are a lot of physical sensors with data being captured. So now you can do a round trip with all the physical sensor data from customer usage and behavior, and essentially have a digital representation, be it a machine learning or an AI model, that can predict, for instance, future states.

So really, in our experience, the second context for a digital twin is the system life cycle itself: there are what I call purpose-driven models that customers build depending on where they are in this map. But the third context, which is arguably more important, is that one of the key differences, in our experience working with customers, between a digital twin and a virtual prototype is that digital twins add value to our customers’ own customers.

That is perhaps an abstract term, the as-a-service component, but really the key purpose of developing digital twins is to optimize for health, if you’re looking at a biomedical or healthcare system; to optimize for service, if you’re looking at a product development type of system; to optimize for production, if you’re looking at a manufacturing facility and want to optimize throughput or quality or minimize downtime; or to optimize for engineering, because at the end of the day, it’s important to have that feedback loop of how your customers use your product so you can increase overall product quality and performance.

Jim Anderton: Keshav, it’s interesting, you brought up several things which resonate pretty heavily with me. I come from manufacturing originally, and there’s always an issue when developing a new product with things like tolerance stack. We have individual components which go into a sub-assembly, the sub-assembly goes into an assembly, and the assembly goes into a finished product. If the tolerances fall the wrong way, with all four components or sub-assemblies perhaps at the high end of a tolerance, or at the low end, then in the end you have a non-functional product, or a product that doesn’t fit well.

Then the regressive design engineering process is to establish which part, component or sub-assembly we have to pull into tighter control to make the entire system work. And historically, that is a very difficult thing to do with a complex product. So the risk-minimizing strategy, of course, was to pull multiple things into tighter control to make sure we hit the target, which of course adds cost. In that example in particular, is a digital twin a way to optimize the process before we actually hit the green button and start production? I mean, could we minimize risk at that level?

Keshav Sundaresh: Absolutely. I mean, that’s one of the key benefits of applying digital twins, even before, I would say, the concept level. Having a holistic understanding of how the complete product functions or behaves is crucial in terms of minimizing the number of design errors that you would otherwise discover at the tail end of the process. I read a book a few years back where the author said, “You don’t have to be right all the time. You just have to be less and less and less wrong.”

So it’s all about minimizing the number of risks that you might foresee, or that you might see at the tail end of your product development process. Or even worse, once your customers start seeing these product failures, you usually have much more risk to mitigate.
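Returning to Jim’s tolerance stack example, a Monte Carlo sketch shows how a model can expose that risk before production; the dimensions, tolerances and gap limit below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Four stacked parts: nominal length and +/-3-sigma tolerance in mm (invented)
parts = {"housing": (25.0, 0.10), "spacer": (5.0, 0.05),
         "bearing": (12.0, 0.08), "shaft_step": (8.0, 0.06)}
GAP_LIMIT = 50.10   # the assembled stack must stay below this (mm)

samples = {name: rng.normal(nom, tol / 3.0, N) for name, (nom, tol) in parts.items()}
stack = sum(samples.values())                  # total stack per virtual build
print(f"predicted failure rate: {(stack > GAP_LIMIT).mean():.2%}")

# Sensitivity: which part's variation drives the stack? Tighten the
# largest contributor first instead of tightening everything.
for name, s in samples.items():
    print(name, f"variance share ~ {np.corrcoef(s, stack)[0, 1] ** 2:.2f}")
```

The variance-share ranking is the point: it tells you which single tolerance to tighten, instead of the expensive pull-everything-tighter strategy Jim describes.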

But I can expand on the scope a little bit more, Jim, and talk about organizational behavior, if you will, for a second. Because as much as the practice of digital twins is about having a process for integrating different types of data streams, from a physical asset all the way down to different types of virtual assets, for me it’s also about cross-functional collaboration and using a common model as the primary means to collaborate, as opposed to using informal documents.

To give you an example, we worked with a customer, a wind turbine manufacturer, whose spend on research and development kept going up year over year. But at the same time, they discovered that the warranty costs for their products were also going up.

Jim Anderton: Not uncommon in many industries. 

Keshav Sundaresh: Right. So they were puzzled. They said, “Well, on the one hand we are investing more in research, but on the other hand, we also see a lot of warranty complaints.” So they decided to collaborate with us to understand what was going on. What we discovered in partnering with them were really two major challenges. One is what I call horizontal silos, and the other is what I call vertical silos.

What we noticed with this team, or rather these teams, is that going from the product definition, which is usually done in a requirements management environment or an enterprise data store, to different concept design models, and then further down into verification and validation, manufacturing and in-service, across these different functional areas, the primary mode of collaboration and communication between these different user groups was through informal documents.

We even have a tagline for it: the Microsoft Office Engineering Suite. I mean, people use the tools they know, but at the end of the day, they toss out a report and say, “Hey, look, this is what you have to really look into.”

That can lead to multiple sources of truth, and it won’t really give you the traceability to have a clear status of where the program is headed, a clear stock of the assumptions we have made, and a view of the errors we have yet to capture through digitization or virtual prototyping. So that’s what I call horizontal silos: instead of an inconsistent or informal way of collaborating, there is a need for a common model, or a model-based systems engineering type of practice.

But then on the flip side, in terms of the models themselves, a mechanical engineer really has his head down, focused on creating the best mechanical system possible. The electronics engineer, same story. The thermal engineer, same story. So we realized that a lot of these groups had a very strong understanding of modeling, analyzing, visualizing and optimizing their respective domains.

But when it came to understanding how these different domains interconnect, the total system dynamics, the living and breathing evolution of the models, they didn’t really have such a framework. So breaking those vertical silos is an activity for digital twins, in our experience, and breaking the horizontal silos is the practice of model-based systems engineering and, by extension, digital thread.

Jim Anderton: Interesting. As you were describing the horizontal and vertical silos, what immediately came to mind were matrices: systems of differential equations and that desire to compress them down as small as possible so you can resolve the damn thing. Now, from an engineer’s perspective, you brought up a couple of interesting points.

One is that the constraints at the design end are frequently time and money. In a perfect world, we would like to iterate a thousand times to achieve perfection, but the reality is we may be able to iterate four or five times within the six weeks or the $2 million of budget allotted, and that injects some conservatism.

Now you’re talking about a possible world where it may not be necessary to design for perfection before you start production, because you can pull information back from the end user in real time and integrate that feedback into the redesign process. So the redesign process, which historically was complete by the time you started making something, now becomes a redesign process that may extend over the entire life expectancy of the product or service.

Keshav Sundaresh: Absolutely. I mean, there is this whole world of connecting the engineering world with the world of data analytics. The angle with model-based systems engineering, and by extension digital thread, is basically the synergy between engineering and IT, if you will. We want engineers to do more of the IT/project management type of work without really feeling like they’re doing IT work.

But in terms of developing more accurate and reliable digital twin models of whichever system you’re trying to develop, it is important to have an open architecture where, regardless of which cloud vendor you choose, which IoT environment you use, or what type of sensors you want to track, you have an open enough system to stream that information as inputs to a virtual model that in a way has the same core, but is contextualized to the environment or to the customer’s usage.

So when you close the loop between the world of, let’s say, physical sensor data and the world of real-time machine learning or AI models, or real-time physics-based digital twin models, you are bound to have what is known as an intelligent digital twin, because you’re no longer relying on the previous assumptions you made for your product. You’re actually using real data, from the customer or from the physical sensors being streamed during usage, to monitor the performance, health and status of the equipment, machine or asset.

Jim Anderton: Yeah, that’s an interesting approach. We know that with the Internet of Things and the enormous collapse in the cost of sensors, we can embed them in large numbers in products large and small. But historically, sensor feedback was a matter of analog devices that sensed levels, things that were usually converted into an analog voltage signal or run through a simple analog-to-digital converter, and a bitstream was fed into some central processor somewhere, where it could be manipulated.

Now we’re looking at a world in which these sensors are not only microscopic in size, but relatively intelligent, so they actually do some of the signal processing at the sensor level. Yet those sensors might be sold by one of dozens of different vendors, in dozens of different applications within the same product. And I hear frequently from engineering firms that have difficulty collating and assembling that data, or even sifting out what is actually relevant information and what is not. How do the digital twin and digital thread play into that? Can they really help sift the rubbish from the gold?

Keshav Sundaresh: They can. I think there are a couple of different ways to look at this, at least in my view.

Number one, if you have, let’s call it, an idealistic definition or reference point for how your product should behave, then you’d have a corresponding physics-based representation of the model or the asset. That is the baseline for doing anomaly detection, if you will: figuring out whether the product is drifting toward its failure thresholds, whether the product is about to fail, and so on.

You can then start using failure data to train a set of neural networks and embed those neural networks either on the edge, where the computations can happen in real time, or in the fog, where they can run inside a real-time visualization and dashboarding environment like an IoT platform. Or the training process can happen offline, where you collect all the data from these sensors and use that information to train a machine learning model, and then embed that logic into a living, breathing digital twin system.

So that’s really the first angle: you start with a holistic understanding of how your product should ideally behave, you capture that behavior in a digital, I would say dynamic, representation, and you just keep appending to it over time and checking whether or not the data you’re receiving is off track or anomalous.

The second track is, if you really don’t have any past history of the product, and you’re starting from just the physical asset itself and want to do some data-driven discovery, there are also methods, solutions and practices out there that let you use low-code/no-code platforms to quickly do your data prep, quickly figure out which signal processing or statistical analysis checks you need to make, and send different types of alerts back to the user: over-the-air updates, or updates through your phone or a specific device, and so on.
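A compact sketch of the first pattern Keshav describes (the thermal model, threshold and signals below are invented for illustration): compare the live measurement against the physics-based expectation and flag residuals that wander out of the healthy band.

```python
import numpy as np

def expected_temperature(load_kw):
    """Idealized physics-based reference: steady-state winding
    temperature as a simple function of load (invented model)."""
    return 40.0 + 0.8 * load_kw

def detect_anomalies(load_kw, measured_c, k=2.5, healthy_std=1.5):
    """Flag samples whose residual (measured minus predicted)
    leaves the band learned from known-good operation."""
    residual = measured_c - expected_temperature(load_kw)
    return np.abs(residual) > k * healthy_std

rng = np.random.default_rng(1)
load = rng.uniform(10.0, 50.0, 500)
temp = expected_temperature(load) + rng.normal(0.0, 1.5, 500)
temp[400:] += 8.0                    # inject a developing fault
flags = detect_anomalies(load, temp)
print(f"{flags[:400].mean():.1%} false alarms, {flags[400:].mean():.1%} detected")
```

In a production system the one-line physics reference would be the executable digital twin itself, and the residual model could be a trained network rather than a fixed threshold, but the closed loop is the same.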

Jim Anderton: Keshav, we’ve talked about the process, but there are other aspects of engineering as well. One is project management, the management of the engineering process. I read a stat once that said that 10 years into most professional engineers’ careers, only about a third of them are still doing engineering. The rest are managing engineering processes.

And there’s an incredible irony, it drips with irony to me, that the most experienced and best engineers are, in many circumstances, not actually doing engineering. They’re attempting to herd the cats and get a team to move forward in the correct direction.

Are we talking about technologies that semi-automate that management process? Or is it still going to require one individual who stands over a monitor and says, “No, stop. That’s good enough. Move on to this aspect”?

Keshav Sundaresh: That’s a great question. The best way to answer that, Jim, is to go back to some of the classes I took from Stanford on behavioral psychology. There is a model, or framework, from Dr. BJ Fogg, who said, “Motivation alone is not enough to get things done,” because at any given point in time, your motivation can go up and down depending on your mood, depending on how you feel.

So while motivation is one axis of behavior, the other very important consideration is ability. When motivation is high, you might have the energy, time and effort to do the challenging things. But when you don’t have high motivation and the thing to do is extremely hard, it’s just a matter of time before you give up or go back to your old habits.

So for me, habit formation is actually at the core of project management and, by extension, the practice of model-based systems engineering and digital twin. I’ve come to realize that the easier, the more frictionless, you make the process, even for generating reports, importing data or automatically creating architectural models, the more people will at least try it and see the value in it, as opposed to staying skeptical or running away from even experimenting.

But to be more specific, Jim: there are, for instance, requirements management tools that track tens of thousands of product requirements, and there are systems engineers and project managers within enterprises who are, in a way, the bookkeepers tracking all these requirements and their evolution in terms of performance, cost, mass, what have you.

But then there are sub-teams that are only responsible for a handful of those requirements. No one person is responsible for all 10,000-plus requirements of an automotive or electronic system.

So what we’ve seen our customers want to do is quickly extract a subset of the requirements into a format of their choice (Microsoft Word, Excel, XML, I-REC, which is another open standard), quickly bring in that set of what I call document-centric requirements, and render them into a structural model, which captures the overall static structure: what are the different product subsystems, and how are the parts connected to the subsystems?

It can also very quickly render the leaf-level requirements for a specific sub-team, and render different types of behavior models: creating a use case diagram, developing an activity diagram, creating a schematic diagram, and so on.

But as you move down the ladder, you’re making it as easy as possible, because you have the ability to inherit various types of documents, bills of material and logical decompositions, if you will, into actual models. You can then start increasing the fidelity and maturity of those models over time.
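As a toy version of that document-to-model step (the CSV columns and requirement IDs below are invented for illustration; real pipelines would use formats such as the Word, Excel or XML exports Keshav mentions), a flat requirements list can be rendered into a subsystem structure so each sub-team pulls only its own leaf-level requirements:

```python
import csv
import io
from collections import defaultdict

# Invented document-centric export: one row per leaf-level requirement
RAW = """id,subsystem,text
R-001,battery,Pack capacity shall exceed 90 Ah
R-002,battery,Cell temperature shall stay below 45 C
R-003,motor,Peak torque shall exceed 300 Nm
R-004,chassis,Curb mass shall stay below 1800 kg
"""

def to_structural_model(doc):
    """Render flat, document-centric requirements into a static
    structure: subsystem -> its leaf-level requirements."""
    model = defaultdict(list)
    for row in csv.DictReader(io.StringIO(doc)):
        model[row["subsystem"]].append((row["id"], row["text"]))
    return model

model = to_structural_model(RAW)
for subsystem, reqs in model.items():
    print(subsystem, "->", [rid for rid, _ in reqs])
# The battery sub-team pulls only R-001/R-002; fidelity grows from here.
```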

Jim Anderton: It’s funny you mention it. My first engineering job, like that of many of my generation, was configuration control: a very boring, frustrating task of physically making sure that everyone was using the correct version of the rendering, and that if we were on iteration F, every blueprint actually was version F.

And that required a separate infrastructure of documentation to track the distribution of physical blueprints, because there were legacy parts from automotive manufacturers that were still done on paper, even though we had CAD. That literally meant someone had to make sure we physically removed prior iterations from the hands of individuals in multiple locations, so we were all on the same page: that single source of truth.

Doing that required a process which itself ended up as an engineering project that generated its own part numbers, so the document controlling configuration had a part number itself. It became a separate engineering product. And you can see how the process begins to spin out of control, until soon you’re no longer designing pumps; you’re designing processes to control processes that design pumps. Are we talking about a way to get away from that bureaucracy-heavy, stultifying effect?

Keshav Sundaresh: Absolutely. With the ability to leverage new and modern ways of applying convergence across simulation, high-performance computing and AI, and to extract metadata wherever it resides, be it on your desktop, on a server, and so on, there are newer, simpler and more straightforward processes now available that let different groups and enterprises move away from a document-centric collaboration process to a common, model-centric systems engineering process. Documentation should always be a side effect of a product; it should never be the first and only thing engineers do. With some of the solutions we’ve seen our customers use, for instance, documentation is created automatically as you start using a model as the common form of collaboration and communication.

Jim Anderton: Keshav, we’ve talked at the 60,000-foot level about a subject that is so fascinating and so diverse that I think we could have drilled down into any one of these dozen or so topics and talked for hours about just that one. But I’ve got to ask, as a concluding and perhaps fundamental question: is this technology going to change the way engineers ideate, think, design and develop new products and services? There are those in the popular press who claim that this is another form of automation, that we’re going to push a button, AI and generative design will engineer the future, and there will be no such thing as an engineer anymore. Do you believe that?

Keshav Sundaresh: Well, I wish I could predict the future, Jim. I’m just a simple engineer who loves to solve problems. For me, it’s about solving user-level problems, understanding the delta between where our customers are today and where they want to go, and taking small risks, smaller bets. But I do see that the more we continue to integrate technology with psychology, the more people will at least start experimenting with, if not standardizing on, these practices. And I’m also quite certain, Jim, that the amount of customer usage data we now have is fundamentally changing, if it hasn’t already changed, how people look at developing new products. You have so much information lying around that can be used to capture higher-order understanding and create knowledge-based models, rather than relying only on expert knowledge to make better decisions or take new risks in developing new products.

Jim Anderton: Incredible future. Keshav Sundaresh, thanks for joining me on the program. 

Keshav Sundaresh: Thank you so much for having me, Jim. 

Jim Anderton: And thank you for watching this episode of Designing the Future. See you next time. 


The post Making the Digital Twin Work in Your Application appeared first on Engineering.com.

]]>
Can Underground Agriculture Feed the World? https://www.engineering.com/can-underground-agriculture-feed-the-world/ Thu, 20 Oct 2022 17:35:00 +0000 https://www.engineering.com/can-underground-agriculture-feed-the-world/ Greenforges uses advanced engineering tools to iterate a novel way to grow crops sub-surface.

The post Can Underground Agriculture Feed the World? appeared first on Engineering.com.

]]>

This video is sponsored by SIEMENS.

Feeding the population of the planet, 8 billion and growing, is a fundamental challenge for the 21st century. The green revolution that began in the 1950s relied on massive chemical inputs in fertilizers, pesticides and herbicides. Today, environmental concerns, plus a warming climate and limited agricultural land, have been the impetus for new ideas in agriculture. Could we go below the surface and use advanced technology to allow food production anywhere, including cities?

Montreal, Canada-based Greenforges has developed a novel vertical system that allows agricultural production almost anywhere, without the traditional constraints of weather, irrigation or land use. It’s harder than it looks to grow food underground, and development uses advanced tools to iterate cost-effectively. Joining engineering.com’s Jim Anderton to describe the technology, and how simulation was essential in its development, are Jamil Madanat from Greenforges and Carl Poplawsky, Engineering Services Manager at Maya HTT.

Learn how Simcenter unleashes your creativity, granting you the freedom to innovate and allowing you to deliver the products and processes of tomorrow today.

The transcript below has been edited for clarity:

Jim Anderton: Hello everyone, and welcome to Designing the Future. Feeding the population of the planet, 8 billion and growing, is a fundamental challenge for the 21st century. The green revolution that began in the 1950s relied on massive chemical inputs in terms of fertilizers, pesticides and herbicides. Today, environmental concerns plus a warming climate and limited agricultural land have been the impetus for new ideas in agriculture. Could we go below the surface and use advanced technology to allow food production anywhere, including cities?

Well, Montreal, Canada-based GreenForges has developed a novel vertical system that allows agricultural production almost anywhere, without the traditional constraints of weather, irrigation or land use. Now, it’s harder than it looks to grow food underground, and development uses advanced tools to iterate cost-effectively. Joining me to describe how the technology works, and how simulation was essential to its development, are Jamil Madanat from GreenForges and Carl Poplawsky, engineering services manager at Maya HTT.

Jamil has a bachelor’s degree in mechanical engineering from McGill University, where he specialized in machine design and project management. He has five years of professional experience in the impact startup world, with a focus on social entrepreneurship and sustainability. Jamil is currently the CTO of GreenForges, which plans to launch its first underground controlled-environment agriculture farm early next year.
Carl holds a master of science degree from Purdue University in Mechanical Engineering, and before his appointment as engineering services manager at Maya HTT, he was a senior applications engineer. Previously, Carl was VP of engineering at the Engineering Sciences and Analysis Corporation, and was technical consultant with the Structural Dynamics Research Corporation. Carl and Jamil, welcome.


Jamil Madanat:
 Hi, Jim. Thank you.


Carl Poplawsky:
 Thank you. Glad to be here.


Jim Anderton:
 Jamil, can we start with you? This is an intriguing solution to a pressing problem, feeding 8 billion plus people and growing. The stresses on the environment, in the Western world we’re paving over agricultural land, we’re looking at a change in climate. There are a lot of factors here that are putting constraints on agricultural production. How big is this problem? I mean, do we need to find these advanced technology solutions to feed ourselves?


Jamil Madanat:
 Absolutely. Actually, this is how the idea came about. So our founder, Phil, was going through a report describing projected food shortages all around the world, and the report was looking into different methods and means that would alleviate this food crisis by looking into urban agriculture. The study looked into, okay, how can we leverage urban agriculture to provide more food in the cities, and was looking into rooftop farming, indoor farming, shipping containers. A couple of days after that, Phil happened to be thinking about this problem while looking out a window, and saw a water well.
That’s when it clicked in his mind: why can’t we use the underground for food production? The conclusion of that report was that even if we leverage urban agriculture, we would still cover only 4 to 5% of that food shortage. But now, with the underground being an option, I think we have a lot more space to utilize and a lot more means to alleviate the projected food crisis that’s approaching faster than what we’re ready for.


Jim Anderton:
 Jamil, can you give me a brief overview of the GreenForges system? You mentioned underground. Underground, sometimes we think of an abandoned coal mine or a cavern or something, but these are rather more like missile silos, aren’t they? Sort of a cylindrical vertical shaft?


Jamil Madanat: 
Precisely. So we’re taking a more simpler approach here. So what we’re starting with is think water wells or just pile foundations, same ones you would may use for building foundations. Initially, we’re starting with a diameter size of 60 inches, so closer to 1.5 meters. So it’s nothing too big. But the advantage that you would really get is going underground and now we’re experimenting with a model going 15 meters underground. With that you’d be surprised how many plants you can fit in there and grow. The scalability would just really grow much faster as you arrange these forges in a grid system.
As you mentioned in the intro, it is a controlled environment agriculture system. With that, it means we get more control over the plants that we want to grow, expedite the harvest cycle, get more precise and refined flavors and control the crops that we want to use. So it’s not just only we’re leveraging the underground system, filling it with plants, but also expediting harvest cycles one, and two, just becoming weather independent. So that will give a lot of additional advantages with keeping the food production running all year long too.


Jim Anderton: 
Jamil, is this a hydroponic process? Are you growing plants in soil? How does it work?

Jamil Madanat: Yes, it is a hydroponic system. Basically, for those unfamiliar, a hydroponic system is just water mixed with the nutrients and oxygen that the plants need, running continuously, just touching the roots, giving the plants the nutrients they need. It is a continuous-loop system. So really, we just keep reusing this water that’s getting fed to the plants while monitoring at the surface the nutrients being consumed and how much extra oxygen it needs, topping it up, and then recycling the water.
So on one hand, this saves a lot of water; almost 90% of the water, if not more, is getting recycled back within the system. The second advantage is pest management. We see a lot of contamination that happens when using soil-based systems, but with hydroponics, you’re really creating a barrier that prevents pests and contamination.
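[To make the water savings Jamil describes concrete, here is a minimal back-of-envelope sketch. All volumes and the recycle fraction are hypothetical illustration values, not GreenForges figures.]

```python
# Back-of-envelope water balance for a recirculating hydroponic loop,
# compared with run-to-waste irrigation. All numbers are hypothetical.
delivered_per_day_l = 400.0   # water circulated past the roots each day (assumed)
plant_uptake_l = 30.0         # water the plants actually consume (assumed)
recycle_fraction = 0.90       # share of the remainder recovered and reused (assumed)

# Run-to-waste: everything delivered is used once and discarded.
open_loop_demand_l = delivered_per_day_l

# Closed loop: top up only the uptake plus the unrecovered fraction.
losses_l = (delivered_per_day_l - plant_uptake_l) * (1.0 - recycle_fraction)
closed_loop_demand_l = plant_uptake_l + losses_l

saving = 1.0 - closed_loop_demand_l / open_loop_demand_l
print(f"fresh water needed: {closed_loop_demand_l:.0f} L/day, "
      f"{saving:.0%} less than run-to-waste")
```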


Jim Anderton: 
What sort of crops do you anticipate will be used with this system? I see some green leafy vegetables, I believe, in your background.


Jamil Madanat: 
Yeah. What you see here is a couple of what we call grow modules, harvested from underground, extracted and organized in a radial manner here for harvest. Initially, we’re starting with leafy greens and herbs. Now, the good thing about leafy greens is that they require less input, from nutrients to energy to light, and they have faster harvest cycles. So starting with those gives us the advantage of running harvest cycles faster, so we get to iterate faster. Plus, these plants tolerate variations in the environment and in nutrients much better. This way, any changes and tweaks we make to the system will still end up with a higher success rate of production.


Jim Anderton:
Now, it’s interesting. So you’ve found a way to take farming underground, and of course we’re interested in the engineering aspects of this. By the sound of it, there are several. It sounds a bit like a closed-loop system. So we’re talking about gas, we’re talking about water, we’re talking about heat. So there are energy flows and there are physical flows happening inside these chambers, and there’s also a structural component, in that you have a vertical shaft. Are these things made of ferrous or non-ferrous metals, reinforced concrete? What’s the basic structure made of?

Jamil Madanat: 
So the casing would be made of steel. We’re using a special coating. The special coating has to take into account multiple factors. One, obviously, being non-corrosive, nothing to contaminate the plants. Second, it has to be antimicrobial, so it doesn’t promote any growth of algae. Third, we’re adding a white coat layer that promotes light reflection.

Jamil Madanat:
In this tight space that you have underground, you’ll be able to capture most of the light that’s feeding the plants. So the casing is steel, with coating on the outside and on the inside. As for the internal structure, it’s a pretty simple structure. I can’t get into too much detail about the current design we’re working on; it’s still being developed and is protected by a patent. But I wouldn’t say it’s anything too complicated, and we’re always keeping food safety and operational ease in mind when designing these facilities.


Jim Anderton: 
Well, it sounds like you have multiple systems that are interrelated and overlapping at the same time. You have a mechanical engineering task, and you also have dynamic systems operating at the same time. How complex is this from an engineering perspective? What sort of tools do you use to design these things? Are we talking about a conventional CAD system, FEA, simulation? How do you go about this?


Jamil Madanat:
 Absolutely. So generally, when we look at the design of the forge, we split it between structural, mechanical, electrical and digital systems. Now, the biggest challenge that we found with designing the forges is that once you go underground, there’s very little literature on the climatization of these farms, which has to be very finely tuned, a very controlled environment. As a side note, you’d be surprised how sensitive plants can be to certain variations, as we plan on expanding the crops. So we have to be very certain what the environment looks like underground. And to do that, we had to work with Maya HTT to simulate how the heat transfer happens at different soil levels, different soil types, different humidities. So maybe that’s something Carl can tell us a little more about; they provided simulation work to us, helping us understand better how to climatize the environment underground.


Jim Anderton: 
Carl, tell us about it.


Carl Poplawsky:
 The simulation services group at Maya HTT focuses on what we call virtual prototyping. Virtual prototyping is a technique that we use to test the mechanical design long before it’s manufactured or before physical prototypes are produced. We use computer-aided engineering software, CAE software, to do that. We standardized on Siemens software products, Simcenter 3D, and we look at, in this case, the thermal and flow situations going on within the vertical farm. Our contributions to this project focused on energy efficiency and water use. So we use the software to predict the cooling load within the silo, or the vertical farm. That cooling load is a function of not only conduction to and from the surrounding earth, but also quite a lot of heat load caused by the lighting that is necessary for the plants to survive.


Jim Anderton:
 Carl, it sounds like you’ve got several inputs going on at the same time. It’s also a system that’s vertical and goes to quite a depth. Is there a temperature gradient from the top to the bottom of this system?


Carl Poplawsky: 
Yes, absolutely. It starts with the earth itself, in that the surface of the earth is basically at ambient temperature. As you go down, the earth reaches a relatively constant temperature within the depth that we’re talking about. So you have a temperature gradient going down through the earth, and then when you’re pumping air down into the farm, you’re going to see heat transfer happening, the air picking up heat, and then you have to bring that hot air back up and through the heating, ventilation and air conditioning system. We call it the HVAC system. So we helped with the preliminary sizing of the HVAC system, and also the piping and pump sizing needed to remove the condensation that collects at the bottom.


Jim Anderton:
 Carl, tell me about humidity. That’s something where, for anyone who’s a greenhouse operator, of course, controlling humidity is a major factor. This sounds much more challenging. This is a rather closed system, with artificial light and the heat input coming from that, plus the aspect ratio of this operation seems to be quite high. It’s a tall, skinny cylinder. Is that a factor?


Carl Poplawsky: 
Well, first of all, the software handles humidity and condensation; it can predict the condensation that collects on the walls and also provides the relative humidity distribution throughout the system. And of course, that has to be controlled pretty tightly for the plants’ health. And that will certainly influence how the final HVAC design evolves.


Jim Anderton:
 When you’re designing an HVAC system, and Jamil, maybe I’ll throw us back to you the same way is that it’s much engineering development is iterative and in a lot of cases if you’re breaking out, breaking new ground and doing something which has not been done before in the way that you’re doing this, building prototypes, testing, breaking them, going back and redesigning is a very common way to design components in areas like automotive that I’m familiar with. You’ve got a very large and expensive and complex process here. You can’t dig a hundred holes and then iterate a hundred different designs and then go back and then figure out what works down there. How do you cut the corner on that? We know simulation is a great tool to do this, but even with simulation is that you’ve got multiple variables interacting at the same time here down there. Do you have simplifying assumptions you work with or do you just crunch the numbers in a brute force way? How do you attack it? How do you attack this problem?


Jamil Madanat:
 Absolutely. So, obviously, following the engineering method, we go with subsystem testing. You can’t test the whole system all together, and as you said, drilling multiple holes in the ground is an expensive process, the same as you can’t just keep flying rockets every time you want to test something. So what we really try to do is isolate systems and experiment, iterate and prototype with them, whether it’s the lighting system we’re working with, the digital systems and how they work with the lighting system, or the climatization system and how it works with the plants, and then there are the things we can’t really replicate.
So we do have multiple labs running multiple experiments in parallel, whether it’s the extraction system or the integration of, call it, the hydroponic system with the controllers. And specifically for the HVAC and climatization, we said, okay, we want to validate a couple of major assumptions: how much heat does this soil absorb during the day cycle? How much heat does it retain during the night cycle? And let’s run the simulations based on these assumptions. That gives us at least the groundwork of what we know is true, and then you build foundations from there. But yeah, always working from first principles and running subsystem tests, so you can validate and build on top of these building blocks, is generally the easiest, well, easy is an overstatement, but the best approach to get a more comprehensive design.


Jim Anderton: 
Plants are an interesting phenomenon. We know that early designers of space station systems had considerable difficulty with things like humidity control and temperature control, also in a closed system. In this case, you’re looking at plants, and the amount of biomass inside your system is considerable. And of course, transpiration is a factor here, so the plants are an active component, changing the environment they live in. Is there a difference depending on the type of crop that you grow? Are those lettuce leaves different from a different type of plant?


Jamil Madanat: 
 Back to controlling the climate for the plants, we have external factors and internal factors. On the external side, I’d like to point out an important piece of the design we’re taking into consideration that I haven’t alluded to before: when you go underground, almost worldwide, below the seven-meter mark, the temperature converges to the annual average, regardless of what the surface variation is like. When you do the simulation, you want to take that upper band of variation into the climatization model, and then account for the consistent climate below it. Now, the internal variations that take place depend, one, on the crop, and second, on the growth stage of the crop.

Initially, for about the first two weeks, we assume almost no humidity generation. The plants are just growing, evapotranspiration is very low, and then it increases exponentially in the majority of crops. Now, different crops also breathe differently and need different humidity levels, different temperatures and different light requirements. Taking all of this into account, initially we’re working with leafy greens, which have a very narrow window of variation. Then, as we validate one, we build on top of the others. I hope that answers your question.
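[The seven-meter convergence Jamil describes is consistent with the classic one-dimensional conduction result for a damped annual temperature wave in soil. A quick sketch, using an assumed textbook soil diffusivity rather than GreenForges data:]

```python
# Damped annual temperature wave in soil (classic 1-D conduction result),
# illustrating why the ground converges to the annual mean a few meters down.
# The soil diffusivity is an assumed textbook value, not a measured figure.
import math

alpha = 0.7e-6                                  # soil thermal diffusivity, m^2/s (assumed)
omega = 2.0 * math.pi / (365.0 * 24 * 3600)     # annual angular frequency, rad/s
damping_depth = math.sqrt(2.0 * alpha / omega)  # e-folding depth of the wave, m

def swing_fraction(depth_m: float) -> float:
    """Fraction of the surface temperature swing that survives at this depth."""
    return math.exp(-depth_m / damping_depth)

for z in (0, 2, 5, 7, 10, 15):
    print(f"{z:>2} m: {swing_fraction(z):6.1%} of the surface swing")
# Around 7 m the annual swing is already down to under a tenth of its
# surface value, consistent with the convergence described above.
```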


Jim Anderton: 
 It does. Carl, if a mechanical engineer were to design, for example, a ground-source heat pump, you could approximate that roughly as a coiled-tube heat exchanger and think about the classic convection, conduction and radiation equations of heat transfer, integrating them with your form factors, your shapes. But in this case, there are so many other factors complicating the issue. Is this something you could run on stock simulation software? Does this require a coded solution, or a low-code or no-code solution?


Carl Poplawsky: 
No, it doesn’t require any additional coding. The software out of the box handles this problem. It is quite complex. Not only do you have the temperature gradients going down through the earth, but you have the humidity distributions and everything else going on. Well, actually in the initial simulations, we looked at just pumping air down in without the HVAC in order to calculate what sort of HVAC requirements we need. The software is quite sophisticated at this point can handle those kind of things. Again, what we’re really trying to do is shortcut the design process to some extent in that. As Jamil mentioned, and you mentioned you can’t go out and drill 50 holes. So we’re going to drill 50 holes in a virtual environment with software and look at the mechanical thermal float performance of design. Then when Jamil is ready to drill those holes, he’s going to drill only a couple because he’s going to have a much higher probability of his design being successful, thanks to the virtual prototyping,


Jim Anderton:
 Carl, many users of simulation tend to think of it as a validation tool as much as a development tool. We know where we want to go, here are the targets, we check our design: does it work or does it not work? However, we know that simulation can also feed useful information back into the design process in multiple ways. In the aerospace industry, for example, they sometimes discover things they didn’t understand or didn’t realize about a design by actually flying it virtually with simulation. Does that happen in this case too?


Carl Poplawsky: 
 Absolutely. One of the major advantages of this technique is that we can look at what I call coffin-corner conditions. These are conditions that maybe you can’t physically prototype. One example is the spacecraft industry: when you’re flying something in outer space, and we get involved in a lot of spacecraft applications, you can’t actually test all those things in a terrestrial environment.


Jim Anderton:
 It’s funny you mentioned Jamil, more than one engineers proposed that what you are doing may be the only way to actually feed colonies some places like Mars and the moon. Is there a possible connection here?


Jamil Madanat: 
 Potentially. I mean, virtually, you can drill these systems anywhere. They’re insulated from whatever is happening on the surface, regardless of weather conditions, so severe hot or cold climates aren’t an issue. You can also build this standalone system, lock it, seal it, let it run its harvest cycle and then move on afterwards. With little maintenance, most of the work is done once you put in the plants and again once you want to harvest them, which makes them potentially viable for multi-planetary agricultural systems. So you never know.


Jim Anderton:
 Jamil, how did you go about approaching Maya HTT and Carl in this process? Did you have a problem and then say, “we have a complex problem here we need to solve,” or did you run up against a roadblock that seemed insurmountable? How do you get to that point where you say, “Wow, I need to consult with someone outside my industry just to fix this problem?”


Jamil Madanat: 
I’ll share with you kind of the thought process that we went through here, which is try to understand how the environment for the plants will look underground and ultimately you just want to make sure the crops are climatized based on the what we call the crop [inaudible 00:21:59]. Now, generally you design based on surface conditions. You say, okay, well the temperature outside is going to be as such, we want to climatize the environment up. Inside we want provide this much heating or cooling or dehumidification because outside the kind of temperature and climate variables are very well defined. Underground, while looking through different literature and even engineering formulas in front of me, I realize that the problem is just multidimensional or multifaceted. It’s not only I have to understand, well, okay, we have these LED columns running between 12 to 16 hours a day providing heat to the soil.

How much of this heat is the soil going to absorb? What kind of soil absorbs the most versus releases heat the most? Once you go underground, you have a wider gradient of different soil conditions and different humidities. Based on the humidity, how much heat is going to be absorbed, and how will it be retained? And then let’s scale this up a little: you have a grid of, let’s say, three by three or 10 by 10 forges. How close should they be to each other so they won’t affect each other thermally? That kind of dynamic behavior of temperature distribution underground, at different soil conditions, is not something you can just pull off on the back of a napkin. That’s definitely where the Maya HTT team came in very, very handy, helping us understand it.


Carl Poplawsky:
 Our simulations found that the performance is heavily dependent on the soil conditions, the amount of moisture in the soil, whether it’s sand or clay or something like that. When you run these simulations, you have to make some judgment calls about how large the earth domain is going to be, because it has to extend out well past the vertical farm in order to get correct results. So, for instance, if you look at image number one, what we’re showing here is the temperature distribution in a cylinder of earth in which the shaft, or the farm, is contained. And we’ve got some color bars here: red is hot, blue is cold. So you can see it’s cooler at the bottom. You can see how the temperature contours flatten out. That’s giving us a good indication that this basically arbitrary cylinder of earth is large enough that the simulation will be sufficient. And of course, this is also going to tell us how closely you can space these things. If you look at image number two, this is a closeup of the air domain within the farm itself. This is just the top. The cylinder over on the left is the air handler.
You can see the little squares in the middle are the LEDs. You can see how they’re producing heat. Again, red is hot and blue is cold. So these are examples of the kinds of results we can get. These are, of course, temperatures. If you look at, I’m sorry, image number three, you’ll see that this is showing the temperature contours going down through the depth, and it’s very easy to see now that we do have some significant gradients there. Image number four shows the velocity profile. So we’re pumping air down in, and then it has to come up. And of course, we don’t want to tear the leaves off the plants, so we have to be concerned about what that velocity profile actually looks like.


Jim Anderton: 
 Remarkable. Growing things is something that’s as old as humanity, so intuitively we think of a simple process: you’re going to take a greenhouse and stick it underground. Carl, you’ve just shown us that this is more akin to the engineering environment of a space station than it is to farming, in a sense. There’s a lot of complexity, a lot of things going on here. Generically, for companies that have this many things going on, like you have here, Jamil and Carl, how much does an engineer have to know to approach you with a problem? Do they simply have to say, “These are the parameters I have to hit with this design. Am I going to get there?” Or do they have to say, “I need results on this, this, this and this to understand how they interrelate?” Just how deep do they have to go?


Carl Poplawsky: 
 Yeah, great question. Really, it’s the virtual prototyping techniques that provide the information for how all these parameters interact with each other. As mechanical engineers, we think about what we call control volumes, and we have boundary conditions on those control volumes. Here, the control volume would be that cylinder of earth, and Jamil is providing us certain boundary conditions that are going to influence the simulation: for instance, the heat dissipation of the LED lights, the transpiration rate of the vapor coming off the plants. Those are basic boundary conditions. And then we take it from there, providing the information on how all these parameters interact with each other. In particular, we can change the performance of the HVAC system, change the locations of air ducting, inlets, outlets, things like that, and look at the total performance of the system.
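[As a rough illustration of the control-volume bookkeeping Carl describes, here is a minimal energy-balance sketch. Every boundary-condition value is an assumed placeholder, not project data.]

```python
# Rough control-volume energy balance for the farm's air domain. All
# boundary-condition values here are assumed placeholders, not project data.
LATENT_HEAT_WATER_J_PER_KG = 2.45e6   # latent heat of vaporization near 20 C

q_led_w = 3000.0                 # heat dissipated by the LED columns (assumed)
transpiration_kg_per_h = 1.2     # water vapor released by the plants (assumed)
q_to_soil_w = 400.0              # net heat conducted into the surrounding earth (assumed)

# Transpired vapor shows up as a latent load the HVAC must remove.
q_latent_w = transpiration_kg_per_h / 3600.0 * LATENT_HEAT_WATER_J_PER_KG

cooling_load_w = q_led_w + q_latent_w - q_to_soil_w
print(f"latent load: {q_latent_w:.0f} W; HVAC must remove ~{cooling_load_w:.0f} W")
```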


Jim Anderton: 
Jamil, did you have any idea when you joined this project that it would be as complex as it is?


Jamil Madanat: 
 Not initially. Not initially, because you look into an idea and you’re like, “Okay, I think we can make it work.” But the more you dig in, the more you realize there’s just way more to it. It really is so multidimensional, as I mentioned: the mechanical, the structural, the lighting, the horticultural side. So ultimately, we have the plants at the core of our design, and we want to make them happy. On the other side, we have our operators on the farm, and you want to make it accessible for them to extract and clean and work with. And the unit economics have to work out too, the capital expenditure and operating expenditure. So trying to balance all these parameters together is a challenge. But that’s engineering, right? It just keeps you going, and it’s that excitement of discovering something new every time we approach a new problem. So yeah, so far I’m really enjoying it.


Jim Anderton:
 It is exciting. And one last question, Carl. For design engineers who are working with complex systems and have problems that require the kind of professional help that you and Maya HTT can offer, what’s the number one piece of advice you could give them to prepare themselves before they approach you with a problem? What homework should they do before they present you with an issue?


Carl Poplawsky:
 Yeah, I don’t think it’s really not that unusual. You have to think about what your goals are going to be. If somebody comes and says, “I need a thermal simulation.” I don’t have a lot of information there. If he tells me that I don’t want the material to exceed certain temperatures and I need to minimize my energy consumption, now we have something to work with. So it’s really just like anything else in life. You have to decide what your goals are going to be and then we help you meet those goals.


Jim Anderton:
 An exciting project. Jamil Madanat, GreenForges. Carl Poplawsky, Maya HTT. Thanks for joining me on the show.

Learn how Simcenter unleashes your creativity, granting you the freedom to innovate and allowing you to deliver the products and processes of tomorrow today.

The post Can Underground Agriculture Feed the World? appeared first on Engineering.com.

]]>
Is Simulation the Way Forward to Fully Autonomous Driving? https://www.engineering.com/is-simulation-the-way-forward-to-fully-autonomous-driving/ Fri, 17 Jun 2022 15:15:00 +0000 https://www.engineering.com/is-simulation-the-way-forward-to-fully-autonomous-driving/ Mike Dempsey of Claytex on why simulation holds the key to robust, reliable autonomy.

The post Is Simulation the Way Forward to Fully Autonomous Driving? appeared first on Engineering.com.

]]>

Truly autonomous vehicles have been the dream of motorists and automotive engineers from the beginnings of the industry and have been predicted by futurists and writers for over 60 years. Few technologies have been so desired and anticipated, and few have been so difficult to realize. The major inhibiting factor to its development, low cost and portable computational power, has largely been solved, and advanced driver assist systems exist today. 

But as of yet, full autonomy remains elusive. The problem is incredibly complex, and the imperative to produce systems that deliver road safety at levels matching or exceeding human drivers puts a premium on system redundancy and error proofing. An increasing number of industry experts believe that only simulation can deliver the level of testing necessary to create truly robust systems in a world where the number of edge cases seems infinite. 

Joining Jim Anderton to discuss the implications of simulation in this space is Mike Dempsey, Managing Director of vehicle simulation developer Claytex, a TECHNIA company. 

Learn more about how simulation is driving the future of transportation.

This episode of Designing the Future is brought to you by TECHNIA.

The transcript below has been edited for clarity.

Jim Anderton:
Hello everyone, and welcome to Designing the Future.

Autonomous vehicles, self-driving cars and trucks, it’s been the dream of motorists and automotive engineers from the beginnings of the industry and it’s been predicted by futurists and writers for over 60 years. The major inhibiting factor to its development, low cost and portable computational power, well that’s largely been solved. And advanced driver assist systems, they exist today. But as of yet, full autonomy remains elusive. The problem is incredibly complex and the imperative to produce systems that deliver road safety at levels as good or better than human drivers puts a premium on system redundancy and error proofing. 

An increasing number of industry experts believe that only simulation can deliver the level of testing necessary to create truly robust systems in a world where the number of edge cases seems infinite. Joining me to discuss the implications of simulation in this space is Mike Dempsey, managing director of Claytex, a TECHNIA Company. 

Claytex has been working on simulation technology for autonomous vehicles since 2016, including vehicle physics models for full-motion driving simulators for Formula 1 and NASCAR. Mike is an automotive industry veteran who studied automotive engineering at Loughborough University and developed powertrain simulation systems at Ford and Rover. Mike, welcome to the show. 

Mike Dempsey:
Hi, Jim. Great to be here. 

Jim Anderton:
Mike, this subject is a hot topic right now. It’s all over the media, and there are so many misconceptions out there, even among industry professionals and engineering professionals. But let’s start by talking about self-driving from an information perspective. Self-driving is about absorbing a huge amount of information from the environment around the car, and that’s got to be processed. And decisions have to happen at thousands per minute to make this work. That’s true whether it’s a human driver or a machine. 

Now, some companies use real-world driving data to try to generate the algorithms necessary to make these decisions. But the basics, motorway and freeway driving, are at this point mastered. Edge cases appear to be where the action is and where the difficulties are. So this would suggest that there are diminishing marginal returns in real-world testing: the better you get at it, the more difficult it is to get to the edge cases, and the farther apart in time they become. Is this a factor? Are there natural, sort of intrinsic limits to real-world testing? 

Mike Dempsey: So the edge cases, as you say, they’re where all the risk is for an autonomous vehicle, or for any vehicle really. They’re the things that happen very rarely. So you can drive for 10 years and never have an accident, but one day you might have some sort of strange incident occur. Those are the edge cases. Those are the things that are going to challenge your autonomous vehicles, and those are what we need them to be prepared to handle. 

But we can’t really recreate those or capture those through real world testing, because driving around on public roads, we don’t really want to have an accident. We don’t want to try and cause those accidents to happen on proving grounds, in controlled facilities because these prototype autonomous vehicles cost perhaps millions of pounds to put together. We don’t want to risk damaging them. 

So the way to do it is to look at simulation. And in simulation, we can put our autonomous vehicles into all sorts of edge cases, really high risk situations to see what will they do, how will they handle it. Then we’ve still got to get to the point where we can simulate all of that. We’ve got to have the simulation technology that allows us to fully immerse the autonomous vehicle with all of its sensor suite into these complex environments where we can put these edge cases to them. And that’s really the challenge that we are working on. 

Jim Anderton:
We at engineering.com have talked to many in the industry, roboticists and software people, and I’ve heard varying opinions about the amount of context necessary to make sense of the self-driving environment. Some will tell me that it ultimately will be important to know the difference between a raccoon, a cat and a piece of paper blowing in the road. Others say, “No, we can simplify the problem greatly by resolving it down to an obstacle and simply determining whether it’s of a significant mass or what its trajectory is.” Is that debate still going on in the industry? Does it still matter? Is the machine going to need to know what it’s seeing to drive us safely? 

Mike Dempsey:
It needs to know to some extent what it’s seeing, because it needs to know whether it should expect that thing to move. And if you know what kind of thing it is, you can expect how fast it might move. So if you can recognize that that’s a jogger, you can predict a certain speed that they’re going to go and start to think about the trajectory they’re going to take over the next few seconds. If it’s a dustbin, well, it’s probably not going to move. Unless it’s windy, and then you might want to recognize that it’s a bin and it’s windy and it might move. So there are all these things that we as humans can extrapolate from the objects that we see and recognize, and the computers, the AI systems, need to be able to do something similar, to understand and be aware of what might happen in front of them in the next few seconds. 

Jim Anderton:
Now that’s interesting. So in the case of your jogger example, for example, as a human driver, I have an expectation that a jogger will move at the speed of a running human. And therefore that individual might be two meters in front of my car in the next second, but he won’t be eight meters in front of my car in the next second. Is that the same sort of process that the machine has to do? 

Mike Dempsey: Exactly. The machines are trying to work out, “Where is it safe for me to move? Where can I drive? Where is the safe space that I can move to in the next 2, 3, 5, 10 seconds?” And so understanding where all of the moving objects in the world around them are going to end up, or are likely to end up, helps them plan that out. So if they know that you are going at four miles an hour because you’re walking quickly, or you’re jogging at eight miles an hour, you’re not suddenly going to leap out into a particular lane in front of them. And they can use that information to help them plan. 
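[A minimal sketch of that class-conditioned motion prior, with assumed speed values purely for illustration:]

```python
# Class-conditioned motion prior: the object's classified type bounds how far
# it can plausibly move over the planning horizon. Speeds are assumed priors.
SPEED_PRIOR_MPS = {
    "pedestrian": 2.0,
    "jogger": 4.0,      # a jogger 2 m away can be ~4 m away in a second, not 8 m
    "cyclist": 8.0,
    "bin": 0.0,         # static, unless conditions say otherwise
}

def reachable_radius(obj_class: str, horizon_s: float, windy: bool = False) -> float:
    """Worst-case distance the object could cover within the horizon."""
    speed = SPEED_PRIOR_MPS.get(obj_class, 15.0)  # unknown class: assume vehicle-like
    if obj_class == "bin" and windy:
        speed = 1.0                               # wind-blown debris can move
    return speed * horizon_s

print(reachable_radius("jogger", horizon_s=1.0))  # -> 4.0
```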

Jim Anderton:
Now, for human drivers, in the example we’re using, we have an intuitive sense of things like wind: wind speed, wind direction. So basically, if we have a sense that it’s blowing left to right, that’s where we expect the rubbish to blow from, for example. Is that level of sophistication necessary for autonomous driving? I mean, can these things process information that quickly? Or can they just react to the obstacle when it pops up? 

Mike Dempsey:
It’s a good question. And I’m not sure I really know the answer to that. I mean, you’re right. We instinctively, or we recognize from what we see around us that the wind’s blowing across us. That maybe helps us anticipate that if we’re going to go past the truck that we might suddenly hit across window as we go past it. It would be helpful if the vehicles could know that, because you know as you’re going to do that you might have to put some correction on the steering. If the vehicles can also anticipate that, it will probably give them a smoother ride, but I don’t honestly know whether they can or not. 

Jim Anderton:
Yeah, I’m not sure anyone does. A question Mike about the complexity of this problem. Motion in pure kinematics of course is relatively easy to resolve, not too many systems of equations, but we’re talking about things which may move erratically and which may have multiple degrees of freedom. Mathematically, how complex is the problem of predicting even say the potential trajectory of an obstacle? 

Mike Dempsey:
Well, it’s really challenging. So actually, the way that it’s done inside the vehicles is they’re using neural networks to be able to predict from patterns of where you have moved, where you’re likely to go and predicting multiple different paths to have some sort of confidence or some sort of prediction of that could be in this space. From a simulation point of view, we’re trying to then recreate all of this in a virtual world with all of these things going on. 

And that’s really hard. The physics of how a car drives around a road we can. Do the randomness that all of us that sit in those cars and control them and drive them around these worlds and do often unpredictable things, that’s the bit that’s really challenging. And the way that we deal with that is by having these edge cases where we’ve been able to use data from real world sources to capture strange things that happen. 

Jim Anderton:
It’s interesting. In systems that are sort of pseudo automated, I think as some of the point of sale devices for example that we use or even some animation, even the Teams that we’re using right now, latency is always an issue. So the hardware can only process information so fast. Is that something you have to factor when you consider were your simulations for self-driving? 

Mike Dempsey:
Yeah, absolutely. So as you say, all the sensors run at different rates. They produce information for the vehicle at different frame rates. So cameras might be running at 30 frames per second, the lidars will be spinning at 300 RPM, the radar will be producing data perhaps every 50 milliseconds. So inside the simulation, what we’ve got to do is make sure that all of those sensor outputs are coming at the right rates and are including all the right dynamic effects as they do that. 
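[A simple way to picture that multi-rate scheduling is a master simulation loop that emits each sensor's output at its own period. This sketch uses the rates Mike quotes; the structure is illustrative, not AVSandbox's actual implementation.]

```python
# Multi-rate sensor schedule in a master simulation loop, using the rates
# quoted above. The structure is illustrative only.
SENSOR_PERIOD_S = {
    "camera": 1.0 / 30.0,   # 30 frames per second
    "lidar": 60.0 / 300.0,  # 300 RPM -> one full revolution every 0.2 s
    "radar": 0.050,         # a data set every 50 ms
}

def due_sensors(t, dt):
    """Sensors whose next output boundary falls inside the current step."""
    return [name for name, period in SENSOR_PERIOD_S.items()
            if int(t / period) != int((t + dt) / period)]

t, dt = 0.0, 0.001          # 1 ms master simulation step
while t < 0.2:
    for sensor in due_sensors(t, dt):
        print(f"{t + dt:0.3f} s: emit {sensor} frame")
    t += dt
```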

Jim Anderton:
Well, I’m glad you brought the notion of sensors because the industry appears to be settling into two sort of generic camps about sensors. One is led by a very famous global entrepreneur who started an electric car company who appears to favor an all camera, all machine vision approach. Others in the industry want to go multi-spectral. They want perhaps millimeter wave radars, they want lidar, multiple different sensors, different kinds of inputs. And some have gone so far as to say that true autonomous driving will never be achieved with a vision-only approach. And that’s a pretty powerful statement at this point. If you’re simulating this, how do you simulate multiple inputs from multiple different sensors that are delivering different kinds of information about the environment around the car? 

Mike Dempsey: So the way that we are doing it with AVSandbox is that we have what we call an instance, a simulation, of each sensor type. In each of those sensors, the virtual world that they see is adjusted so it looks right to that particular type of sensor, because a radar sees things very differently to a lidar or to a camera. So we need to make sure that the world we immerse each sensor into looks correct to that sensor. And then you’ve got to make sure that your models of those sensors include all the right effects; for a radar in particular, there’s a lot of bouncing going on with the radar signal, and we’ve got to make sure the simulation includes all of those reflection effects, because that gives you false targets, and the AI system has to be able to figure out what’s a real target and what’s a false target. Otherwise, it might end up trying to avoid the wrong thing. 

Jim Anderton:
Sure. Yeah. Ground clutter’s always been an issue for the radar people. Well, in this case, do these systems operate sequentially? As individuals, of course, with our eyes and our ears and our proprioceptors, we’re pretty parallel when we drive. We absorb a lot of information simultaneously, and we sort of make decisions. But I’ve always wondered, do the machines have to take inputs one at a time and then run an iterative decision-making process? 

Mike Dempsey:
No, they take all that data at once. There will perhaps be different processes that analyze the lidar data, the radar data and the camera data, to do object detection within them all, and then compare to see where the objects are: “Do all three of the sensors see an object at the same place?” Some people are even going down the route of, “Let’s just take all the raw data into one perception algorithm and have that look at it all at once.” But no, they’re not doing this sequentially. They are continuously looking at what the cameras are seeing, what the lidar and the radar are seeing. And really, the reason they’re heading towards using multiple different types of sensor is that they all have different strengths and weaknesses. 

Look at a lidar sensor, for example: it doesn’t behave particularly well when there’s lots of rain around, but a radar isn’t really affected by rain. Cameras, and our own sight, are affected by rain as well, the radar less so. So by having multiple different types of sensor, you can really take advantage of the different strengths of these systems, what they can do and what they can’t do, to extend when you can drive the vehicle. That redundancy is a key thing for a lot of the automotive manufacturers. They want to see that there are multiple redundant ways they can detect things in front of them. 
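[One toy way to express the cross-sensor check Mike mentions, "do all three of the sensors see an object at the same place?", is a gating-and-voting rule. The detections and the two-of-three threshold below are assumptions for illustration:]

```python
# Toy cross-sensor agreement check: a candidate object counts as confirmed
# only if at least two sensor types report it within a gating distance.
# The detections and the two-of-three rule are assumptions for illustration.
import math

def confirmed(detections, gate_m=1.5, min_sensors=2):
    """detections maps sensor name -> (x, y) position of a candidate object."""
    points = list(detections.values())
    agreeing = sum(
        1 for i, p in enumerate(points)
        if any(math.dist(p, q) <= gate_m
               for j, q in enumerate(points) if i != j)
    )
    return agreeing >= min_sensors

# A radar ghost far from the camera and lidar hits does not block confirmation:
print(confirmed({"camera": (10.0, 2.1), "lidar": (10.2, 2.0),
                 "radar": (42.0, -7.0)}))  # -> True
```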

Jim Anderton:
Well, I’m glad you brought up redundancy because of course that’s a critical issue for safety in any system like this. I think of redundant systems, pseudo AI, a classic example originally was a space shuttle which used a polling system of multiple processors that had to agree on the same result. And if one deviated, the others would then rerun the problem and potentially gang up and shut down the dissenting machine. That’s one way to approach redundancy. Another one is just to error proof the hell out of one system and then test and then go with confidence. Is there one way forward to creating redundancy in the way we need it? 

Mike Dempsey:
There’s probably not one way forward. I mean, all of those methods that you just talked about are already used within automotive to look at having error checks to make sure that sensors are behaving, that they’re seeing what we expect to see. I think that needs to happen more and more for an autonomous vehicle where you’ve got all these different types of sensor. Part of the reason that we are using redundancy is that, as I say, sometimes the sensors don’t work as well, particular weather conditions or particular types of object that sensor can’t see. So usually there is some sort of check within those systems to see “Does more than one sensor see this object and do they all see it in the same place?” Because that helps build confidence that it’s really there rather than just a ghost target or a graphic, a visual artifact that’s they picked up in the camera. 

Jim Anderton:
Mike, how much fidelity do you need to create in your simulation of the environment around the vehicle? I mean, can you simplify the problem down to almost a wire frame model or do you need to see every grain of sand on the roadway? 

Mike Dempsey:
You need a lot of detail in there, really, particularly when you get to the point of, “We want to use this simulation to prove the vehicle is safe.” So with AVSandbox, what we are creating are digital twins of real-world locations, and those are millimeter-accurate. That really grew out of what we did initially with Formula 1 and NASCAR, where motorsport needed to move into using driving simulation because they were no longer allowed to go and do physical testing. And so what they wanted were accurate models of their race tracks, all the tracks they wanted to go to. If those kerb stones or the bumps on the track were out by even a couple of millimeters, the drivers would complain that it wasn’t the same track, that it was slightly different, because they are so precise when they’re driving those cars. 

And so the virtual environment that we use was born out of that environment, that real push for a high-fidelity virtual world. And that really lends itself to what we’re now trying to do with autonomous vehicle simulation: using the same technology, the same capabilities, to build accurate models so that when we’re testing in the simulation, it looks like the real-world location to the autonomous vehicle. And that’s one of the key things: as we go into proving that something is safe, we’ve got to validate the simulation. You have to go through a step of doing tests in the real world, either on a proving ground or actually on a public road, and we’ve got to be able to recreate those inside the simulation environment, because we need to see that the autonomous vehicle responds the same in those scenarios. That gives you confidence that your simulation is working. And then you can start to extrapolate and look at all these edge cases that you wouldn’t want to do in the real world. 

Jim Anderton:
How complex are these simulations? Are we looking at a number of lines of code and an amount of processor power that rivals the actual self-driving system itself? 

Mike Dempsey:
Oh, yes. I mean, if we want to simulate an autonomous vehicle that perhaps has 10 cameras on it, several lidars, several radars, we are going to need several PCs, all running the very latest and most powerful graphics cards we can get hold of, to generate all of that virtual environment to immerse the AI system in. And so the simulators themselves may even be more powerful than what’s needed in the car, because we’ve got to generate all that vision feed that’s going in. When we get to the stage of trying to verify that the control system is correct, we’ve got to do all of that in real time. So with AVSandbox, we have the possibility of achieving real-time simulation, or we can run it as fast as possible, which might allow you to put much more detail in without having to meet that real-time requirement. 
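[The two execution modes Mike mentions, real time versus as fast as possible, differ only in whether the loop is paced to the wall clock. A minimal sketch, with a placeholder step function:]

```python
# Free-running versus real-time execution: the only difference is whether the
# loop is paced to the wall clock. The step function is a placeholder.
import time

def run(sim_step, dt, duration, real_time):
    t = 0.0
    wall_start = time.perf_counter()
    while t < duration:
        sim_step(t, dt)
        t += dt
        if real_time:
            # Sleep off any time left in this step so simulated time tracks
            # wall-clock time; a negative lag here is a real-time overrun.
            lag = t - (time.perf_counter() - wall_start)
            if lag > 0:
                time.sleep(lag)

run(lambda t, dt: None, dt=0.01, duration=0.1, real_time=True)  # paced run
```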

Jim Anderton:
Mike, talking about AVSandbox: all of us in the engineering community, of course, are very driven by cost. Time deadlines are always a factor, but overriding everything is cost. It’s clearly enormously expensive to real-world test autonomous vehicle systems. Enormous fleets of vehicles are necessary, and we mentioned the difficulty of finding all the edge cases and then even aggregating the data. Is there some way to estimate how much it’s possible to save, in time and in money, by using something like AVSandbox versus the iterative “let’s build a thousand vehicles and throw them on the street” approach? 

Mike Dempsey:
Well, that’s a good question. I mean, it’s really hard, I think, to put a number on it. But what you are doing with the real world testing is you’re just going out and driving miles and you are hoping that something interesting happens when you get a test scenario. That hoping that at a junction, something a little bit challenging happens and you can experience something new. In simulation, we can guarantee that if you start, you can design a scenario that within 30 seconds, you’ve put the car into something it’s not seen before. 

And then we can put it back into that scenario every time you make a change in a line of code, to see, “Now what does it do? Now what does it do?” So we can repeatedly, and much faster, put the vehicle into these scenarios. We can save a huge amount of time, a huge amount of effort, and really improve the robustness of the system by being able to do this over and over again in a repeatable way. 
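[That repeatable replay amounts to regression testing against a scenario library. A skeletal sketch, with hypothetical scenario names, a stubbed replay function and an assumed clearance threshold:]

```python
# Skeletal scenario-regression check: replay each edge case after a software
# change and enforce safety criteria. Scenario names, the replay stub and the
# clearance threshold are all hypothetical placeholders.
MIN_CLEARANCE_M = 0.5   # pass/fail threshold for closest approach (assumed)

def replay(scenario_id):
    """Stand-in for launching the simulator; returns summary metrics."""
    return {"collision": False, "min_clearance_m": 1.8}

def regression_check(scenario_ids):
    ok = True
    for sid in scenario_ids:
        result = replay(sid)
        if result["collision"] or result["min_clearance_m"] < MIN_CLEARANCE_M:
            print(f"{sid}: FAIL {result}")
            ok = False
    return ok

assert regression_check(["junction_cutin_03", "jaywalker_dusk_11"])
```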

Jim Anderton:
Mike, how about the human in the loop? I think of the aviation industry; of course, they pioneered simulation both for pilot training and as a development tool. In modern systems, Airbus comes to mind. I understand that it will refuse an irrational input by a clumsy pilot: it’ll simply analyze the state of the aircraft and the flight and prevent the human from entering the loop and doing something that would compromise the flight. Is that a possibility, do you think, with the human in the loop for autonomy? Is it possible to simulate what the person will do, regardless of what the machine will do? 

Mike Dempsey:
It’s certainly possible. Well, simulate the person, I don’t know. But certainly we can put the person into a driving simulator. So the sort of driving simulators that have been developed for these motorsport applications and are also used in road car development today, they can provide you with a really immersive environment. So we can put people into those simulators to see how do they react to the experience of being driven around. And it’s actually a really interesting sort of area of research to understand. We talk about proving these vehicles are safe, but what is safety? As an absolute, it’s that you don’t crash into something. But if we put you in an autonomous vehicle at 60 miles an hour down a freeway and it misses another vehicle by a quarter of an inch, technically it’s safe because it didn’t crash, but you would probably never get in that vehicle again because you would not feel safe. 

So there’s a whole load of interest there as to how do we define and figure out what makes us feel safe and what doesn’t. And so I think those kind of technologies will come to the fore and being able to do that we can again put humans into those edge cases and see how do they feel, are they happy with how the vehicle handled that? Or are they scared and therefore, right, we need to change how it handles particular situations. 

Jim Anderton:
Well, that is an interesting way to describe it. Is it possible that automakers, for example, could use simulation technology like this to determine consumer preferences and to see how potential motorists will react to the autonomous systems themselves? 

Mike Dempsey:
Yeah, I think so. We’ve been working on a project here in the UK which is funded by the UK government; the project is called Derisk. One of the partners, Imperial College in London, has done some work using VR headsets. They put members of the public into scenarios where there’s traffic coming and try to understand from those subjects how they felt, whether they felt safe and how they perceived the scenario. It’s some really interesting work that’s leading us down this path of what safety is and how we quantify it for an autonomous vehicle. And I think for the OEMs, there’s a huge amount of work to be done there, whether it’ll be done by universities or more by the OEMs. I think perhaps by the OEMs, because it will become something that defines your brand, how safe you are or how you handle things. 

Jim Anderton:
Mike, a final question, and a tough one: with the technology as it exists today, our ability to code, our hardware, our sensors, our processors, do we have the technology on the ground today to achieve truly autonomous driving based on what is available off the shelf right now? 

Mike Dempsey:
So yes, we can make the vehicles drive around and handle the day-to-day situations. The challenge is all of these edge cases, and how we are going to handle, and be sure that we can handle, these high-risk situations. That’s really where the challenge is now: getting to a point where we can be sure, and have that safety assurance, that the vehicles are there. We already see trials, and have done for a few years now, of vehicles handling routine driving; they can do that. It’s how we take the safety driver out and make sure they can handle everything else. 

Jim Anderton:
Mike Dempsey, managing director of Claytex, thanks for joining me on the show today. 

Mike Dempsey:
Thanks, Jim. 

Jim Anderton:
And thank you for joining us. See you again next time on Designing the Future.

Learn more about how simulation is driving the future of transportation.

The post Is Simulation the Way Forward to Fully Autonomous Driving? appeared first on Engineering.com.

]]>
Advanced Designs Need Advanced Design Tools https://www.engineering.com/advanced-designs-need-advanced-design-tools/ Thu, 12 May 2022 12:45:00 +0000 https://www.engineering.com/advanced-designs-need-advanced-design-tools/ Complex technologies are developed by processes that must cope with high levels of complexity, and sustainability adds a new dimension to the design challenge.

The post Advanced Designs Need Advanced Design Tools appeared first on Engineering.com.

]]>
This video was sponsored by TECHNIA.

The design of the technologies that we use every day, from cell phones to jet airliners, has changed significantly over the last decade, a change that is accelerating. The tools that help engineering professionals advance the state-of-the-art are changing too, and those tools are arriving just in time to address a new set of unforeseen challenges facing designers in the 2020s, notably climate change. How do these new tools intersect with the new technologies designed by those tools? Joining Jim Anderton to discuss the issues is Rolf Wiedmann, Managing Director at TECHNIA.

Learn more about how simulation-driven design can help make sustainable products.

The transcript below has been edited for clarity.

Jim Anderton: Hello, everyone, and welcome to Designing the Future. You know, the design of the technologies that we use every day, from cell phones to jet airliners, well, it’s changed significantly over the last decade, a change that is accelerating. Now, the tools that help engineering professionals advance the state of the art, they’re changing too. And those tools are arriving just in time to address a new set of unforeseen challenges facing designers in the 2020s, notably climate change. So how do these new tools interact with the new technologies designed by those tools? Joining me to discuss this important issue is Rolf Wiedmann, Managing Director of TECHNIA. Rolf, welcome to the show.

Rolf Wiedmann: Hi Jim, nice talking to you.

Jim Anderton: Rolf, let’s dive right into this, because there’s so much to talk about and these issues are so important. Let’s talk about sustainability. It’s an important subject through all the engineering disciplines these days. I mean, what do we mean by sustainability from an engineering perspective?

Rolf Wiedmann: Sustainability has matured. As of today, as we meet many engineers in our daily work, from OEMs to startups to larger tier-one and tier-two suppliers, we’ve seen that sustainability is no longer perceived as a cost factor. It’s perceived as a success factor, actually. So the attitude of engineering, the relevance, has changed significantly over time. We are seeing that it’s no longer an issue of cost; it’s more an issue of setting up the right products for the future, given the boundary conditions we see from governments and organizations. Also, in the behavior of our clients and end users, we see that sustainable products are accepted and, at the end of the day, are the most cost-effective things that customers can produce. So we see a significant change toward adopting sustainability concepts in daily work.

Jim Anderton: For many years, sustainability meant coping with waste: recycling, waste disposal. That’s how we thought of sustainability: if we dig a large hole, throw the rubbish in and cover it over, then we are sustainable. In Germany, the automotive industry led the way with a cradle-to-grave approach to this problem. BMW, famously, I understand, will dismantle cars at the end of their life and recycle the components. Now, this must put pressure on the design phase. I can imagine that designing a new automobile, for example, with 200 different kinds of polymer parts would be a nightmare to recycle at the end of its use. Do you see the pressures from the sustainability aspect starting early in the design phase?

Rolf Wiedmann: Even more true than before. Just to give you an example, if you look at state-of-the-art motors in electric vehicles, you see that the topic of rare earths is extremely important. Especially in times where, as we all know, a political crisis can happen any day, the question is: how am I independent of such situations? We've seen it with the chip crisis also. So our customers, and especially the OEMs, are striving to be in control of their product development, and they focus very much on being able to produce without being too reliant on third parties or supply chains.

Jim Anderton: That’s an interesting point. I know that, of course, there’s a CO2 footprint to a large supply chain as well. And in manufacturing, we moved from many years a just-in-time model of attempting to get component parts and assemblies delivered from many, many suppliers, all coinciding at the assembly point with exact perfect timing. And I think the COVID crisis has shown that’s a very fragile system. That’s easy to disrupt. Is there a sustainability issue there? Can you make an argument for purchasing more locally to reduce that carbon footprint?

Rolf Wiedmann: As I mentioned before, it's about being in control of supply chains. The logistics of producing are also changing currently, and new concepts are being applied. Certainly this puts some requirements on the engineering community: maybe they have to design more alternative products, more variants, to be more flexible. By starting that process earlier, they simply have more options when it comes to supply chain management later on. So it actually does have an impact on our engineering teams and the engineering community.

Jim Anderton: Rolf, when I was in the industry, design was really mostly about performance and cost. And that was it. You were tasked to design a component part or assembly that meets certain performance specifications, and to do so at the lowest cost. Things like sustainability were just not factored into that decision. How is that changing the mindset of design engineers now? Do they have to look at something and think, "Well, I would like to use nylon 6 for this part, or I'd like to use an aluminum alloy, but now I can't because there are implications at end of life"?

Rolf Wiedmann: That’s true. But as we know, our engineers are extremely creative and they are defining, let’s say, the footprint of the product in a very early stage. So they determine it and they create solutions for that one. And it’s another challenge. Yes, it might be easier to stick to known concepts. But I think this is also what our engineers like to be challenged and to develop alternative concepts, take into account what, at the end of the day, our clients and our customer’s clients want to buy. And again, sustainable products are in the mind of our customers, our end users. And so the success of a product is certainly defined by its sustainability and how it can be presented in the market, in that sense.

Jim Anderton: We’ve talked about sustainability in reducing the impact of designs that we make now. But of course, reducing CO2 itself has become a large industry with new technologies emerging to things like capture carbon, sequester it. Europe leaves the way in many of these technologies down here. A growth industry, is that a place where you think where we’re going to see some innovative engineering growing in the future?

Rolf Wiedmann: There is a constant dynamic currently. And it's very interesting to see: in all my time in the industry, I've never seen a phase where so many new initiatives at existing, traditional customers have emerged, driven by the momentum we see here. And so many new startups are getting into business, taking care of these new topics. So there's big dynamism and great innovation. Look especially at topics like the battery systems that have to be managed. Here too, if we relate back to engineering tasks, we see that topics like systems engineering are much more in the mindset of our engineers. Because it's not only creating the shapes of products, it's also predicting their behavior and optimizing it, with regard to efficiency, heating systems and so on.

Rolf Wiedmann: If you take the example of electro-mobility, we are not only in transportation; we serve different industries. But especially in transportation and mobility, it becomes a topic of heating systems, climate control and so on. That has also been the reason that we at TECHNIA just recently acquired a company called Claytex, which engages especially in simulation, and especially in the simulation of battery systems. We see our customers bringing us this topic on a daily basis.

Jim Anderton: Well, I hope that we’ll chat in a moment about simulation in greater depth. But one thing I want to touch on and you brought up the systems engineering is at cloud connectivity. Now we’re coming out of a COVID-19 global pandemic first time in a century that this kind of thing has happened. But most engineering teams work collaboratively and they work with meetings and they’ve historically worked face to face. And that’s been important. I mean, historically global firms, like yourself, often would fly people from continent to continent to make sure that those meetings could happen to move projects forward. We have software that allows us to meet virtually, much in the way we’re doing it now. But engineering is also about sharing technical ideas, including renderings, models, algorithms, entirely different world at this point. Are we going to see a future from this point forward, where we decentralize everything in engineering, everyone designs and works from home, and we use technologies like we’re using right now to replace the face to face meeting.

Rolf Wiedmann: You’re completely right with that topic. But take in a bigger context, maybe. This is more or less a social change that is taking place that not only affects the engineering community, but virtually all workforces that we are having. But with the difference that for the engineering community, they sometimes have to do the hard stuff, to do the heavy lifting in terms of, as you mentioned, designs and so on. What we believe in, actually also in our company and many of our clients, is offices will merge into kind of a social hub. So there will be a new environment in the companies. A place to meet for business reason, but also to socialize. And it won’t be thus frequented as before. But it will be vital that the engineering teams can meet then be creative, exchange ideas together in a spot, in that hub.

Will they work the full week in that hub? I don't think so. The good news is that, as of today, the tools are available in the market. We are heavily pushing forward the 3DEXPERIENCE platform from Dassault Systèmes here, to enable not only collaboration but really have the product described in a completely virtual way, combined with all the metadata. It's interesting: I've had calls from customers adopting the platform saying, "Hey, I can open up the design in the subway on the way to the customer, or on my way home." The adoption of these kinds of solutions has been pushed forward enormously by that change. Sometimes it takes such a dramatic change to create significant progress. And you mentioned the relation to cloud: this all needs to be cloud-enabled or else it's not feasible.

Rolf Wiedmann: And a big chunk of our business is in Germany. Traditionally, Germany is quite conservative about security and all this kind of stuff. But this has also changed in the minds of our customers, because today it is perceived as more secure to be on the cloud: less exposed to cyberattacks, in a sense, and easier to restore systems after an attack. They believe that the big providers that are around, like Azure, AWS and others, are at the end of the day more secure than the midsize businesses we have here locally. So there has been a major shift in the adoption of cloud, for technology reasons, to provide data in a distributed environment. But it has also changed dramatically in terms of security. Where it had been a negative point before, working on the cloud is now perceived as a benefit for security.

Jim Anderton: Well, I’m glad you brought up the issue of security because I’m hearing so much about it in our industry. It’s historically, of course, there have been trade secrets. There have been reasons to keep designs away from other people’s eyes. But there have also been issues with a customer. Specifically in areas such as aerospace and defense, where there may actually be legal requirements for keeping security of a specific design. I remember once visiting a division of a large aerospace manufacturer that we won’t mention that is based in Toulouse, France. And in that division, they actually were using specially made USB sticks and physically carrying designs from machine to machine to keep them away from the cloud out of security concerns. Yet by the same token, I see many firms today with current technology freely using internet cloud-based systems to move IT regulated designs. Designs that are quite secret in many cases down there. Is the confidence there? Do you feel now to stop worrying about hacking or attack from the cloud?

Rolf Wiedmann: We need to really distinguish between mission-critical things and the day-to-day work that our customers are doing. For sure there are standards, like TISAX and others, that regulate how and what data can be shared, and what the secured infrastructure to work with needs to be. So that is a given, and it will take some time; it's not the case that this is completely free now and everyone can collaborate. But what we see currently are examples from large tier one suppliers that are setting up hosted environments, secured within the company itself, and then connecting their teams around the world together to staff the big OEM projects that are currently coming up. So there are many ways: hosted cloud, public cloud. Certainly you have to find the right way to achieve the needed security. But on the other hand, the frontier is really being pushed forward currently, also in the relationship between the OEM and the supplier, because the benefits are simply there.

Jim Anderton: Rolf, so much to talk about in connectivity. But I'd like to touch on simulation. We're hearing so much about it now in its new, advanced forms. Of course, simulation has always been around; engineering is about simulation. We simulate in our minds. Historically, we would make small test articles or prototypes and test them, and we would iterate our way to success: start with an idea, then alter the design, alter, alter, alter, until we get something satisfactory. Now we're looking at a world where we could take almost a crazy idea, even a loose concept, and then, using simulation software, simply test it rather than develop it in our mind first. It feels like we're almost skipping a step, where we can simply try 10,000 different ways to do something rather than attempt to narrow the focus from the beginning. Are we going to change the way we approach this? Is this strictly about simulation to reduce cost? Or is simulation going to get us to an ideal design faster? Both?

Rolf Wiedmann: It’s all about the concept of try and fail fast. And this is extremely supported by new ways of simulating. Not only design in a classical sense that you have this stiffness or the load on a part, it’s actually much more. We see it. For example, the current industry, you see, there are so many different types of cars brought to the market that customers simply don’t have the time to really do the home location with all of these cars. So they need ways of doing virtual testing that is per se, given for them. And virtual testing has also increased not only just to look in the rear mirror. Can you see the background in the right way? No, it’s much more. It’s the definition of complex systems.

And as I mentioned with the term systems engineering and system behavior, you now have the chance to virtually simulate the performance of a complex system, including software. We all know how important software now is in the value chain of many companies; a lot of the value of the product, and also its maintainability, is determined by the software. And there are some US firms, especially in the car industry, that seem to have some competitive edge. But as I hear, some of the German carmakers are catching up.

Jim Anderton: Rolf, I think obliquely you're perhaps mentioning Volkswagen, with their big electric vehicle push, and their growth has been remarkable and, in many quarters, unexpected. But you brought up complexity, and I think that's a key issue here for firms like Volkswagen as a case in point. When simulation began to really take off, we noticed that designs became more complex. And we began to wonder: are systems more complex because simulation allows us to make things that are more complex, or is it the other way around? In a sense, are the algorithms basically pushing designers to make things that are more complex, because they can now optimize them at levels they couldn't before? I think of additive manufacturing, for example, where you can make a simple bracket that looks almost like an organic shape, very, very complex, too complex to design in conventional ways. So do you think complexity comes from this advanced software, or is it a natural part of using this software?

Rolf Wiedmann: There's an old saying: form follows function, from the Bauhaus style of before. So I don't actually think that the software drives complexity, as that would be a negative aspect. It's much more that, as I said, the variants are increasing, and we all know the lead times and cycles are getting much shorter. So, per se, the complexity is increasing. I don't think engineers add complexity just because they can manage it. I think the perfect design, the perfect function, is still what they look for, and it's a major principle of successful engineering. So I believe our community really has these values in mind. But as we have shorter cycles and more variants, simulation helps keep track of it.

And we are adding more components. I have an old-timer from 1975; there's not too much software in that one, I can tell you. But in new cars there is. So it's a given that in offering more functionality in products, we simply have more components, and we have to cope with software and all these kinds of things. These things go together. This is natural: complexity is evolving, and our systems need to keep pace to help us manage it in a controlled and secure way.

Jim Anderton: It’s funny you mention that, Rolf. When I was a teenager, the Bosch Jetronic injection on a Super Beetle seemed so complex that I removed it and installed carburetors.

Rolf Wiedmann: Okay.

Jim Anderton: I drive a Honda right now, and the technical manual for the Honda is 2,763 pages in length. So it has to be delivered as software, because it's simply impractical to print something like that. The complexity is wild. But I come from the automotive industry, and change is a natural part of all manufacturing. Configuration control, though, is always an essential part. And you could see this really in older, more experienced engineers, who were always very wary about constant modifications and changes to parts. Often they wanted to suppress this as much as possible. They'd tell a young, up-and-coming designer, "Yes, you can change it five different ways to make it better. But we must hold and wait for next year, or perhaps two years hence, when we do an overall system redesign, and we implement changes then."

Jim Anderton: And one reason was, of course, the cost of tracking these changes, revising blueprints and renderings, for example. We used to say they'd go through the alphabet, so many changes: A, B, C, D. And so there was always a desire to crush this, even if in their heart the engineer knew, "I can make this better, if they'll just let me." Are we at a point now where we can just release, or free, the engineer to go ahead and make those changes monthly, weekly, daily?

Rolf Wiedmann: You’re right. There’s a strong antagonism between what is possible and what is manufacturable on the other side. But this is a discussion we have having for many, many years now. Yes. True. But completely speaking, the new systems allow us to better keep track. I mentioned we are different industries as TECHNIA. One of our biggest industries is life sciences. And if you have an implant, for example, you need to keep track of it maybe longer as the projected lifetime of the patient is. And if you start to make a lot of changes in the product, and you’re not sure what is then delivered to the client at the end of the day, you are in big [inaudible 00:22:35] troubles. So especially in that industry, it is a strong momentum to use systems like engineering the platforms, the 3D piece platforms to really document where you are and have a concrete representation of the product and all its related data materials and so on.

So this is extremely important. And I think now that we have matured on that front, customers trust the systems enough to allow even more flexibility, to come quicker to a better design, and not to restrict the young engineer's idea to next year's release. By the way, it's the same thing with cars: automated updates are something customers very much appreciate, like with your phone, where you like to get a new update and new features on the fly, not only every three years when you buy a new phone. This is also something that keeps you engaged as a customer. In order to track all this, it's extremely important to have systems in place. And we see heavy investment from the big OEMs, especially in these types of platform systems, to keep track on the one end and allow flexibility on the other.

Jim Anderton: Rolf, I’d like to ask you to project into the future. Artificial intelligence, AI, we have to mention it. It’s on everyone’s mind everywhere. In all aspects of software, not just an engineering software at this point. And it is an aid which helps engineering professionals do their jobs better. Will it replace engineering as we know it now? Is the age of the designer going to disappear? Will generative design and AI simply take the tools out of the hand of the craftsman and do the job itself?

Rolf Wiedmann: No, no, definitely not. The role of the engineer is at the forefront of developing products, thinking of new products and initiating them; I think there's a strong position in our industry for that role. Will there be additional tools that make life easier? Yes. We've seen it with artificial intelligence in code generation. Why is it important that I code programs myself? Maybe I just talk and say what function I want, and it's directly coded in a language of choice. So it's a help. Just as when we introduced CAD systems many years ago, the engineers are still there. So I strongly believe in that community. It's growing, for sure. Everybody has to adapt their skills. One of our values at TECHNIA is "keep learning." We all need to keep learning, engage with new topics and, at the end of the day, be professional in what we are doing.

Jim Anderton: It’s an amazing future. Rolf Wiedmann, Managing Director at TECHNIA. Thanks for joining me on the show.

Rolf Wiedmann: Thank you for having me.

Jim Anderton: And thank you for joining us. See you next time on Designing the Future.

Learn more about how simulation-driven design can help make sustainable products.

The post Advanced Designs Need Advanced Design Tools appeared first on Engineering.com.

]]>
Advanced Tools, Advanced Sustainability https://www.engineering.com/advanced-tools-advanced-sustainability/ Fri, 22 Apr 2022 17:00:00 +0000 https://www.engineering.com/advanced-tools-advanced-sustainability/ New ways of managing the design process will accelerate the move to a clean, efficient economy.

The post Advanced Tools, Advanced Sustainability appeared first on Engineering.com.

]]>

This video was sponsored by SIEMENS.

For most of human history, the power to advance civilization came from muscle and wind. Human labor, draft animals and the use of wind in sails kept the pace of technological advancement slow, and the population grew slowly as well.

But 250 years ago, pioneers like Thomas Newcomen and James Watt developed technologies that exploited fossil fuels, notably coal, and discovered a way to harness the heat energy of combustion to create modern machines. The steam engine was the first of multiple technologies that converted heat into motion, delivering modern shipping, rail transportation, motor vehicles and finally aircraft.

The growth potential of these technologies appeared unlimited, but in the last few decades the environmental consequences of fossil fuel combustion have become clear: there is a price to be paid for unlimited use of carbon. But the demand for progress is relentless. Early engineers created the technologies that threaten the environment today, and today's engineers are tasked with finding the solutions. The goal is sustainability, and the race is on to make maximum use of available technologies to transition carbon-based economies to cleaner alternatives without serious disruption of modern life.

To do this, advanced engineering tools, and new ways of thinking about power, work and energy, are needed. Jim Anderton discusses these important issues with Chad Ghalamzan, Marketing Manager at Siemens Digital Industries Software, and Stephen Ferguson, Director of Marketing Content for Siemens PLM Software.

In the coming decades, we will have to engineer a future free of fossil fuels. This transition is one of the biggest engineering challenges that our planet is facing, and will only be possible through the extensive use of simulation and test. Learn how Simcenter is helping to engineer a low-carbon future.

The post Advanced Tools, Advanced Sustainability appeared first on Engineering.com.

]]>
Towards the Perfect Design: Simulation for Faster, Optimized Engineering https://www.engineering.com/towards-the-perfect-design-simulation-for-faster-optimized-engineering/ Mon, 21 Mar 2022 17:15:00 +0000 https://www.engineering.com/towards-the-perfect-design-simulation-for-faster-optimized-engineering/ TECHNIA’s Johan Kölfors describes how simulation is changing the way design engineers approach their craft.

The post Towards the Perfect Design: Simulation for Faster, Optimized Engineering appeared first on Engineering.com.

]]>

This video was sponsored by TECHNIA.

Engineers today face multiple challenges. Designs must be optimized for low cost, light weight, long life and environmental sustainability. These requirements frequently conflict, but the search for the optimum design goes on regardless. New plastic resins, metal alloys and composite materials offer significant performance advantages over commodity materials, adding options for designers. And beyond new materials, new processes such as 3D printing offer design engineers new flexibility in making parts and products of extreme complexity: parts that could not be made with conventional processes at any cost. Simulation is the key to unlocking the potential of new materials and processes. TECHNIA director of simulation Johan Kölfors discusses the state of the art with engineering.com’s Jim Anderton.

Learn more about how simulation-driven design can help make sustainable products.

The transcript below has been edited for clarity.

Jim Anderton: Hello everyone, and welcome to Designing the Future. The object of engineering design is simple: take an idea, then develop it, render it and release it for production or deployment. For most of engineering history, testing the safety, practicality and usefulness of a design was a matter of redesign, testing and prototyping. It's an expensive way to achieve perfection, but modern tools allow engineers to iterate their way to success virtually, and the simulation tools can do more than just virtually test parts. Joining me today to talk about how these modern tools are changing the way engineering professionals approach their art is Johan Kölfors, Director of Simulation for TECHNIA, the global Dassault Systèmes partner. Johan is a civil engineer and holds an MSc in Civil Engineering from Lund University in Lund, Sweden. Johan, welcome to the show.

Johan Kölfors: Thank you, James. Nice to be here.

Jim Anderton: Johan, tell me a little bit about TECHNIA.

Johan Kölfors: Yeah. TECHNIA is an expert company in the field of PLM, engineering and simulation, and I'm responsible for the simulation team, TECHNIA Simulation. We are 60 simulation experts today, covering 14 different countries, with expertise in a wide range of simulation domains, helping our customers use simulation to create better and more sustainable products.

Jim Anderton: Johan, simulation is such a broad topic. There’s so much to talk about. It’s hard to even know where to start, but one thing that we talk a lot about now in the Americas and Europe and Asia, everywhere, is sustainability. We know that we’re working toward decarbonization of many aspects of society, of the economy. This has ripple effects in everything, from the way we make concrete to the way we make airliners. Can you tell me a little bit about what you feel the role of simulation is in approaching the sustainability problem?

Johan Kölfors: Yeah, true. I think most companies have it high on their agenda to develop better, more sustainable products, and I'm convinced that simulation has a very important role to play in making that happen. Simulation makes it possible to explore more design alternatives within a certain space of time. This makes it possible to optimize products, to make them lighter, to use less material and less energy, for instance. So simulation is, for sure, very important in the creation of more sustainable products. Something I would point out is the importance of introducing simulation earlier in the design process to really get the full benefit out of it. Many companies still use simulation quite late in the design process, using it more for validation of a design and maybe to replace some physical prototyping with virtual prototyping. But to get the full benefit out of simulation, it should be used throughout the design process, when you still have a lot of design options available.
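
To make the idea of "exploring more design alternatives" concrete, here is a minimal sketch of a parameter sweep against a cheap analytical model, keeping the lightest variant that still meets a stress constraint. The cantilever-beam formula is textbook; every numeric value and variable name is an illustrative assumption, not output from TECHNIA's or Dassault Systèmes' tools.

```python
# A minimal sketch of simulation-driven design exploration: sweep a design
# parameter, check each variant against a constraint, keep the lightest one.
# All values below are illustrative assumptions for a cantilever beam.
F = 2_000.0          # tip load, N (assumed)
L = 0.5              # beam length, m (assumed)
B = 0.04             # section width, m (assumed)
RHO = 2700.0         # aluminum density, kg/m^3
SIGMA_ALLOW = 120e6  # allowable bending stress, Pa (assumed)

candidates = []
for h_mm in range(5, 51):                 # sweep section height 5..50 mm
    h = h_mm / 1000.0
    sigma = 6 * F * L / (B * h ** 2)      # max bending stress, rectangular section
    if sigma <= SIGMA_ALLOW:              # feasible design?
        mass = RHO * B * h * L            # kg
        candidates.append((mass, h_mm, sigma))

mass, h_mm, sigma = min(candidates)       # lightest feasible variant
print(f"h = {h_mm} mm, mass = {mass:.2f} kg, stress = {sigma/1e6:.0f} MPa")
```

A real workflow would replace the one-line stress formula with a finite element solve, but the loop structure, evaluate many variants cheaply and keep the best feasible one, is the same idea.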

Jim Anderton: As in my introduction, many people think of simulation, as you described it, as just a validation and testing tool: I design the part, I virtually test it using simulation, and then I discover, all right, I have a problem, so I go back and redesign in that conventional iterative way. When you talk about integrating simulation earlier in the design process, are you referring to integrating it into the concept stage of design, or into the point where you're testing sub-assemblies or individual parts rather than all-up testing of a whole design?

Johan Kölfors: It could, in many cases, be introduced as early as the concept phase, absolutely, using maybe simpler simulation strategies or models, and then introducing more complexity into the models later in the design process. But in order to do this, it is important to introduce what we call the MODSIM concept, where we make simulation available not only to simulation experts, but also to the design engineers who are involved earlier in the project. We help a lot of our customers implement this MODSIM concept, in most cases based on the 3DEXPERIENCE platform from Dassault Systèmes, making different types of simulation roles available for different types of users in the organization. So there are smaller subsets of the simulation tools for design engineers, with guided workflows to make them easier to use and easier to learn, and then the full set of functionality for the experts. But everybody is using the same data, and it's based on the CAD data, so there is no need for exchanging CAD geometry files and so on.

Jim Anderton: We always, of course, think about the high-profile industries when we think about any of these advanced engineering tools like simulation: we think of aerospace, we think of automotive. But there's a broad spectrum of industries that use design, from the process industries to, even these days, areas like agriculture, areas that are not traditionally thought of as conventional part-making industries. Does simulation have a role in areas outside those conventional metal-cutting, traditional engineering fields?

Johan Kölfors: Absolutely. One area that is growing very quickly is life sciences and medical equipment. So I would say that simulation would be beneficial in most industries.

Jim Anderton: Johan, these days 3D printing is very popular. It's moving beyond just a prototyping technology into a production technology, and now parts are being made this way. Additive manufacturing allows a designer the freedom to design parts that are radically different in shape and form from the traditional way we think of the triangle of forces, and some of these parts are very organically shaped. They look almost like biological structures, like the bones of an animal, perhaps. Does simulation have a role to play in this new way of part making?

Johan Kölfors: Absolutely. There are several good software packages on the market for topology optimization, for instance. The value of these kinds of simulations might have been a little limited before because of the limitations we had in manufacturing methods, but with 3D printing you can really make any type of shape, and you can use the full power of optimization simulations. So I'm sure we will see a lot more consumer products that are 3D printed now that the technique is getting more mature and less expensive to use.

Jim Anderton: I’m glad you brought up that point about consumer goods, because we have a new generation of consumer goods now, which are not only made of more sophisticated material with more sophisticated manufacturing processes, but we expect them to be connected now. We’re looking at a future where the shoes in our feet to the appliances that we use every day are all connected through the internet, and they’re moving information back and forth. Does simulation have a role to play in this new connected product design?

Johan Kölfors: Absolutely. I think that is the single fastest-growing domain we see right now, the electromagnetic domain. As you said, more and more products are getting wirelessly connected, and we see the trends with electric vehicles and so on. Still, a lot of companies in these industries spend a lot of time and money on physical prototyping, but they have the same option to use virtual simulation to develop new products, to develop antennas and sensors of different types.

Jim Anderton: This, of course, is a radio frequency world we're talking about, which, even within the realm of electronics, has historically been sort of a black magic, operating on different levels than, say, just optimizing a schematic. Do you see the simulation tools being equally applicable to the people working in those more obscure areas of RF technology or connectivity, compared to those working on, for example, a durable elastomer or rubber for the sole of a shoe? Is it the same simulation tool used by those radically different designers doing different things, or is the tool itself different for each?

Johan Kölfors: The tools could be different, but at the same time, as I mentioned before, we work mainly with the 3DEXPERIENCE platform from Dassault Systèmes. That is an environment where all the products from Dassault Systèmes are implemented together, everything from PLM to CAD and all the different simulation tools. So the solver technology could be a little different from application to application, but the interface and the usage are more or less the same.

Jim Anderton: From a product development perspective, Johan, product or even part development is often constrained by the need to iterate the design to whatever level of perfection we desire, whether it's reliability, safety, whatever the parameters are. Historically, that often meant making multiple prototypes, physically testing them to check performance, then going back and redesigning. So that iterative system was often controlled by physical factors: your ability to have a prototype shop make something, and then to go and test it. With simulation, we're possibly looking at a world where we're going to compress that timeline. Do you think simulation is going to make the design phase a much smaller part of the overall product development timeline? Will design simply become something that happens lightning fast?

Johan Kölfors: Yeah, like you say, the iterations will be quicker, and you could use that to shorten the design time. But you could also use it to explore more design alternatives, to really find the best and most optimal design for your product. I think in most cases there will be some kind of mix: you're trying to get products to market quicker, but it's also important that you really have the best, most optimized product on the market.

Jim Anderton: One of the ways, of course, to keep development within an acceptable timeline and acceptable cost is to try to reduce the number of iterations. Historically, many designers have been quite conservative with a design to make sure they could meet those cost and time constraints. Will simulation free designers to be a little more radical in the way they think? Is it possible now to try ideas that may have seemed crazy, just to see if they work, and then take it from there?

Johan Kölfors: Yeah, I think so. It wasn’t possible before because it… You need to be quite on track from the beginning. But since you have, you can easily get some results from simulation and explore completely new design concepts. It’s possible also to find completely new and more innovative designs using simulation.

Jim Anderton: Johan, we talk about additive manufacturing as a hot trend, a hot technology. We talked about the internet of things and connectivity. Cloud connectivity is on everyone's lips; everyone's talking about it. We now think of collaboration, especially given the constraints of COVID, in terms of teams and people working remotely, people using software as a service rather than owning a seat in the traditional way. Tell me about the cloud. How does it factor into advanced simulation in this modern engineering world?

Johan Kölfors: Absolutely, simulation is following the same trends, and more and more simulation products are available as cloud solutions. Cloud computing is a really hot trend, and we work together with Dassault in promoting the cloud solutions. This is important for many companies that are new to simulation, because simulation used to require quite heavy investments in computer hardware and so on. With the cloud solutions, you pay when you use it, and you don't need to invest in your own computer hardware. So it will be important in making simulation available to more companies.

Jim Anderton: I’m glad you brought that up because it’s… One thing we’ve noticed historically with all computer-aided design, CAD/CAM, is that early adopters were often very large organizations. They were large airplane companies, automotive companies, companies that had the resources to buy very expensive hardware and software and buy a lot of seats and then sort of put a team together. Then the tier one supplier community that fed them were somewhere large firms. There were the Bosches and Continentals of the world that had similar power to their customers, but behind them were another layer of many more suppliers that were smaller firms. And as we went down the supply chain, the firms became a little bit smaller, and it became more difficult for those firms to use the same high technology tools because they were expensive. You’re talking about a world in which you can sort of pay as you play and you can buy what you need, but not necessarily buy the, a very expensive suite, right out of the gate. Is this something that smaller firms can use to sort of bring up to the level of the large customers?

Johan Kölfors: Absolutely. We have a lot of startup companies. They can be up and running with CAD and simulation within a couple of hours, I would say, or days. So it is a game changer for many companies. And importantly, it is the same simulation technology behind the cloud solutions; there is no difference in software capability. It's the same, only more available and more affordable.

Jim Anderton: Johan, we’ve talked about how there’s a democratizing quality to this simulation software and the smaller works can use it as well. Does this work all the way down to perhaps to the individual consulting engineer working by himself or herself at home, perhaps?

Johan Kölfors: Absolutely, especially with the 3DEXPERIENCE platform. You could be located anywhere and collaborate with colleagues in different countries, different offices and so on; it doesn't matter, you will still work on the same data. So you could have a design team in one country and the simulation team working from another country.

Jim Anderton: Johan, for engineers and designers using current tools, is it difficult to train on new tools such as yours? Can those familiar with current CAD/CAM technologies train quickly?

Johan Kölfors: If you have been using simulation previously in your career, it's easy to learn a new simulation tool. But this is also an important part of the TECHNIA Simulation offer: giving good training to companies to train their personnel. And we have a frequent training schedule for that.

Jim Anderton: Johan, one final question. It's an exciting future. Where do you see simulation 20, 30, perhaps 40 years from now? Will it be radically different from what we see today?

Johan Kölfors: Yeah. In the future, I'm sure we will see simulation being very important knowledge and a key part of the product development process in most companies. In fact, it will be a natural part of most companies. And that will also be, as we said before, a key to making better and more sustainable products.

Jim Anderton: Johan Kölfors, thanks for joining me on the show today.

Johan Kölfors: Thank you, Jim.

Jim Anderton: And thank you for watching Designing the Future. See you next time.

Learn more about how simulation-driven design can help make sustainable products.

The post Towards the Perfect Design: Simulation for Faster, Optimized Engineering appeared first on Engineering.com.

]]>
Finding Success With Digital Twins https://www.engineering.com/finding-success-with-digital-twins/ Sun, 12 Dec 2021 12:12:00 +0000 https://www.engineering.com/finding-success-with-digital-twins/ ANSYS’s Manzoor Tiwana explains the “crawl, walk, run” approach to starting with digital twins.

The post Finding Success With Digital Twins appeared first on Engineering.com.

]]>

This video was sponsored by ANSYS.

Digital twins present an exciting opportunity across industries, but for many organizations, it’s not always clear where to start. Manzoor Tiwana, Lead Product Manager at ANSYS, has some advice: look for the low-hanging fruit. 

On this episode of Designing the Future, Tiwana takes us through several successful implementations of digital twins and offers a roadmap for getting started. He also provides the latest updates from the Digital Twin Consortium, elaborates on the concepts of virtual sensors and model order reduction, offers insight into the evolution of ANSYS Twin Builder, and speculates on what role digital twins may play in the future.

Learn more about ANSYS Twin Builder and start a free trial at ANSYS.com.

The following transcript has been edited for clarity.

Michael Alba: Hey everybody, and welcome to Designing the Future. Today, we're going to dig deeper into digital twins to discover some successful implementations and typical pathways to success. We're joined by Manzoor Tiwana, lead product manager at simulation company Ansys. Manzoor oversees Ansys Twin Builder, the company's product for creating digital twins, and he's previously held positions at Autodesk, MathWorks and Bosch. He's got an MBA from Carnegie Mellon, a master's in automotive engineering from FHT Esslingen, and a bachelor's in mechanical engineering from UET Lahore. Manzoor, thanks for coming on the show today.

Manzoor Tiwana: Thanks for having me.

Michael Alba: So, you’ve been in the industry long enough, I’m guessing to have seen maybe a few different interpretations of the digital twin. Could we just start today by going back to basics? Could you tell me how does Ansys define the digital twin today?

Manzoor Tiwana: Yeah. As you said, the term digital twin has been used in a lot of different contexts in the industry. Ansys is part of the Digital Twin Consortium, and we have adopted its definition: a digital twin is a virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity. What that means is that on one hand you have your physical asset, and on the other hand you have your model, the virtual replica or digital twin, and you connect them with sensors. With the help of this model, you can track the past, what happened to your asset. You can generate deeper insight into the present, how your asset is performing. And you can also predict and influence future behavior.

The thing that is unique to Ansys is that it allows you to build these models from physics, what we call simulation-based models, and we can also add data analytics to create unique insights into your operations.
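
As an illustration of that definition, here is a minimal sketch of the synchronization loop a digital twin implies: read the asset's sensors at a specified frequency, update the virtual replica, and use it to track the past, report the present and forecast the future. The class, the stub sensor function and the naive forecast are all hypothetical, not Ansys code.

```python
# A toy digital twin sync loop: synchronize a virtual replica with its
# physical asset at a chosen frequency, then use it to look back and ahead.
import time

SYNC_PERIOD_S = 60                        # "specified frequency" (assumed)

class PumpTwin:
    """Hypothetical virtual replica tracking one state variable of a pump."""
    def __init__(self):
        self.history = []                 # track the past
        self.state = {"flow_lpm": 0.0}    # insight into the present

    def synchronize(self, measurement: dict) -> None:
        self.state.update(measurement)
        self.history.append(dict(measurement))

    def predict_next(self) -> float:
        # Influence the future: naive linear forecast from the last two samples.
        if len(self.history) < 2:
            return self.state["flow_lpm"]
        a, b = self.history[-2]["flow_lpm"], self.history[-1]["flow_lpm"]
        return b + (b - a)

def read_sensors() -> dict:
    return {"flow_lpm": 120.0}            # stub for a real sensor connection

twin = PumpTwin()
for _ in range(3):                        # in practice: an endless loop
    twin.synchronize(read_sensors())
    print("now:", twin.state, "next:", twin.predict_next())
    time.sleep(0.01)                      # would be SYNC_PERIOD_S in a real system
```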

Michael Alba: So, what are some typical applications you’ve seen from your customers?

Manzoor Tiwana: We have seen large-scale adoption in several industries, from automotive to automation. But there are three major application areas where we have seen a lot of traction. Number one is industrial flow networks. This includes the performance of overall fluid networks, the performance of rotating machines like pumps, compressors and turbines, and process optimization of fluid mixing and blending.

The second area where we have seen a lot of traction is the automation industry and electric drives. Some typical applications include the performance and thermal management of the drives. Also, in the EV and HEV space, in automotive, we have seen the thermal management of batteries and battery packs, and also the complete EV powertrain.

And there’s this new or emerging technology or application what we are seeing in the industry, is also the heating and cooling application, like optimization of HVAC systems, zero carbon, and carbon capture. These are all the application what we are seeing in the industry.

Michael Alba: So, wide-ranging. Are there any particularly exemplary success stories you've seen with any of your partners that would illustrate this concept in a little more detail?

Manzoor Tiwana: Sure. As I said, we have seen large-scale adoption from automation to automotive. We cannot talk about all of our customers, as some have not signed disclosure agreements with us, but some that have done a press release with us are ABB, EDF, A123 and Volkswagen, to name a few.

One of the more recent stories is ENGIE. They are a world-leading supplier of energy efficiency services, helping their customers transition from traditional carbon-based energy to carbon-free energy, and they are employing digital twins to reach that goal, to improve and optimize the combustion process toward a zero-carbon target. In addition, we are working with a number of partners in the industry, like Microsoft and Rockwell Automation, to deliver the solution to our customers.

Michael Alba: So, just on the topic of those partners, you are one of the founding members of the Digital Twin Consortium, which I believe began early last year. Could you update us on the status of the Digital Twin Consortium and Ansys's role in it?

Manzoor Tiwana: Ansys is a founding member of the Digital Twin Consortium, along with companies like Microsoft, Dell and GE. The goal is to drive the development and adoption of digital twin technologies, and also to drive common terminology and standards. One example is the development of the Digital Twins Definition Language, or DTDL; Ansys is collaborating with Microsoft on DTDL. We are also working with Microsoft to create reusable reference architectures. The aim is to drive this standardization and help IoT solutions talk to each other, so you can combine multiple solutions into a single one.

Michael Alba: And how far along is this project? I mean, how mature is digital twin technology at this point? Are there still a lot of technological obstacles to overcome before we can really get to the vision that you’ve outlined?

Manzoor Tiwana: I wouldn't call digital twins an emerging technology anymore. We have used that term, but I think the technology has established itself in this area. Having said that, several companies are still on the journey to explore and adopt digital twins. One of the challenges companies face is on the people side: the organization needs to evolve to adopt this technology. On the technical side, the typical challenges are where to put the sensors, how to collect the data, and how to model the digital twins from which they create those insights.

These challenges are most acute if you are basing your digital twin on analytics only: it requires you to collect a large volume of process data for training, and the accuracy might be insufficient, limited to the observed data and the available sensors. On the other hand, Ansys provides physics-based models and virtual sensors that can integrate your physics-based model and data analytics together to generate a better understanding of your operations.

Michael Alba: Could you elaborate on that concept of virtual sensors? What does that mean?

Manzoor Tiwana: With a virtual sensor, in physics-based modeling, you can predict quantities that are not measured directly by physical sensors. To help you understand, I'll use the example of an electric motor. Say you have an electric motor and you want to find out what the temperature of the rotor is, or the temperature inside the motor. One way is to put a physical sensor inside, but that is not always feasible or cost effective, and sometimes it's physically not even possible. So with the help of other sensors, for example the current, you can predict the temperature inside the rotor, because you know the properties of the motor's materials.

And with the help of some sensors, you can predict other quantities, like temperature in this case. If the load increases, the current in the motor increases, and with these models you can predict what the temperature is now, what it's going to be in the future, and, if you keep using the motor, how long you have until it reaches the critical temperature. All these kinds of things you can do with physics-based virtual sensors.
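
To make Tiwana's motor example concrete, here is a minimal sketch of a physics-based virtual sensor: a lumped-parameter thermal model that estimates rotor temperature from measured current and predicts the time until a critical temperature is reached. All parameter values and names are illustrative assumptions, not Ansys's models or data from a real motor.

```python
# A minimal virtual-sensor sketch: estimate rotor temperature from measured
# current using an assumed single-node thermal model (I^2*R heating against
# a thermal resistance and capacitance). All values are illustrative.
R_WINDING = 0.35    # winding resistance, ohm (assumed)
R_TH = 1.2          # thermal resistance to ambient, K/W (assumed)
C_TH = 450.0        # lumped thermal capacitance, J/K (assumed)
T_AMBIENT = 25.0    # deg C
T_CRITICAL = 155.0  # insulation limit, deg C (illustrative)

def step_temperature(t_rotor: float, current: float, dt: float) -> float:
    """Advance the rotor temperature estimate by dt seconds (explicit Euler)."""
    p_loss = current ** 2 * R_WINDING        # I^2*R heating, W
    p_out = (t_rotor - T_AMBIENT) / R_TH     # heat flow to ambient, W
    return t_rotor + dt * (p_loss - p_out) / C_TH

def time_to_critical(t_now: float, current: float, dt: float = 1.0) -> float:
    """Predict seconds until T_CRITICAL if the load current stays constant."""
    # Steady-state temperature under this load; if it stays below the
    # limit, the critical temperature is never reached.
    t_steady = T_AMBIENT + current ** 2 * R_WINDING * R_TH
    if t_steady <= T_CRITICAL:
        return float("inf")
    t, elapsed = t_now, 0.0
    while t < T_CRITICAL:
        t = step_temperature(t, current, dt)
        elapsed += dt
    return elapsed

# Example: a load increase pushes the measured current to 22 A.
print(f"{time_to_critical(t_now=90.0, current=22.0):.0f} s to critical")
```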

Michael Alba: So, in your digital and physical twin, you've got a combination of physical and virtual sensors giving you a complete picture of the system. What's the balance between those two types of sensors? Could you theoretically have all physical sensors and still make use of a digital twin?

Manzoor Tiwana: Of course, all your data could come from physical sensors, but as I said before, it's not always possible to have those sensors there, and it's usually not very cost effective to put a lot of sensors inside. One example: some of our customers have brownfield applications, large installations built 10 or 20 years ago, where they now want to employ digital twins. To do this, they would have to put in new sensors, and the cost of doing so is prohibitive. But with a few sensors, say a flow sensor, and these digital twins, you can predict how your system is going to behave. So you can use a subset of sensors but still get the full accuracy.

Michael Alba: So, you’re talking about essentially simplifying the complex system and this is an important point, I think you bring it up often when talking about Twin Builder is this concept of model order reduction. Can you tell us about that and how it’s achieved in Twin Builder?

Manzoor Tiwana: A reduced order model, or ROM as we call it, is a simplification of 3D physics. As you know, Ansys is a leading authority in the simulation of 3D physics. If you have a 3D physics model and you want to reuse it from a system simulation perspective, you can simplify that model using reduced order modeling technology without losing essential accuracy. Most 3D physics takes a lot of time to simulate, which is not feasible if you want to generate insights in real time or near real time. So you can take these 3D physics models, simplify them without losing accuracy, and use them in real time or near real time to generate those insights.
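
For readers who want the flavor of the technique, here is a minimal sketch of one textbook approach to model order reduction: proper orthogonal decomposition (POD) with Galerkin projection on a linear model. It is a generic illustration under stated assumptions, not Twin Builder's actual internal algorithm, and the stand-in matrices replace what would come from a discretized 3D physics model.

```python
# Textbook model order reduction sketch: collect snapshots from a full-order
# linear model x' = A x + B u, build a POD basis, project to a tiny model.
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 10                      # full and reduced state dimensions

# Stand-in full-order model (a real one would come from a 3D physics mesh).
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

# 1) Collect snapshots of the full state during a training simulation.
dt, x = 0.01, np.zeros(n)
snapshots = []
for k in range(200):
    x = x + dt * (A @ x + B[:, 0] * np.sin(0.1 * k))   # explicit Euler
    snapshots.append(x.copy())

# 2) POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(np.array(snapshots).T, full_matrices=False)
V = U[:, :r]                        # n x r projection basis

# 3) Galerkin projection yields an r x r model cheap enough for real time.
A_r = V.T @ A @ V
B_r = V.T @ B

# 4) Run the reduced model, then lift back to full space for outputs.
xr = np.zeros(r)
for k in range(200):
    xr = xr + dt * (A_r @ xr + B_r[:, 0] * np.sin(0.1 * k))
x_approx = V @ xr                   # approximate full-order state
```

The reduced system is 10 x 10 instead of 500 x 500, which is what makes evaluating it at every sensor update feasible on an edge device.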

Michael Alba: And is that process of reduction something I would do in Twin Builder, or do I reduce my model and then bring it into Twin Builder? Could you walk me through how that works?

Manzoor Tiwana: Yes, sure. To generate these reduced order models, you can use Ansys physics, but it's not limited to that; you can also use any third-party simulation software. You generate inputs and outputs, data on how your system is behaving. Then you bring that data inside Twin Builder, and you create the reduced order models inside Twin Builder. Once you have created those models, you can export them as what we call a twin file, and you can deploy those reduced order models together with other system simulation capabilities, on the cloud, on the edge or on any IoT platform.

Michael Alba: What is a typical journey for your customers in implementing a digital twin? To get started from zero, what do they usually do?

Manzoor Tiwana: We recommend a crawl-walk-run approach to our customers. The very first step is to identify an application that is a low-hanging fruit. That means it has an economic impact, which can come in two ways. Either the asset itself is very expensive, so any wear or damage can be very costly: a wind turbine, for example, is extremely costly, and any damage or wear to it can be really expensive; a water turbine is another example. Or the asset might not be very expensive, but the downtime is very costly. For example, if you have a pump that is pumping oil from undersea and it fails, the economic impact of the downtime is huge.

The asset might be a few hundred thousand dollars, but the downtime could be millions of dollars. So the first step is to identify an asset or process that has a large economic impact, either through downtime or through its own cost. The second step is a quick POC: work with Ansys to create a quick proof of concept, which takes four to six weeks. After this proof of concept, present it to management and gather the lessons learned. And last, after those lessons learned, we recommend you scale to other applications, to the whole department or to other departments in the company.

Michael Alba: Now, when you talk about this low-hanging fruit that a company should start with, and you look broadly at all the possible applications of digital twins, is there anything you see as really ripe for digital twins that nobody seems to have taken advantage of yet?

Manzoor Tiwana: As I said, we have seen a lot of applications in all industries. One application that is catching my attention, where I have seen a lot of adoption, is around heating and cooling: refrigeration, HVAC applications, zero carbon and carbon capture. Navantia is one of our customers that has adopted a simulation-based digital twin that helps them monitor the performance and manage the maintenance of the HVAC systems on big ships. With the help of this digital twin, they're increasing productivity, reducing downtime, and it also helps them with the maintenance of their equipment.

Michael Alba: Now, Ansys launched Twin Builder back in 2018, so it's been live for a few years now, and I'm sure you've refined it over those years. Could you tell us how Twin Builder has evolved since its launch and what lessons you've learned along the way?

Manzoor Tiwana: Yeah, as you said, we launched in 2018, and Ansys has established its position as a leader in this space over the past three or four years. One key aspect of our go-to-market was to establish key partnerships with the IoT leaders in the industry, companies like Microsoft, PTC, SAP and Rockwell Automation: to build relationships with the infrastructure providers, in terms of cloud infrastructure or asset infrastructure, and to develop connectors, out-of-the-box implementations, so users can take these digital twins and deploy them on the IoT platforms they have already invested in.

We are also working to develop design patterns and best practices. Another thing we have evolved to adopt is data analytics. In the upcoming release, we are calling this hybrid data analytics, or hybrid analytics, and it allows you to combine physics with data. That means you can get the best of both worlds: with these simulation models, you can bring in test results and tune and calibrate your model so its outputs match the real-world outputs.
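
As a rough illustration of that tune-and-calibrate step, here is a minimal sketch of fitting a physics model's parameters to measured outputs with least squares. The model form, names and numbers are illustrative assumptions, not Ansys's actual hybrid-analytics API.

```python
# Calibrating a physics model against measured data: adjust the model's
# parameters until its outputs match the real-world outputs (least squares).
import numpy as np
from scipy.optimize import least_squares

def physics_model(params, t):
    """Assumed first-order step response: y(t) = gain * (1 - exp(-t / tau))."""
    gain, tau = params
    return gain * (1.0 - np.exp(-t / tau))

# Measured outputs from the real asset (here, synthetic data with noise
# standing in for test results).
t_meas = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(1)
y_meas = physics_model([2.0, 1.5], t_meas) + 0.05 * rng.standard_normal(50)

# Tune the model so its outputs match the measurements.
fit = least_squares(
    lambda p: physics_model(p, t_meas) - y_meas,   # residuals to minimize
    x0=[1.0, 1.0],                                 # initial parameter guess
)
print("calibrated gain, tau:", fit.x)              # recovers roughly [2.0, 1.5]
```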

Michael Alba: Manzoor, I have one more question for you, and I'm going to ask you to put on your future-looking hat for this one. If you look into the future with this concept of digital twins in hand, how far do you think it's going to go? For instance, if I buy a car 10 years from now, am I going to get a file as well, a digital twin of that car that I can keep an eye on, or of a house or something like that? Will digital twins start to accompany most or all of the physical things we make in the world, in your opinion?

Manzoor Tiwana: The immediate applications, as I said, the low-hanging fruit, are the things with the greatest economic impact: the asset is expensive or the downtime is expensive, so those are the first to adopt these technologies. But as things evolve, as you mentioned with cars, as cars become more EV-centered, with batteries inside, you have to monitor those batteries and monitor how your asset is performing. So as we evolve, all these technologies are going to infiltrate the secondary applications and secondary assets as well. And you will see that you will have digital twins of almost everything of interest, so you can monitor how your asset is performing in the field.

Michael Alba: And just a quick add-on to that. What about people? I know Ansys isn't focusing on this, but some companies have explored this area. Do you think there'll be a digital twin of you and me at some point in the future?

Manzoor Tiwana: So actually, we do explore these kinds of things. We have some medical applications, heart models and things like that, on the experimental side, and we are working with some bio companies to explore this: to model how your heart is pumping, how your arteries are functioning. All these things could be very interesting in the future; before a surgeon performs a surgery, they could do it first on the digital twin. So it's still in the future, but I'm looking forward to that.

Michael Alba: Me too. I can’t wait to meet my digital twin someday. Manzoor, thanks so much for coming on the show. It was great speaking with you today.

Manzoor Tiwana: Yeah. Thanks very much for having me. It was really great talking to you.

Michael Alba: And thanks to you for tuning in. We’ll see you next time.

The post Finding Success With Digital Twins appeared first on Engineering.com.

]]>