The next wave of AI-powered networks

Features - Design & Automation

Nnaisense CEO and Co-founder Faustino Gomez discusses promising developments in deep learning and AI, the next level of productivity, and human-level intelligence in machines.

June 17, 2021

Deep learning provides better accuracy by learning what features matter directly from your data and can be inexpensively deployed with high throughput on conventional hardware. 
 
All photos courtesy of Nnaisense

Artificial intelligence (AI) has played a critical role within Industry 4.0 since the beginning, and it’s key to unlocking the full potential of intelligent automation and smart manufacturing. Advances in computing power and algorithms have brought these technologies to real industrial applications, allowing manufacturers to address complex challenges, take predictive and corrective action, and improve quality.

One thread underlying this powerful AI is reinforcement learning.

Since its launch in 2014, research company Nnaisense has worked with reinforcement learning and AI, building customized large-scale neural networks for inspection, modeling, and control processes. Their goal: allow users to leverage cross-domain knowledge and better solve current and future challenges.

Faustino Gomez, CEO and co-founder of Nnaisense, shares his insights about the power of advanced AI and explains how his company is using reinforcement learning to expand capabilities and further power machines and processes to teach themselves.


Nnaisense developed a solution for autonomous parking when they partnered with Audi Electronics Venture. For the first time, they demonstrated a reinforcement-learned, recurrent neural network controller that performs autonomous parking using only local sensors.

Today’s Motor Vehicles (TMV): Working with Audi on autonomous vehicles, how did you incorporate reinforcement learning?

Faustino Gomez (FG): Reinforcement learning (RL) was used for the first time to automatically design the control solution of the Audi Electronics Venture model car. The RL used was neuroevolution (NE), where neural network controllers are trained using evolutionary algorithms: a population of networks is evolved by evaluating each network on the control task (the car needed to learn to self-park) and assigning a fitness to each that measures how well it performed the task. The fittest networks are then used as parents to generate new and hopefully better offspring networks by applying genetic operators that exchange code between the parents. The new population is then evaluated, and the cycle continues over many generations until a solution that solves the task is found, in a process analogous to natural selection. For Audi, the solution was evolved in a simulator that was itself learned [from data measured from the car; a digital twin (DT)] and then transferred to the actual car.
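The generational loop Gomez describes can be sketched in a few lines of Python. This is a toy illustration, not Nnaisense’s implementation: the “genome” is a flat list of weights standing in for a network, and the quadratic fitness function stands in for scoring a parking rollout in the learned simulator.

```python
import random

def evaluate(genome):
    # Toy fitness: how well the genome (a stand-in for network weights)
    # approaches the optimum at zero. Higher is better.
    return -sum(w * w for w in genome)

def crossover(a, b):
    # Genetic operator that exchanges "code" between two parents:
    # each weight is taken from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, sigma=0.1):
    # Small Gaussian perturbations play the role of genetic mutation.
    return [w + random.gauss(0.0, sigma) for w in genome]

def neuroevolution(pop_size=20, genome_len=4, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by fitness on the control task.
        population.sort(key=evaluate, reverse=True)
        parents = population[: pop_size // 2]
        # The fittest networks breed the next generation.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

best = neuroevolution()
```

Because the best half of each generation survives unchanged, the best fitness never decreases from one generation to the next.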

TMV: What makes this approach different?

FG: Reinforcement learning allows control strategies to be learned instead of programmed. Many challenging, real-world control problems (e.g., engine and chassis management) are characterized by many sensors and actuators that interact in complex ways that make them difficult to control optimally using a control theory-based approach. RL can learn these complex relationships to find an optimal control strategy, just by learning to maximize a reward signal it receives from the environment.

Data sampled from the actual process dynamics is used to learn a predictive model that is tailored to the application.

What makes NE different from standard RL is that it uses a population which adapts collectively, whereas standard RL uses a single learning agent whose parameters are adjusted. The advantage: NE doesn’t require gradient information or backpropagation (the computationally expensive calculus used to train neural networks), and it can be massively parallelized because each network in the population can be evaluated independently.
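Because each fitness evaluation is independent, the whole population can be scored concurrently. A minimal sketch, where a thread pool stands in for whatever cluster or process pool a real deployment would use, and the fitness function is again a toy stand-in for a full control rollout:

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(genome):
    # Stand-in for a full control-task rollout; each call is independent,
    # so no gradients or shared state flow between individuals.
    return -sum(w * w for w in genome)

population = [[0.5, -0.2], [0.1, 0.1], [1.0, 1.0]]

# Score every network in the population in parallel.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(fitness, population))
```

The map preserves order, so `scores[i]` is the fitness of `population[i]`.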

TMV: How do you see AI adoption progressing?

FG: Currently, most AI is of the passive type, where models are trained to detect patterns in data and make predictions. This is very valuable but limited in that it can’t learn to make its own decisions. Active AI, where an artificial agent learns to make decisions, is not a new field, but control (especially control of physical systems, as opposed to virtual ones) is generally more challenging than pattern recognition. We’re just starting to see it enter the industrial space as companies such as ours begin to deploy RL-based solutions. Irrespective of whether it’s passive or active, adoption of AI in industrial processes is important because it has the potential to dramatically increase efficiency and quality, lower costs, and make full use of the unprecedented volumes of data that can now be captured by Industrial Internet of Things (IIoT) devices.

TMV: What are the most promising developments in deep learning?

FG: The most promising is probably the emerging field of geometric deep learning, which extends deep networks beyond image-like data (which they’re best known for) to structured data, or data that can be represented by a graph, such as molecules or social networks. Basically, users can look at different graphs, see how they’re similar, or make predictions about them. These networks provide capabilities that you wouldn’t be able to achieve with a standard neural network.
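As a rough illustration of seeing how graphs are similar, here is a sketch of one-dimensional Weisfeiler-Lehman color refinement, the classical graph-comparison test that, roughly speaking, message-passing graph networks generalize. The two example graphs are invented for the illustration:

```python
from collections import Counter

def wl_signature(adj, rounds=3):
    # 1-WL refinement: repeatedly relabel each node by its own label
    # plus the multiset of its neighbors' labels, then summarize the
    # graph as a histogram of all labels seen. Two graphs with
    # different signatures are definitely non-isomorphic.
    labels = {v: 1 for v in adj}
    hist = Counter(labels.values())
    for _ in range(rounds):
        labels = {v: hash((labels[v],
                           tuple(sorted(labels[u] for u in adj[v]))))
                  for v in adj}
        hist += Counter(labels.values())
    return hist

# Two small graphs as adjacency lists: a triangle and a 3-node path.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
same = wl_signature(triangle) == wl_signature(path)
```

Signatures use Python’s `hash`, so they’re only comparable within a single run; the triangle and path get different signatures because the path’s endpoints have only one neighbor.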

To fully unlock the hidden potential for increased productivity, controllers must be learned. This means applying deep reinforcement learning to adapt neural network controllers through safe and efficient interaction with a learned process model.
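That idea, adapting a controller entirely through interaction with a learned process model rather than the physical system, can be sketched in miniature. Everything below is invented for illustration: a one-dimensional linear “twin,” a proportional controller, and a brute-force search standing in for deep RL.

```python
def twin_step(state, control, a=0.9, b=0.5):
    # Learned process model standing in for the physical system.
    return a * state + b * control

def rollout_cost(gain, steps=20):
    # Evaluate a proportional controller u = -gain * state entirely
    # inside the twin: no risk to the real process.
    state, cost = 1.0, 0.0
    for _ in range(steps):
        state = twin_step(state, -gain * state)
        cost += state * state
    return cost

# Search over the controller's single gain, interacting only with the
# learned model; the best gain drives the state to zero fastest.
best_gain = min((g / 10 for g in range(0, 31)), key=rollout_cost)
```

Here the closed-loop dynamics are `(a - b*gain) * state`, so the search settles on the gain that cancels the state entirely.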

TMV: How is Nnaisense’s deep learning technology supporting AI-driven DTs?

FG: Traditionally, a DT is a virtual representation of a physical system (typically a 3D mechanical object; for example, a car engine) that allows the user to monitor the operation of its physical counterpart at a distance.

A second-generation DT uses custom simulation software augmented with data captured from the physical counterpart to make the simulation more faithful. These twins can make predictions about how the physical system will behave under different operating conditions and control inputs.

A third-generation DT (3GDT) is one in which the dynamics of the system are learned directly from data collected from the physical process being twinned. If enough data are available, a 3GDT can capture intricate dynamics, and because the twin is implemented as a neural network, it can be much more efficient at making predictions.
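As a minimal illustration of learning dynamics directly from measured process data (a real 3GDT would fit a neural network, not the two-parameter linear model assumed below):

```python
def fit_twin(samples):
    # samples: (state, control, next_state) triples measured on the
    # physical process. Fit next_state ~ a*state + b*control by least
    # squares, solving the two-parameter normal equations directly.
    sxx = sum(s * s for s, u, _ in samples)
    sxu = sum(s * u for s, u, _ in samples)
    suu = sum(u * u for _, u, _ in samples)
    sxy = sum(s * y for s, _, y in samples)
    suy = sum(u * y for _, u, y in samples)
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

# Synthetic "measurements" from a process whose true dynamics are
# next = 0.9*state + 0.5*control; the fit should recover both numbers.
data = [(s, u, 0.9 * s + 0.5 * u)
        for s in (0.0, 1.0, 2.0) for u in (-1.0, 0.0, 1.0)]
a, b = fit_twin(data)
```

Once fitted, the model can be rolled forward to predict how the process responds to control inputs it has never been given.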

A good example of a 3GDT is the process model we developed for additive manufacturing (metal 3D printing) for EOS GmbH. It models heat distribution on the next layer being printed so that anomalies can be detected by comparing the actual heat map measured while printing with the one predicted by the model. Detecting anomalies is important because they may result in structural part defects. This complex model couldn’t have been built using a multi-physics simulator, so it had to be learned directly from data recorded from numerous printing jobs.
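The comparison step can be sketched as flagging cells where the measured heat map deviates from the model’s prediction by more than a tolerance. The 2×2 maps and the threshold value here are illustrative only:

```python
def detect_anomalies(predicted, measured, threshold=0.2):
    # Compare the twin's predicted heat map with the one measured
    # during printing; flag cells whose absolute deviation exceeds
    # the threshold as potential defect sites.
    return [(i, j)
            for i, row in enumerate(predicted)
            for j, p in enumerate(row)
            if abs(p - measured[i][j]) > threshold]

predicted = [[1.0, 1.0], [1.0, 1.0]]
measured = [[1.05, 1.0], [1.0, 1.6]]  # unexpected hot spot at (1, 1)
flags = detect_anomalies(predicted, measured)
```

Only the hot spot exceeds the tolerance, so a single cell is flagged for inspection.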

Nnaisense
https://nnaisense.com

About the author: Michelle Jacobson is the assistant editor of TMV. She can be reached at mjacobson@gie.net or 216.393.0323.

Linear electric actuator

LDL electric actuators are cost competitive thanks to options including:

  • Printed-coil technology with integrated SMAC encoder reader head
  • Laser-cut housing and end-cap parts, with process times ranging from 5 to 30 seconds
  • 1-piece multi-pole neodymium magnets
  • Precision linear guide life tested to 500 million cycles
  • Automatic assembly
  • Built-in very low cost (VLC) controller amplifier with communication protocols

The LDL is now offered in 12mm and 25mm (0.5" and 1.0") widths. Other standard sizes are being added, and the use of laser-cutting technology results in quick development of special shapes.

SMAC Moving Coil Actuators
https://www.smac-mca.com

Watch now: To see the 6-axis IRB robot in action, visit https://youtu.be/oWFjqOXhy0I.

6-axis IRB robot for harsh, cleanroom applications

The 6-axis IRB 1300 industrial robot includes protection elements for tough industrial applications and contamination-free production processes, providing increased productivity, improved product quality, and reduced cycle times.

Launched in 2020, the IRB 1300 is now available in cleanroom ISO 4, IP67, and Foundry Plus 2 versions, expanding its use in tough environments with high levels of liquids and dust. All electrical components are sealed to prevent intrusion.

For added protection in metals applications including metal die casting, sand casting, forging, and machining, the Foundry Plus 2 version includes stainless steel on the end effector to prevent rusting that can occur when liquids are applied to wash away dust particles and metallic debris.

ABB Robotic Systems
https://global.abb/group/en