Train in the cloud, not at the Edge
Feature

When you capture data at the edge, its sheer volume means you cannot simply ship it all to the cloud and analyse it whenever it is convenient. Joe Speed, ADLINK field CTO, peels back the layers of edge AI with Abigail Opiah, as attention begins to mount and cloud providers ready themselves for a piece of the action.

When talking about edge, Joe Speed, ADLINK field CTO, defines it as “compute that is on or near the thing that generates the data”. He notes that edge is getting a lot of attention now and that AWS, Microsoft, Google and many others have realised it is an important area; ADLINK, however, has been doing it for a long time – since way before anyone thought it was cool or interesting.

“We make things that talk to data centres and things that talk to clouds. We manufacture with Intel and NVIDIA, so when we started doing edge AI, what you are really talking about is running these machine learning models. But to get them to run fast enough, especially when you’re dealing with high-bandwidth sensors such as vision, you need the hardware infrastructure,” Speed adds.

“We are big fans of what you would call ‘train in the cloud, not at the edge’. An example of this would be: I put a camera on a conveyor belt inspecting parts, and we take all those images and send them to the cloud, where someone can use software from AWS, for example, to develop the machine learning model.

“Then when they are happy with the model, they press the button and the model deploys from the cloud back down into the camera. Now the camera can identify everything it sees. The reason you do it that way is that training a model is computationally expensive and you only do it occasionally, whereas inference is computationally cheap in comparison.”
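
As a rough illustration of that workflow – not ADLINK’s or AWS’ actual tooling – the sketch below trains a toy PyTorch defect classifier “in the cloud”, exports it, and runs the exported model for inference “at the edge”. The model, data and file names are hypothetical placeholders.

```python
# Minimal sketch of "train in the cloud, deploy to the edge".
# Everything here (model, data, file names) is illustrative.

import torch
import torch.nn as nn

# --- Cloud side: training (computationally expensive, done occasionally) ---
class DefectClassifier(nn.Module):
    """Toy CNN that labels a part image as 'ok' or 'defective'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.net(x)

model = DefectClassifier()
optimiser = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the images uploaded from the conveyor-belt camera.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 2, (32,))

for _ in range(5):  # the heavy training loop lives in the cloud
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

# "Press the button": export the trained model so it can be pushed
# back down into the camera.
torch.onnx.export(model, torch.randn(1, 3, 64, 64), "defect_classifier.onnx")

# --- Edge side: inference (computationally cheap, done on every frame) ---
import onnxruntime as ort

session = ort.InferenceSession("defect_classifier.onnx")
frame = torch.randn(1, 3, 64, 64).numpy()  # one frame from the camera
scores = session.run(None, {session.get_inputs()[0].name: frame})[0]
print("defective" if scores[0].argmax() == 1 else "ok")
```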

Trends

Edge AI is no longer at the drawing-board stage; it has entered mainstream adoption and is seeing exponential growth.

According to a report by Tractica, artificial intelligence (AI) processing today is mostly done in cloud-based data centres and is dominated by the training of deep learning models, which requires heavy compute capacity. In the last six years, the industry has experienced a 300,000x growth in compute requirements, with graphics processing units (GPUs) providing most of that horsepower.

“There are some interesting things going on around people figuring out how to take machine learning models and optimise them to run without any dedicated hardware acceleration,” says Speed.

“People are taking machine learning models and figuring out how to run them on small, low-power compute systems. TinyML is an interesting one – it does not really fit things like video analytics because the data rates are too high, but for things like vibration and other kinds of sensor data it is very helpful.”
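
As a loose illustration of that idea, the sketch below uses TensorFlow Lite’s post-training quantisation to shrink a hypothetical sensor model so it can run on a small, low-power device. The model and data are stand-ins; a real TinyML deployment would typically go further, for example with full integer quantisation for a microcontroller.

```python
# Rough sketch of TinyML-style optimisation: quantise a trained model so it
# fits small, low-power hardware. The model and data are illustrative.

import numpy as np
import tensorflow as tf

# A tiny dense model over a window of vibration-sensor readings (stand-in).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # anomaly score
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(256, 128),
          np.random.randint(0, 2, 256).astype(np.float32),
          epochs=1, verbose=0)

# Post-training quantisation shrinks the weights to 8-bit so the model is
# small enough for a constrained edge device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# On the device, the quantised model runs with the lightweight interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 128).astype(np.float32))
interpreter.invoke()
print("anomaly score:", float(interpreter.get_tensor(out["index"])[0, 0]))
```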

Covid-19 outlook

In terms of Covid, Speed says that video and image analytics is exploding – it is an incredibly popular application.

“Think of a camera as the ultimate general-purpose sensor, observing and diagnosing everything that you see and do – whatever we can detect with the human eye, we can train a camera to recognise the same thing,” he explains.

“It was video that was the wake-up call for the clouds, because the physics, economics and security issues of taking all the video and shunting it to the cloud make it very difficult. You do not have the bandwidth, and the cost to do large-scale video analytics in the cloud can be astronomical.

“There was a university that published a study comparing edge analytics with cloud analytics using AWS and Microsoft Azure technology, and what they found is that, using AWS and Microsoft tools, doing it at the edge was eight times less expensive and an order of magnitude faster.

“My assertion about Covid-19 is that it changes nothing – the trends are what they are, but what it does is it screws with the timelines. It compresses or elongates the trends. The trends are the same, but they are happening faster or they are happening slower.

“Autonomous cargo delivery, autonomous cleaning, and autonomous resupply in hospitals – all of these things are radically accelerating. We have customers that we are working with on things like a two-year program, and because of Covid, they've come back and said, ‘this thing that we planned to do in two years, we want to complete it in six months instead’.”

ADLINK has worked with AWS to integrate its hardware and software with AWS’ cloud, so that instead of streaming raw data to the cloud, customers stream information: instead of sending photos, they send the results of video analytics; and instead of sending raw vibration data, they send an analysis of the health of the machine.
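
A minimal sketch of that “send information, not data” pattern might look like the following. The analysis, asset name and alarm threshold are illustrative assumptions, and send_to_cloud() is a placeholder for whatever uplink (MQTT, AWS IoT or similar) a real deployment would use.

```python
# The edge node analyses a raw vibration waveform locally and uploads only a
# few bytes of derived health data. All names and thresholds are illustrative.

import json
import numpy as np

SAMPLE_RATE_HZ = 10_000          # vibration sensor sample rate (assumed)

def analyse_vibration(waveform: np.ndarray) -> dict:
    """Reduce a raw waveform to a small health summary at the edge."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(waveform.size, d=1.0 / SAMPLE_RATE_HZ)
    dominant = float(freqs[spectrum.argmax()])
    rms = float(np.sqrt(np.mean(waveform ** 2)))
    return {
        "machine": "pump-07",                  # illustrative asset ID
        "rms_g": round(rms, 4),
        "dominant_freq_hz": round(dominant, 1),
        "healthy": rms < 1.5,                  # assumed alarm threshold
    }

def send_to_cloud(summary: dict) -> None:
    # Placeholder: a real device would publish this over MQTT/HTTPS.
    print("uploading", len(json.dumps(summary)), "bytes:", summary)

# One second of raw data is ~10,000 samples; the summary is a few dozen bytes.
raw = np.random.randn(SAMPLE_RATE_HZ)
send_to_cloud(analyse_vibration(raw))
```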

“One of the differences between the two is latency. Sending large amounts of data to the cloud is slow – we can analyse the data in place much faster than you can send it to the cloud and analyse it there,” he highlights.

“In terms of reaction times, let’s say you’re at home and you notice that something is burning on the kitchen stove. If you were to send a picture of that to the cloud for somebody to analyse and then say, ‘Oh no, we had better turn that burner off’, you are too late. The dish has been burned.

“These reaction times matter. Our software is in the middle of things like 500-megawatt gas turbine power generation plants, where if something starts to go wrong you have to react in milliseconds, and in some cases microseconds, and it’s not possible to do that with the cloud.”

Speed explains that in a manufacturing setting, if someone gets in the way of dangerous machinery, reacting in the moment is key – but the cloud still has its place.

“Think of the cloud as where I manage and monitor macro trends. A machine figures out its own health and tells the cloud. The cloud is connected to thousands of these machines, and in the cloud you can do the analytics to figure out what your maintenance schedule should be for all of them,” says Speed.
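
On the cloud side, that fleet-wide view might be sketched like this; the report format, thresholds and machine names are purely illustrative and not ADLINK’s actual analytics.

```python
# Toy sketch of the cloud aggregating self-reported machine health into a
# maintenance schedule. Scales and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class HealthReport:
    machine_id: str
    health_score: float      # 1.0 = perfect, 0.0 = failed (assumed scale)
    hours_since_service: int

def maintenance_schedule(reports: list[HealthReport], capacity: int = 2) -> list[str]:
    """Pick the machines most in need of attention this maintenance window."""
    at_risk = [r for r in reports
               if r.health_score < 0.7 or r.hours_since_service > 5000]
    at_risk.sort(key=lambda r: r.health_score)   # worst first
    return [r.machine_id for r in at_risk[:capacity]]

fleet = [
    HealthReport("press-01", 0.92, 1200),
    HealthReport("press-02", 0.55, 4100),   # degrading
    HealthReport("press-03", 0.81, 5600),   # overdue for service
]
print(maintenance_schedule(fleet))           # ['press-02', 'press-03']
```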

“It becomes interesting when you make the things mobile. I have a lot of history in automotive and autonomous vehicles and we do a lot around autonomous driving. We are seeing a huge upswing in this whole topic of visual inspection.”

New projects

Recently, the company partnered with Tier IV, an autonomous driving software developer and system integrator of Autoware and the Industrial Technology Research Institute (ITRI), a not-for-profit R&D organisation, to help enable autonomous driving for all.

With a focus on open-source technology, the new alliance will combine edge AI, self-driving software and industrial research to collaborate on proofs of concept (PoCs) with local government. ADLINK, Tier IV and ITRI hope to expand Autoware adoption, research and development to make autonomous driving for all a reality.

ADLINK has also launched the ROScube-I with Intel, providing a real-time ROS 2 robot controller for advanced robotic applications. The ADLINK ROScube-I series is a ROS 2-enabled robotic controller based on Intel® Xeon E, 9th Gen Intel Core i7/i3 and 8th Gen Intel Core i5 processors, and features extensive I/O connectivity supporting a wide variety of sensors and actuators to meet the needs of diverse robotic applications.
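
To give a sense of what running ROS 2 on such a controller involves, below is a minimal rclpy publisher node. It is generic ROS 2 code with an illustrative topic name and a simulated reading, not ADLINK-specific software.

```python
# Minimal ROS 2 node (rclpy): publishes a simulated proximity reading at 10 Hz.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range

class ProximityPublisher(Node):
    """Publishes a (simulated) proximity reading ten times a second."""

    def __init__(self):
        super().__init__("proximity_publisher")
        self.pub = self.create_publisher(Range, "proximity", 10)
        self.create_timer(0.1, self.tick)

    def tick(self):
        msg = Range()
        msg.range = 0.42            # stand-in for a real sensor read
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ProximityPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```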
