Understanding artificial intelligence in one read: cloud, edge and terminal AI

Artificial intelligence is a widely used technology, but what is the difference between the cloud AI, edge AI and terminal AI that are so frequently mentioned? This article explains each category and analyzes its advantages and limitations.

Machine learning is a subset of artificial intelligence

Artificial intelligence (AI) today mainly refers to machine learning (ML), a subset of AI in which machines learn from data. Generally speaking, the larger the amount of data available for learning, the more meaningful and useful the inferences AI can make. From consumer devices to healthcare, logistics and smart manufacturing, AI applications generate hundreds of thousands of gigabytes of data every day; the key question is where that data should be processed once it has been generated. Arm divides artificial intelligence into three categories by computing scope: cloud, edge and terminal.

ML can be used to process data in any of these categories, but the category with the strongest computing performance is not necessarily the most suitable one, as explained below.

  • Cloud AI

Cloud AI refers to AI processing carried out in powerful cloud data centers. For a long time, the cloud has been the preferred computing platform for processing huge volumes of data. Had data not been transferred from the edge and terminal to cloud servers for highly efficient processing, AI applications would not be as mature as they are today.

Given the reliability, cost-effectiveness and concentration of computing power in the cloud, most of the heavier AI workloads are executed there, especially when training ML algorithms on historical data that does not require an immediate response. The "wisdom" of many consumer smart devices still relies on the cloud: today's smart speakers, for example, give the illusion of an intelligent terminal, but the only intelligence on the device itself is listening for the trigger word (keyword spotting).
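
That division of labor can be sketched in a few lines: a tiny always-on model runs locally and only decides whether the trigger word was heard; only then is audio handed to the cloud for full speech recognition. The snippet below is a minimal illustration of that split under assumed names; `WakeWordModel` and `cloud_transcribe` are hypothetical placeholders, not real APIs.

```python
import numpy as np

FRAME_SIZE = 16000        # one second of 16 kHz audio per decision
WAKE_THRESHOLD = 0.8      # confidence needed to wake the device (illustrative)

class WakeWordModel:
    """Hypothetical tiny on-device classifier: keyword spotting only."""
    def probability_of_wake_word(self, frame: np.ndarray) -> float:
        # A real model would run a small neural network on audio features;
        # here a crude energy heuristic stands in for it.
        return float(np.clip(np.abs(frame).mean() * 10, 0.0, 1.0))

def cloud_transcribe(audio: np.ndarray) -> str:
    """Hypothetical call to a cloud speech-recognition service."""
    return "<transcript from cloud ASR>"

def on_audio_frame(model: WakeWordModel, frame: np.ndarray) -> None:
    # All the "intelligence" on the device is this one check.
    if model.probability_of_wake_word(frame) >= WAKE_THRESHOLD:
        # Everything after the trigger word is handled by cloud AI.
        print(cloud_transcribe(frame))

on_audio_frame(WakeWordModel(), np.random.randn(FRAME_SIZE) * 0.05)
```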

The ability of cloud AI to solve complex problems with ML is beyond question, but as ML grows to take on many critical tasks and real-time applications, the success or failure of these systems hinges on the speed of decision-making. When data must travel from the device to a data center thousands of miles away, there is no guarantee that it will still be useful by the time it has been received, processed and responded to.

Safety-critical applications, such as industrial automation, self-driving vehicles, medical imaging and manufacturing, require near-instant responses within milliseconds. In many cases, the delay introduced by processing this data in the cloud greatly reduces its value. For this reason, many companies want to run AI on computing infrastructure other than the cloud and move the computation closer to where the data is generated.
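
A rough back-of-the-envelope calculation makes the latency argument concrete. The distances, hop counts and the 10 ms budget below are illustrative assumptions, not measurements, but they show how a round trip to a distant data center can exhaust a safety-critical latency budget before any processing even starts.

```python
# Rough cloud round-trip latency estimate (illustrative numbers only).
def round_trip_ms(distance_km: float, hops: int,
                  per_hop_ms: float = 0.5, compute_ms: float = 5.0) -> float:
    propagation = 2 * distance_km / 200.0   # fibre carries signals ~200 km per ms
    switching = 2 * hops * per_hop_ms       # routers/switches each way
    return propagation + switching + compute_ms

budget_ms = 10.0                            # assumed safety-critical response budget
scenarios = {
    "on-premises edge server": (5, 2),      # (distance in km, network hops)
    "regional cloud region":   (500, 8),
    "distant cloud region":    (3000, 14),
}
for name, (km, hops) in scenarios.items():
    latency = round_trip_ms(km, hops)
    verdict = "ok" if latency <= budget_ms else "misses budget"
    print(f"{name:>24}: {latency:5.1f} ms ({verdict})")
```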

  • Edge AI

Edge AI refers to moving AI and ML processing from the cloud to powerful servers at the edge of the network, such as offices, 5G base stations and other physical locations very close to the connected terminal devices. Moving AI operations closer to the data reduces latency and preserves the value of all the data. In a world where the value of data is measured in milliseconds, it avoids the threats and losses caused by the delay of sending data to the cloud, especially in the most widespread application of artificial intelligence: the Internet of Things (IoT).

For example, basic devices such as network bridges and switches are gradually being replaced by powerful edge servers that add data center-class hardware at the gateway between the terminal and the cloud. These powerful new AI-enabled edge servers, driven by new platforms such as Arm Neoverse and designed to increase computing power while reducing power consumption, create many opportunities for cities, factories, farmland and the environment to improve efficiency, safety and productivity.

Edge AI benefits both data and network infrastructure. At the network level, it can analyze data traffic for network prediction and network function management. At the same time, edge AI can make decisions on its own based on the data, greatly reducing round trips to the cloud and improving overall security, reliability and efficiency.
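
One simple way an edge server can act on data without a cloud round trip is to keep a rolling statistic over traffic counters and flag a sample that deviates sharply from the recent baseline. The sketch below is a minimal illustration of that kind of local decision; the window size, threshold and traffic numbers are assumptions for the example.

```python
from collections import deque
import statistics

class TrafficMonitor:
    """Local anomaly check over a rolling window of traffic samples."""
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, packets_per_sec: float) -> bool:
        """Return True if this sample looks anomalous; decided entirely at the edge."""
        anomalous = False
        if len(self.samples) >= 10:
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1.0
            anomalous = abs(packets_per_sec - mean) / stdev > self.z_threshold
        self.samples.append(packets_per_sec)
        return anomalous

monitor = TrafficMonitor()
for rate in [1000, 1020, 990, 1010, 995, 1005, 1015, 985, 1000, 1010, 9000]:
    if monitor.observe(rate):
        # No data leaves the site; the edge server throttles or alerts immediately.
        print(f"edge decision: throttle or alert at {rate} packets/s")
```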

Another key capability of edge AI is sensor fusion: combining data from several sensors to build a complex picture of a process, environment or situation. For example, an edge AI device for industrial applications combines data from multiple sensors in a factory to predict when a mechanical failure will occur. Such a device must learn how the sensors interact and affect one another, and apply what it has learned in real time.
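
As a rough illustration of sensor fusion for predictive maintenance, readings from several sensors can be combined into one feature vector and fed to a single failure-prediction model. The sensor types, synthetic data and model below are invented for the example and are not an Arm or factory API.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: each row fuses vibration (mm/s), bearing temperature (°C)
# and motor current (A) into one feature vector; label 1 means "failed soon after".
healthy = np.column_stack([rng.normal(2, 0.5, 500), rng.normal(60, 3, 500), rng.normal(10, 1, 500)])
failing = np.column_stack([rng.normal(6, 1.0, 500), rng.normal(75, 5, 500), rng.normal(14, 2, 500)])
X = np.vstack([healthy, failing])
y = np.array([0] * 500 + [1] * 500)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At runtime, the edge device fuses the latest reading from each sensor and
# decides locally whether maintenance should be scheduled.
latest_reading = np.array([[5.8, 74.0, 13.5]])
print("failure risk:", model.predict_proba(latest_reading)[0, 1])
```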

Keeping sensitive data at the edge also brings key benefits in security and resilience: the more data is moved to a centralized location, the greater the risk to data integrity. As the nature of computing changes, the edge plays an increasingly important role in supporting systems with widely varying power and performance requirements. To meet enterprise service-level agreements at scale, the edge must adopt cloud-native software principles.

Arm pursues this goal through Project Cassini, an open, collaborative, standards-based initiative that aims to deliver a cloud-native software experience across a secure Arm edge ecosystem.

  • Terminal AI

Arm defines terminal devices as the physical devices connected to the network gateway, such as sensors, smartphones and more.

Since a large amount of data is generated on the terminal, terminal AI means giving the terminal device more ability to think for itself and process the data it collects. With its own processing power, the device can extract maximum insight from the collected data without moving it anywhere else. Thanks to their powerful hardware, smartphones have long been the proving ground for terminal AI. The smartphone camera is the best example: it has evolved from taking blurry, grainy selfies to supporting biometric verification and secure computational-photography features such as adding virtual backgrounds to selfies in real time.
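
On a phone or similar terminal device, inference typically runs through an on-device runtime rather than a cloud call. As one possibility, the sketch below uses the TensorFlow Lite interpreter; the model file name is a placeholder, and real Android or iOS apps usually call the runtime through their native SDKs rather than Python.

```python
import numpy as np
import tensorflow as tf

# Load a pre-trained model shipped with the app; "segmenter.tflite" is a placeholder name.
interpreter = tf.lite.Interpreter(model_path="segmenter.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy camera frame matching the model's expected input shape and dtype.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# All of this runs on the device itself: the frame never leaves the phone.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
mask = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", mask.shape)
```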

This technology has now reached much smaller IoT devices, giving rise to the Artificial Intelligence of Things (AIoT). In February 2020, Arm announced it was bringing artificial intelligence to the smallest Arm-powered IoT devices. The combination of the Arm Cortex-M55 CPU and the Arm Ethos-U55 micro neural processing unit (microNPU) increases the ML performance of Arm-based IoT solutions by nearly 500 times while remaining energy-efficient and cost-effective. If higher performance is required, the Ethos-U65, which Arm released to pair with Cortex-A products, is an option. TinyML is an emerging subfield of terminal AI, or AIoT, that lets the smallest terminal devices perform ML processing: devices built around microcontrollers no bigger than a grain of rice that consume only a few milliwatts of power.
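
Getting a model onto a Cortex-M-class device with an Ethos-U microNPU typically means quantizing it to 8-bit integers and exporting a .tflite file, which is then compiled for the NPU with Arm's Vela tool. The conversion step can be sketched with the TensorFlow Lite converter; the Keras model and calibration data here are placeholders standing in for a real trained network and dataset.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a trained keyword-spotting or anomaly-detection net.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # A real project would yield batches of calibration samples from its own dataset.
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8    # fully integer model, as microNPUs expect
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```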

Of course, terminal AI still has its limitations: these devices are far more constrained in performance, power and storage than edge AI or cloud AI systems. The data collected by a single terminal AI sensor is also of limited value on its own, because it is hard to form a complete picture without the top-down view that comes from fusing it with other data streams at the edge.

Cloud AI, edge AI and terminal AI each have their own advantages and disadvantages. Arm's heterogeneous computing IP scales across the full computing spectrum, ensuring that whatever the AI workload, it is processed efficiently by placing the intelligence at the most suitable point. Most importantly, Arm's technology helps keep the data processed by AI secure from the cloud to the edge to the terminal. Arm's Platform Security Architecture (PSA) provides a framework that follows industry best practices and maintains a consistent, layered hardware and firmware design; PSA Certified assures that the IoT devices built by device manufacturers are secure. On Arm processors, Arm TrustZone security technology simplifies IoT security and provides a foundation on which PSA Certified devices can be built.

Emerging high-performance computing

High-performance computing (HPC) refers to the ability to process data or execute instructions at very high speed, particularly on systems capable of more than one trillion floating-point operations per second (one teraFLOPS). Machine learning is one application of HPC. In the battle against COVID-19, supercomputers around the world have been put to work on research: HPC is used to analyze the structure of the virus to understand how it attaches to human cells and injects its genetic material, while scientists use molecular models to test therapies, drugs and antiviral compounds, giving them the chance to develop vaccines relatively quickly.

In the future, with the assistance of supercomputers, people will have the opportunity to understand viruses at the atomic level. Once molecular images of a virus are available, it becomes possible to design targeted drugs from first principles by combining algorithms with expertise in the relevant medical fields, greatly reducing drug development time.

HPC is used not only in medicine but also in a wide variety of research applications, such as modeling super-materials for storing hydrogen in vehicles. The Isambard supercomputer, built on Arm-based processors, and the recently announced Isambard 2, with 72 Fujitsu A64FX processors, are both examples of HPC helping to solve global challenges. We have also seen AWS begin to offer Arm Neoverse-based HPC cloud services, and RIKEN, the Japanese research institute that developed the Fugaku supercomputer, has announced that it will offer cloud services as well; we may see more of this in the future. As HPC migrates to the cloud, ensuring security during computation will become ever more important.

Arm Edge Computing Online Summit Forum

To provide system architecture standards, security options and reference implementations that IoT devices and applications can follow, Arm launched Project Cassini. At 3 p.m. on February 24, 2021, Arm will hold an "Edge Computing Summit Forum". In addition to introducing Project Cassini, technical experts from Arm ecosystem partners Nuvoton Technology and Advantech have been invited to help you get to grips with popular AIoT use cases and the challenges and solutions facing edge computing.
