WIMI Hologram Academy: Mobile Edge Intelligence and Computing for Internet of Vehicles

HONG KONG, Sept. 15, 2022 (GLOBE NEWSWIRE) — WIMI Hologram Academy, working in partnership with the Holographic Science Innovation Center, has written a new technical article describing their research on mobile edge intelligence and computing for Internet of Vehicles. This article follows below:

The Internet of Vehicles (IoV) is an emerging paradigm driven by the latest developments in vehicle communications and networking. Vehicles are rapidly increasing in performance and intelligence and will potentially support a host of exciting new applications that integrate fully autonomous vehicles, the Internet of Things (IoT) and the environment. These trends will bring about an era of smart vehicle networking, which will rely heavily on communication, computing and data analysis technologies. For the massive amount of data generated by smart vehicles, in-vehicle processing alone is insufficient because of resource and power constraints, while cloud computing is limited by communication overhead and latency.

Scientists from WIMI Hologram Academy of WIMI Hologram Cloud Inc. (NASDAQ: WIMI) have been working to improve the performance of smart vehicles by deploying storage and compute resources at the edge of wireless networks (e.g., at wireless access points). The Edge Information System (EIS), which includes edge computing and edge artificial intelligence, will play a key role in the future of intelligent vehicle networking, providing not only low-latency content delivery and computing services, but also localized data collection, aggregation and processing.

For more than a century, the automotive industry has been one of the main economic sectors, with an expanding economic and social impact. Information and communication technologies are considered promising tools to revolutionize vehicle networks. Vehicle networks will have the ability to communicate, process, store and learn. In particular, with IoV, vehicles will be able to utilize resources such as cloud storage and computing. In addition to vehicle movement and safety, IoV will facilitate urban traffic management, vehicle insurance, road infrastructure construction and maintenance, and logistics and transportation. As a special case of IoT, IoV needs to be integrated with other systems such as smart cities.

Thanks to recent advances in embedded systems, navigation, sensors, data collection and dissemination, and big data analytics, we are witnessing the increasing intelligence of the automobile. It started with assisted driving technology, known as advanced driver assistance systems (ADAS), which include emergency braking, reversing cameras, adaptive cruise control and automatic parking systems. Globally, the number of ADAS units grew from 90 million in 2014 to about 140 million in 2016, an increase of more than 50% in just two years. According to the driving automation levels defined by SAE International (Society of Automotive Engineers), the systems mentioned above are mainly at automation levels 1 and 2. Predictions vary, but many forecast that Level 4 and Level 5 self-driving cars will be available within 10 years.

The upcoming Smart IoV will require support from various domains, including automotive, transportation, wireless communications, networking, security and robotics, as well as regulators and policymakers. This paper examines smart vehicles from an information and communication technology perspective. In particular, the integration of storage, communication, computation, and data analytics at the edge of wireless networks (e.g., radio access points) provides an effective framework for addressing the data collection, aggregation, and processing challenges of smart vehicle networking. The following sections detail the big data challenges facing smart vehicle networking and the resulting need for an edge information system (EIS).

1 Big Data in Smart IoV

Advances in information technology, including communications, sensing, data processing and control, are transforming transportation systems from traditional technology-driven systems to more powerful data-driven intelligent transportation systems. This shift will generate a huge amount of data. For the past 20 years, the wireless industry has been grappling with the mobile data explosion brought on by smartphones. However, that volume will be dwarfed by the massive amounts of data expected to be generated by smart vehicles. Smart vehicles are equipped with multiple cameras and sensors, including radar, LiDAR (light detection and ranging), sonar and global navigation satellite system (GNSS) receivers. It is expected that in the future there will be more than 200 sensors in a single vehicle, with a total sensor bandwidth of 3 Gb/s (about 1.4 TB/h) to 40 Gb/s (about 19 TB/h). It is estimated that each self-driving car will generate about 4,000 GB of data per day, equivalent to the mobile data of nearly 3,000 people. Assuming only 1 million self-driving cars worldwide, autonomous driving would generate as much data as roughly 3 billion people.
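To make these figures concrete, a quick back-of-the-envelope check (a minimal sketch in Python below) shows that they are mutually consistent; the per-person mobile-data figure of roughly 1.3 GB per day is implied by the numbers above rather than taken from a separate source.

```python
# Back-of-the-envelope check of the data-volume figures quoted above.

GB_PER_GBIT = 1 / 8  # 1 gigabit = 1/8 gigabyte

def gbps_to_tb_per_hour(gbps: float) -> float:
    """Convert a sensor bandwidth in Gb/s to a data volume in TB/h."""
    return gbps * GB_PER_GBIT * 3600 / 1000  # Gb/s -> GB/s -> GB/h -> TB/h

print(f"3 Gb/s  ≈ {gbps_to_tb_per_hour(3):.2f} TB/h")   # quoted as ~1.4 TB/h
print(f"40 Gb/s ≈ {gbps_to_tb_per_hour(40):.1f} TB/h")  # quoted as ~19 TB/h

# 4,000 GB/day per car vs. the mobile data of ~3,000 people implies
# roughly 1.3 GB of mobile data per person per day.
per_person_gb_per_day = 4000 / 3000
print(f"implied per-person mobile data ≈ {per_person_gb_per_day:.2f} GB/day")

# At that rate, 1 million such cars generate as much data as ~3 billion people.
people_equivalent = 1_000_000 * 4000 / per_person_gb_per_day
print(f"1,000,000 cars ≈ data of {people_equivalent / 1e9:.0f} billion people")
```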

The big data generated by the smart vehicle network will put unprecedented strain on communications, storage and computing infrastructure. While in-vehicle computing and storage capabilities are growing rapidly, they are still limited compared to the scale of data being stored and processed. Processing the sensed data through multiple deep neural networks (DNNs) can amount to approximately 250 trillion operations per second (TOPS). At the same time, to achieve a higher level of safety than the best human drivers, who react within 100-150 ms, an autonomous driving system must handle real-time traffic conditions within a latency of about 100 ms, which requires a great deal of computing power. Power-hungry accelerators such as graphics processing units (GPUs) can provide low-latency computing, but their high power consumption, further amplified by the cooling load needed to meet thermal constraints, can significantly reduce vehicle range and fuel efficiency.
There are many proposals for using cloud computing to assist smart vehicles, some of which have already been implemented, such as cloud-based software updates and training of powerful deep learning models. Cloud computing platforms are certainly important enablers of the connected car, but they are not enough. Cost and power consumption are the main constraints for in-vehicle computing, while long latency and massive data transfer are the bottlenecks for cloud processing. The round-trip time from a mobile client to a cloud data center can easily exceed 100 ms, and this latency depends heavily on wireless channel conditions, network bandwidth and traffic congestion, with no guarantee of real-time processing or reliability. Given these latency requirements, to offload, for example, the speech recognition task of an assisted driving system, the server must be located at a nearby base station (BS), i.e., at the edge of the wireless network. This is in line with the recent trend of deploying computing resources at the edge of the wireless network.

1.1 Edge Resource Deployment

Deploying resources at the edge of wireless networks has received a lot of attention from academia and industry in order to overcome the limitations of on-board computation, communication, storage and energy, while avoiding excessive latency in the cloud. Popular content such as video files, which dominate mobile data traffic, is likely to be requested repeatedly by different users, and such requests are predictable. Therefore, deploying storage units at the edge of wireless networks and caching popular content, i.e., wireless edge caching, is a promising solution for efficient content delivery. At the same time, the renaissance of AI and the emergence of smart mobile applications require platforms that can support computationally intensive and latency-sensitive mobile computing. Mobile edge computing (MEC) is an emerging technology that has the potential to combine telecommunications with cloud computing to deliver cloud services directly from the edge of the network and to support latency-critical mobile applications. This is accomplished by placing computer servers at BSs or radio access points. Edge AI is further enabled by edge caching and computing platforms, which train and deploy powerful machine learning models on edge servers and mobile devices, and are seen as a key supporting technology for the Internet of Things. Edge AI is changing the landscape of the semiconductor industry. In 2018, revenue from shipments of edge AI reached $1.3 billion, and by 2023, this figure is expected to reach $23 billion. In this paper, these platforms are collectively referred to as EIS.
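As a minimal illustration of wireless edge caching (a sketch, not a description of any deployed system), an edge server at a BS or RSU could keep recently requested content locally and fall back to the cloud only on a miss; the capacity, content names and least-recently-used eviction policy below are illustrative assumptions.

```python
from collections import OrderedDict
from typing import Callable

class EdgeCache:
    """Toy least-recently-used (LRU) content cache for a BS/RSU edge server."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity                   # counted in items for simplicity
        self._items: "OrderedDict[str, bytes]" = OrderedDict()

    def request(self, content_id: str, fetch_from_cloud: Callable[[str], bytes]) -> bytes:
        """Serve a content request, touching the cloud only on a cache miss."""
        if content_id in self._items:
            self._items.move_to_end(content_id)    # cache hit: refresh recency
            return self._items[content_id]
        data = fetch_from_cloud(content_id)         # cache miss: backhaul fetch
        self._items[content_id] = data
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)         # evict the least recently used item
        return data

# Usage: after the first fetch, repeated requests for popular local content
# (e.g., an HD-map tile for this road segment) are served from the edge.
cache = EdgeCache(capacity=2)
fetch = lambda cid: f"<content of {cid}>".encode()
for cid in ["map_tile_A", "video_1", "map_tile_A", "map_tile_A"]:
    cache.request(cid, fetch)
```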

EIS is ideally suited to the intelligent vehicle network. It can assist in key functions of smart vehicles, from data acquisition (for situational and environmental awareness) and data processing (for navigation and path planning) to driving (maneuver control). Processing data at the edge of the network can save significant communication bandwidth and also meet the low-latency requirements of critical tasks. The content in IoV is usually highly localized in space, e.g., road and map information is mainly used locally, and in time, e.g., traffic conditions in the morning have little relevance in the evening. In addition, vehicles are only interested in the content itself, not its source. These key characteristics make cache-assisted, content-centric dissemination and delivery in IoV very effective. On the other hand, smart vehicles face a huge computational burden due to their large volumes of sensing data. For example, computational power remains a bottleneck that prevents vehicles from benefiting from the higher system accuracy brought by high-resolution cameras. In particular, the convolution operations for visual perception and the feature extraction for visual localization in powerful convolutional neural networks (CNNs) are highly complex. Transferring these computationally intensive tasks to a neighboring MEC server will enable powerful machine learning approaches to assist in the critical tasks of intelligent vehicles.
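The offloading trade-off described above can be sketched with a simple latency comparison between running a CNN sub-task on board and shipping its input to a nearby MEC server; all workloads, processor throughputs and link rates in the snippet are illustrative assumptions rather than measured values.

```python
def local_latency_s(workload_gops: float, onboard_gops_per_s: float) -> float:
    """Latency of running a task entirely on the vehicle's on-board processor."""
    return workload_gops / onboard_gops_per_s

def offload_latency_s(input_mbytes: float, uplink_mbps: float,
                      workload_gops: float, edge_gops_per_s: float,
                      rtt_s: float = 0.01) -> float:
    """Latency of uploading the input to a nearby MEC server and computing there."""
    upload_s = input_mbytes * 8 / uplink_mbps      # megabytes over a Mb/s link
    compute_s = workload_gops / edge_gops_per_s
    return rtt_s + upload_s + compute_s

# Illustrative assumptions: a 20-GOP CNN sub-task on 0.5 MB of extracted features,
# a 20-GOPS on-board processor, a 500-GOPS edge GPU, and a 100 Mb/s uplink.
t_local = local_latency_s(20, 20)
t_edge = offload_latency_s(0.5, 100, 20, 500)
deadline_s = 0.100                                  # ~100 ms perception deadline
decision = "offload" if t_edge < min(t_local, deadline_s) else "compute locally"
print(f"on-board: {t_local * 1e3:.0f} ms, edge: {t_edge * 1e3:.0f} ms -> {decision}")
```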

Smart IoV requires the utilization of different information processing platforms (see the sketch after the following list):

1. On-board processing is used for highly latency-sensitive tasks, such as real-time decision making for vehicle control, and for pre-processing of sensed data to reduce communication bandwidth.
2. Edge servers are for latency-sensitive and computationally intensive tasks such as positioning and mapping, as well as for aggregating and storing local information such as regional HD maps.
3. Cloud computing is for training powerful deep learning models with large data sets, acting as a non-real-time aggregator of wide-area information, and storing valuable historical data for continuous learning.
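The sketch below illustrates this three-tier division of labor with a toy dispatcher that routes tasks by latency budget and compute demand; the thresholds and example tasks are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float    # end-to-end latency budget
    compute_gops: float   # computational demand

def dispatch(task: Task) -> str:
    """Choose an execution tier for a task (illustrative thresholds)."""
    if task.deadline_ms <= 10:                           # hard real-time: keep on board
        return "on-board"
    if task.deadline_ms <= 100 and task.compute_gops > 10:
        return "edge server"                             # latency-sensitive and heavy
    return "cloud"                                       # non-real-time, wide-area work

for task in (Task("vehicle control loop", 5, 1),
             Task("visual localization", 80, 50),
             Task("DNN model training", 60_000, 10_000)):
    print(f"{task.name:>22} -> {dispatch(task)}")
```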

EIS will play an important and unique role in the smart vehicle information infrastructure, and a comprehensive, in-depth introduction of EIS for smart vehicles is expected to fill this gap.

2 Edge Information System for IoV

2.1 Edge Information System

EIS helps smart vehicles acquire, aggregate and process data. In IoV, it acts as an intermediary platform between the on-board processors and remote cloud data centers. Generally, EIS consists of the following main components: (1) Edge servers: computer servers deployed at BSs or roadside units (RSUs), equipped with storage units (e.g., SSDs) and computational units such as GPUs or edge tensor processing units (TPUs). (2) Vehicles: smart vehicles equipped with various sensors, communication modules and on-board units with computation and storage capabilities, making them powerful nodes in the system. (3) User devices: a wide variety of user devices, such as passenger smartphones and wearable devices. (4) V2X communication: the vehicle-to-everything (V2X) communication module, which connects these components and is an essential part of EIS.
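A minimal way to picture these components is as a small inventory of edge servers and vehicle nodes; the fields and values in the sketch below are illustrative assumptions rather than a standardized EIS data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeServer:
    """Edge server co-located with a BS or RSU (illustrative fields)."""
    site: str
    storage_gb: int        # e.g., SSD capacity used for caching
    compute_gops: float    # e.g., GPU / edge-TPU throughput

@dataclass
class Vehicle:
    """Smart vehicle node with sensing, on-board compute and V2X connectivity."""
    vehicle_id: str
    sensors: List[str] = field(default_factory=lambda: ["camera", "LiDAR", "radar", "GNSS"])
    onboard_gops: float = 20.0
    v2x_links: List[str] = field(default_factory=lambda: ["V2I", "V2V"])

# A tiny EIS deployment: one RSU-hosted edge server serving two vehicles.
rsu_server = EdgeServer(site="RSU-01", storage_gb=2_000, compute_gops=500.0)
fleet = [Vehicle("car-A"), Vehicle("car-B", onboard_gops=30.0)]
print(rsu_server, *fleet, sep="\n")
```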

The remainder of the paper considers two different scenarios based on the role of vehicles: vehicle as client (VaaC) and vehicle as server (VaaS). (1) VaaC: vehicles access the edge resources of RSUs or BSs as clients. The key idea is to perform data acquisition and processing in parallel. The edge servers act as anchor nodes for data collection and then process the data for local applications. For example, they can collect map data from passing vehicles, build and update HD maps, and proactively monitor local road and traffic conditions. These applications are highly relevant to IoV. (2) VaaS: vehicles can also act as mobile service providers for vehicle passengers, third-party recipients, and other vehicles, enhancing user experiences through, for example, personalized driving based on driver identification and rich infotainment applications. Compared with edge server-based approaches, VaaS services move with the vehicle and are therefore less affected by its mobility. In addition, VaaS allows collaboration between neighboring vehicles, such as collaborative sensing and collaborative driving.

2.2 Key Tasks for Smart Vehicles

Smart vehicles depend on the ability to understand their environment. Different on-board sensors serve different perception tasks, such as object detection/tracking, traffic sign detection/classification, and lane detection. Prior knowledge, such as prior maps, is also utilized. Based on the sensed data and perceptual output, localization and mapping algorithms are applied to compute the global and local position of the vehicle and to map the environment. The results of these tasks are then used for other functions, including decision making, planning and vehicle control. Here, the focus is on perception, high-definition (HD) mapping, and simultaneous localization and mapping (SLAM) as the main tasks of intelligent vehicles.

1. Perception

There is a wide variety of on-board sensors with different characteristics serving different sensing tasks, and each type of sensor has its limitations. Cameras and stereo vision are computationally expensive compared to active sensors (e.g., LiDAR and radar); LiDAR and radar are limited in object classification and at very close range (<2 m); and sonar has poor angular resolution. Perception in smart cars also faces some major challenges, such as perception in bad weather and lighting conditions or in complex urban environments, and limited perception range. By utilizing sensing data from different sensors, techniques such as sensor fusion can compensate for the shortcomings of individual sensors. However, this significantly increases the on-board computation. By providing additional nearby computational and storage resources, EIS can improve perception capabilities. It can help improve the sensing accuracy of cameras and stereo vision, for example through powerful deep learning techniques, and enable complex multi-sensor fusion by shifting computationally intensive sub-tasks to edge servers. In addition, cooperative perception can significantly improve sensing robustness and accuracy and extend the sensing range by sharing on-board sensing and computing capabilities, aided by V2V and V2I communication and coordinated through edge servers.
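As a toy illustration of the sensor-fusion idea (not the fusion pipeline of any particular vehicle), two noisy range estimates, say one from a camera pipeline and one from LiDAR, can be combined by inverse-variance weighting; the noise levels below are assumed for illustration.

```python
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple:
    """Inverse-variance (maximum-likelihood) fusion of two independent
    measurements of the same quantity, e.g. the range to a leading vehicle."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Illustrative assumptions: a noisier camera-based range estimate and a more
# precise LiDAR return for the same object.
camera_range, camera_var = 24.8, 1.0 ** 2   # metres, variance in m^2
lidar_range, lidar_var = 25.3, 0.1 ** 2
estimate, variance = fuse(camera_range, camera_var, lidar_range, lidar_var)
print(f"fused range ≈ {estimate:.2f} m (std ≈ {variance ** 0.5:.2f} m)")
```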

2. High-Definition Map

Maps are the foundation of any mobile robotics application, and they are especially important for autonomous driving. An HD map models the road surface with an accuracy of 10-20 cm. It contains a 3D representation of all key aspects of the road, such as slope and curvature, lane marking types and roadside objects. HD map-based positioning can reach centimeter-level accuracy: on-board sensors compare the environment perceived by the vehicle with the corresponding HD map. This can overcome the limitations of GNSS-based methods (e.g., GPS), such as low positioning accuracy and variable availability. It is expected that HD map-based localization will become a common approach for Level 4 and Level 5 autonomous driving systems. The industry has put much effort into producing HD maps, but there are difficulties in practical implementation. Producing HD maps is very time consuming. To generate an HD map, a vehicle equipped with a special mobile mapping system (MMS) is required, and the process consists of three steps: data acquisition to collect the mapping data, data accumulation to aggregate the features collected by the mapping vehicle, and data validation to manually refine and confirm the map. Moreover, HD maps are dynamic and need to be updated promptly as the environment changes. Some HD map suppliers cooperate with car manufacturers to acquire the latest map data from smart cars, but this greatly increases the on-board processing burden of the vehicles.

The size of HD map data is very large because of its high accuracy and rich geometric and semantic information, creating difficulties in transmitting and storing HD maps. Maps are usually provided by cloud services, and only the small area of the map around the vehicle is downloaded. The amount of data that needs to be downloaded for a 3D HD map with centimeter-level accuracy can reach 3-4 Gb/km. This not only imposes a delay in downloading data from the cloud, but also places a heavy burden on the backbone network. Because of its inherent geographic localization, EIS will play an important role in smart vehicle HD mapping, and different edge-assisted methods can be used. Edge caching can help with HD map dissemination and map data aggregation. By leveraging locally cached data, edge computing can help with map construction and map change detection. Edge servers can also coordinate vehicles for crowd-sourced mapping across a region. In this way, more efficient HD mapping can be achieved by storing and processing data locally and constructing maps where they are needed.
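A rough calculation shows why streaming such a map from a distant cloud is demanding: at the data densities quoted above, the sustained downlink rate grows with vehicle speed. The driving speed in the sketch below is an illustrative assumption.

```python
def required_downlink_mbps(map_gbit_per_km: float, speed_kmh: float) -> float:
    """Sustained downlink rate needed to keep streaming HD-map data at a given speed."""
    gbit_per_hour = map_gbit_per_km * speed_kmh   # Gb consumed per hour of driving
    return gbit_per_hour * 1000 / 3600            # Gb/h -> Mb/s

# Illustrative assumption: highway driving at 100 km/h.
for density_gbit_per_km in (3, 4):
    rate = required_downlink_mbps(density_gbit_per_km, 100)
    print(f"{density_gbit_per_km} Gb/km at 100 km/h -> ~{rate:.0f} Mb/s sustained")
```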

3. Simultaneous Localization and Mapping

Map-based positioning is effective for driving on roads that do not change frequently. However, if drastic changes occur, the loss of accuracy may affect driving safety. SLAM involves the simultaneous estimation of the vehicle state and the construction of a map of the environment. It does not rely on a priori information, allowing the vehicle to continuously observe the environment and easily adapt to new situations. To achieve full autonomy, intelligent vehicles must be able to perform accurate SLAM in their environment. SLAM is considered a key enabling technology for autonomous driving, and SLAM-based approaches were already used by vehicles in the DARPA Urban Challenge in 2007. While many SLAM algorithms have been developed, they are primarily designed for indoor, highly structured environments. Self-driving cars need to operate in outdoor road environments with varying lighting, so faster and more efficient algorithms are needed. In particular, the computational demands of SLAM for autonomous driving are highly intensive: one hour of driving can generate 1 TB of data, and interpreting 1 TB of collected data, even with high computational power, can take two days to yield usable navigation data. In addition, for real-time execution the latency must be less than 10 ms, which puts a lot of pressure on the on-board computation.
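A full SLAM pipeline is far beyond a short snippet, but the predict/update loop at the core of its state estimation can be illustrated with a toy one-dimensional Kalman filter that tracks the vehicle's position from an odometry-style motion model and noisy position measurements; all noise parameters below are illustrative assumptions.

```python
import random

# Toy 1-D Kalman filter: the predict/update loop at the heart of SLAM-style
# state estimation, here tracking only the vehicle's position along a road.
x_est, p_est = 0.0, 1.0          # position estimate (m) and its variance
q, r = 0.05, 0.5                 # assumed process / measurement noise variances
true_x, speed, dt = 0.0, 10.0, 0.1

for _ in range(20):
    true_x += speed * dt                          # the vehicle actually moves
    # Predict: propagate the estimate with the motion (odometry) model.
    x_est += speed * dt
    p_est += q
    # Update: correct with a noisy position observation (e.g. map matching).
    z = true_x + random.gauss(0.0, r ** 0.5)
    k = p_est / (p_est + r)                       # Kalman gain
    x_est += k * (z - x_est)
    p_est *= 1.0 - k

print(f"true position {true_x:.1f} m, estimate {x_est:.2f} m (variance {p_est:.3f})")
```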

Although cloud-based SLAM algorithms have been proposed to reduce the computational burden on vehicles, their propagation latency cannot meet the real-time execution requirements. This challenge can be solved by edge computing platforms, which can help handle some of the computationally intensive subroutines. Multi-vehicle SLAM with inter-vehicle cooperation also helps to improve the performance of SLAM.

Founded in August 2020, WIMI Hologram Academy is dedicated to holographic AI vision exploration and researches basic science and innovative technologies, driven by human vision. The Holographic Science Innovation Center, in partnership with WIMI Hologram Academy, is committed to exploring the unknown technology of holographic AI vision, attracting, gathering, and integrating relevant global resources and superior forces, promoting comprehensive innovation with scientific and technological innovation as the core, and carrying out basic science and innovative technology research.

Contacts
Holographic Science Innovation Center
Email: pr@holo-science.com