
WIMI Hologram Academy: Virtual Reality-Based Intelligent Monitoring System for a Robot’s Working Conditions

HONG KONG, July 26, 2022 (GLOBE NEWSWIRE) — WIMI Hologram Academy, working in partnership with the Holographic Science Innovation Center, has written a new technical article describing their exploration of a VR-based intelligent monitoring system for a robot’s working conditions. This article follows below:

Accurate monitoring of a robot’s working status can improve the robot’s autonomous operation. In this paper, we design a virtual reality-based intelligent monitoring system for a robot’s working status. The on-site monitoring module collects information about the robot and its working environment, and the system transmits the collected information to the remote terminal monitoring module over a communication network. The remote terminal monitoring module uses a template matching algorithm to locate the elements in the scene and build a 3D virtual scene model. The system renders and bakes the established model in modeling software, imports it into the virtual reality platform, and then uses the platform’s SDK secondary development package to implement human-computer interaction. The intelligent monitoring unit acquires images from the 3D virtual scene model, pre-processes them, extracts texture features from the pre-processed images, and feeds the extracted features into a support vector machine to achieve intelligent monitoring of the robot’s working status. System tests show that, using the established 3D virtual model of the robot and its working environment, the system monitors the robot’s working status with 99% accuracy. Scientists from WIMI Hologram Academy of WIMI Hologram Cloud Inc. (NASDAQ: WIMI) discussed in detail the VR-based technology adopted in this intelligent monitoring system for a robot’s working conditions.

Robotics combines many technologies, including computer science, electronics, control theory, sensor technology, artificial intelligence, and mechanics. The birth of robotics reflects the direction of development of production systems as well as of machine evolution, and robotics has become one of the most rapidly developing and most widely applied high technologies. Robots are now used in many fields such as education, aviation, and industry, and people’s requirements for robots are steadily increasing. Autonomous decision-making capability is the focus of research on robot intelligence: intelligent robots can help humans accomplish dangerous tasks and tasks beyond human limits, which makes real-time monitoring of the robot’s working status extremely important.

Robot monitoring information contains a large amount of real-time data. The data are large and complex, which makes it difficult for the monitor to determine the robot’s working status. In this study, virtual reality technology is applied to monitoring the robot’s working status. Realistic 3D images display the dynamic information of the robot’s working status, enabling real-time monitoring, improving the effect of remote intelligent monitoring of the robot, and reducing the workload of the personnel monitoring the robot’s working status.

Virtual reality technology uses computers to create 3D virtual worlds, simulating real-world scenes by reproducing human hearing, vision, touch, and other sensory perceptions and giving users an immersive experience. Virtual reality combines multimedia technology, human-computer interaction technology, simulation technology, and many other real-time technologies, opening up new research areas for simulation systems and human-computer interaction.

In recent years there have been numerous studies on intelligent robots. Home health monitoring systems have been designed and developed on intelligent-robot platforms, applying intelligent robots to home health monitoring with high applicability. There is also research on intelligent picking-robot automation systems based on interactive video and audio technology, which uses interactive audio and video to control the picking robot and complete the picking work. The virtual reality-based intelligent monitoring system for the robot’s working state studied in this paper applies virtual reality technology to intelligent monitoring of the robot’s working state, using the established 3D virtual model of the robot’s working environment to monitor the working state intelligently and avoiding the low monitoring accuracy caused by noise and environmental interference during robot operation. The system uses the intelligent monitoring results to improve the applicability of robot work.

1. VR-based intelligent monitoring system for robot’s working conditions

1.1 General system structure

This study designs a virtual reality-based intelligent monitoring system for the robot’s working status that collects information about the robot and its working environment in real time. The monitor can use the virtual display interface to access the robot in the virtual scene and realize intelligent monitoring and control of the robot’s working status. The system comprises three parts: the on-site monitoring module, data transmission, and the remote monitoring terminal module. The on-site monitoring module collects images of the robot and its working environment. The communication network carries information between the on-site monitoring module and the remote monitoring terminal module, transmitting images and information from the robot’s work site. The remote monitoring terminal module establishes a three-dimensional virtual scene through which the operator interacts with the robot, obtains accurate, real-time information about the robot and its working environment via the interactive interface, and collects images of the robot and its working environment. A support vector machine, a machine learning algorithm, then discriminates the robot’s working state and produces the intelligent monitoring result, on the basis of which the monitor can issue control commands.
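To make the data flow between the three modules concrete, the following Python sketch outlines one monitoring cycle under the structure described above; the class and method names are illustrative placeholders, not part of the original system.

from dataclasses import dataclass
from typing import List

@dataclass
class SiteFrame:
    # Data captured by the on-site monitoring module in one cycle.
    image: bytes                 # raw camera frame of the robot and its environment
    joint_states: List[float]    # joint angles reported by the robot

class OnSiteMonitor:
    def capture(self) -> SiteFrame:
        # In the real system this would read the binocular cameras and the robot bus.
        return SiteFrame(image=b"", joint_states=[0.0] * 6)

class CommunicationLink:
    def transmit(self, frame: SiteFrame) -> SiteFrame:
        # Stands in for the wireless communication network between the modules.
        return frame

class RemoteTerminal:
    def update_virtual_scene(self, frame: SiteFrame) -> None:
        # Locates scene elements, refreshes the 3D virtual model, and feeds the
        # monitoring classifier (sections 1.2 and 1.3).
        pass

def monitoring_cycle(site: OnSiteMonitor, link: CommunicationLink, remote: RemoteTerminal) -> None:
    # One pass of the on-site module -> network -> remote terminal data flow.
    remote.update_virtual_scene(link.transmit(site.capture()))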

The system selects an intelligent robot as the control object, with a main control computer as the robot’s main controller and a CAN bus connecting its circuitry. The robot has six degrees of freedom (shoulder swing, shoulder rotation, elbow swing, elbow rotation, wrist rotation, and wrist swing), plus a gripper that opens and closes, to complete all work movements, and it integrates artificial intelligence technologies such as voice interaction and binocular vision.
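The article only states that the robot’s circuitry is organized around a CAN bus; as an illustration of how a joint command could be placed on such a bus, the sketch below uses the python-can package with an invented message ID and payload layout (the real protocol is not described in the source).

import can  # python-can package

ELBOW_SWING_CMD_ID = 0x201  # hypothetical CAN ID for the elbow-swing joint command

def send_elbow_swing(bus: can.BusABC, angle_deg: float) -> None:
    # Pack the target angle at 0.1-degree resolution into a 2-byte payload.
    raw = int(angle_deg * 10) & 0xFFFF
    msg = can.Message(arbitration_id=ELBOW_SWING_CMD_ID,
                      data=[raw >> 8, raw & 0xFF],
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # 'socketcan'/'can0' assume a Linux SocketCAN setup; adjust for the real hardware.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    send_elbow_swing(bus, 45.0)
    bus.shutdown()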

1.2 Virtual scene modeling

A template matching algorithm locates the robot and the various elements of the working environment to establish a 3D virtual scene model. The established model is imported into modeling software for rendering and baking, and the processed model is then imported into the virtual reality platform to implement the 3D scene interaction design. The platform’s SDK secondary development package, programmed on the VC++ development platform, extends the human-computer interaction.

Establishing the 3D virtual scene model is the foundation and core of the virtual reality-based intelligent monitoring system for the robot’s working state. The image quality of the 3D virtual scene model of the robot and its working environment affects both the interactivity and the realism of the monitoring interface. The modeling software is strong in appearance processing, while the virtual reality platform software offers realistic image quality and supports importing models from third-party software.

When the modeling software is used to build the model, the model must be fully streamlined and optimized, materials must be assigned to the 3D model, and baking and rendering must be applied so that the built model presents virtual-scene shadows and lighting effects. The finished 3D virtual scene model of the robot and its working environment is then imported into the virtual reality platform software, which provides real-time scene roaming, uses the VC++ development platform to exchange data between the underlying data and the virtual reality scene, and dynamically displays the virtual model.

1.2.1 Virtual scene element positioning

A model library of the robot work scene is established, containing 3D model information for the different elements of the work scene; the models are stored in a database. Using this model library, the work scene elements are localized with a template matching algorithm. The template is treated as a known image, and the algorithm searches the scene for targets with the same orientation and dimensional characteristics as the template, determining the template’s specific location. The template matching algorithm searches the left-eye and right-eye projection images of the binocular camera for the projection points of the same scene point, yielding the image disparity; the camera’s intrinsic parameters and the disparity give the distance between a 3D point in the robot’s work scene and the camera, producing the point cloud of the 3D virtual scene. Combining the different elements in the image area yields the scale information and 3D position of each element of the robot work scene, and the real-time information of each element is sent to the computer so that the different elements of the robot scene stay synchronized in real time.
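The article does not publish its matching code; the OpenCV sketch below illustrates the general idea under stated assumptions: normalized cross-correlation template matching locates the same element in the left and right images, the horizontal difference gives the disparity, and the distance follows from the pinhole stereo model depth = f x B / disparity, with focal length f (in pixels) and baseline B taken as known calibration values.

import cv2
import numpy as np
from typing import Tuple

FOCAL_PX = 700.0    # focal length in pixels (placeholder for real calibration data)
BASELINE_M = 0.12   # distance between the left and right cameras in metres (assumed)

def locate(template: np.ndarray, image: np.ndarray) -> Tuple[int, int]:
    # Return the top-left corner of the best match of the template in the image.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # (x, y)

def element_depth(template: np.ndarray, left: np.ndarray, right: np.ndarray) -> float:
    # Estimate the distance to a scene element from its positions in both views.
    (xl, _), (xr, _) = locate(template, left), locate(template, right)
    disparity = float(xl - xr)                 # horizontal pixel difference
    if disparity <= 0:
        raise ValueError("element not resolved in both views")
    return FOCAL_PX * BASELINE_M / disparity   # pinhole stereo model: depth = f * B / d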

1.2.2 Virtual scene interaction

Virtual scene interaction is implemented through the secondary development interface. The SDK function library sets the software’s running logic, the window is configured accordingly through external controls, and the 3D scene and scripts are called from the SDK program body to achieve interactive control. The system embeds the 3D display window, and 3D model properties are controlled by SDK functions, with operations handled through event callback functions. The upper computer software scans the robot and its state information and uploads it over the wireless communication network. After the robot state information is obtained, the VRP virtual scene is invoked, the robot’s motion position is transmitted to the upper computer, and the mobile camera script is called through the SDK program to realize human-computer interaction in the virtual scene.
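The VRP SDK’s actual API is not shown in the article; the generic Python sketch below only illustrates the event-callback pattern described above, with entirely hypothetical names standing in for the SDK’s functions.

from typing import Callable, Dict, List

class SceneEventBus:
    # Minimal callback registry standing in for the SDK's event mechanism.
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for handler in self._handlers.get(event, []):
            handler(payload)

bus = SceneEventBus()

# When new robot state arrives over the wireless link, move the virtual robot
# model and re-aim the scene camera, mirroring the sequence described above.
bus.on("robot_state", lambda s: print("move virtual robot to", s["joints"]))
bus.on("robot_state", lambda s: print("re-aim scene camera at", s["position"]))

bus.emit("robot_state", {"joints": [0.0, 30.0, 45.0], "position": (1.2, 0.0, 0.5)})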

1.3 Intelligent monitoring algorithm for robot’s working status

The system’s intelligent monitoring unit acquires images of the robot’s working status in the 3D virtual scene, pre-processes the acquired images (e.g., denoising and binarization), and extracts texture features from the pre-processed images. The extracted features are input into a support vector machine classifier to achieve intelligent monitoring of the robot’s working state, and the classifier can run in real time. When the classification result indicates an abnormal working state, a real-time warning message is issued, which helps the monitor make quick decisions about the robot’s working state. The support vector machine uses a nonlinear mapping to project the sample vectors into a high-dimensional feature space, where accurate classification of the samples is achieved.
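The article names the pipeline stages but not the specific features or parameters; the following sketch uses OpenCV for denoising and binarization, simple first-order statistical texture features, and scikit-learn’s SVC as stand-ins for the components described above.

import cv2
import numpy as np
from typing import List
from sklearn.svm import SVC

def preprocess(gray: np.ndarray) -> np.ndarray:
    # Denoise and binarize an 8-bit grayscale frame taken from the virtual scene.
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def texture_features(image: np.ndarray) -> np.ndarray:
    # Simple first-order texture statistics: mean, variance, energy, entropy.
    hist = cv2.calcHist([image], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([image.mean(), image.var(), np.sum(p ** 2), entropy])

def train_monitor(frames: List[np.ndarray], labels: List[int]) -> SVC:
    # Fit an RBF-kernel SVM that maps texture features to working-state labels.
    X = np.stack([texture_features(preprocess(f)) for f in frames])
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X, labels)
    return clf

def monitor(clf: SVC, frame: np.ndarray) -> int:
    # Classify one new frame; a non-normal label would trigger a warning message.
    return int(clf.predict(texture_features(preprocess(frame)).reshape(1, -1))[0])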

2. Testing and analysis of system performance

A picking robot with six degrees of freedom is selected as the experimental object. This robot operates outdoors, where the environment is highly variable and easily affected by light, noise, and other complex conditions, which increases the difficulty of monitoring the robot’s working state. The system in this paper is used to establish a 3D virtual scene of the robot and its working environment and to render 3D scene images of different sizes in real time; the rendering performance of the system is evaluated by two measures: the waiting time for image texture operations and the waiting time when textures are added.
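The article does not define how these two waiting times are measured; a minimal timing harness such as the one below could record them, with render_scene standing in as a hypothetical placeholder for the platform’s actual rendering call.

import time
from statistics import mean

def timed(fn, *args, **kwargs):
    # Return (result, elapsed seconds) for a single call.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def measure_wait_times(render_scene, scene_sizes, repeats=10):
    # Average waiting time per scene size, e.g. for texture operations on frames
    # of different dimensions.
    return {size: mean(timed(render_scene, size)[1] for _ in range(repeats))
            for size in scene_sizes}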

The test results indicate that the system achieves real-time rendering of 3D scenes with little waiting time, reducing the cost of constructing the 3D virtual scene. The texture rendering speed of the 3D virtual scene images remains stable, and the rendering quality remains high even when the size of the 3D scene changes. Through virtual reality technology, the system builds a well-rendered model of the robot and its working scene, providing a good basis for intelligent monitoring of the robot’s working status.

In this paper, a support vector machine is used as the intelligent method for monitoring the robot’s working state. The choice of kernel function is extremely important for the accuracy of the monitoring results and strongly affects monitoring performance. The output error and output time of the support vector machine are calculated for three kernel functions over different numbers of training steps. The radial basis function (RBF) kernel yields the lowest output error and the shortest output time across the different training steps, so the RBF kernel gives the best results.
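As an illustration of the kernel comparison described above, the sketch below evaluates linear, polynomial, and RBF kernels by cross-validated error and training time on synthetic stand-in data (the paper’s dataset and training-step settings are not available).

import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the robot working-state feature set (4 texture features).
X, y = make_classification(n_samples=800, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale")
    start = time.perf_counter()
    scores = cross_val_score(clf, X, y, cv=5)
    elapsed = time.perf_counter() - start
    print(f"{kernel:>6}: error={1 - scores.mean():.3f}  time={elapsed:.2f}s")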

The radial basis function kernel is therefore selected as the kernel of the support vector machine used in this paper. Within 30 minutes of robot operation, the intelligent monitoring results are very close to the robot’s actual working status, which shows that the system can effectively monitor the robot’s working status; the monitor can make corresponding decisions based on the monitoring results and improve the robot’s working performance. The accuracy of 800 rounds of intelligent monitoring of the robot’s working state with this paper’s system is counted and compared with two systems from the literature. The monitoring accuracy of this paper’s system is higher than that of the other two systems across the different numbers of monitoring runs, exceeding 99%. The system in this paper offers fast convergence, short training time, strong generalization ability, and high monitoring accuracy. Virtual reality technology is used to establish a 3D virtual scene that captures information about the robot and its work site, and the collected information is fed to the support vector machine, which achieves accurate monitoring of the robot’s working state with high intelligent monitoring performance. The system can thus accurately monitor the robot’s working status and raise an alert quickly when the robot behaves abnormally, so the monitor can detect the abnormality immediately and adjust the strategy in time, saving considerable human and material resources and improving the intelligence and automation of the robot’s work.

3. Conclusion

Applying virtual reality technology to intelligent monitoring of the robot’s working state enables accurate monitoring of that state. Virtual reality technology is used to establish a model of the robot and its working environment scene and to extract working-state information; a support vector machine then classifies the working state accurately and outputs the final result, realizing intelligent monitoring of the robot’s working state. The virtual-scene-based approach is insensitive to external interference, and even under serious interference it retains high intelligent monitoring performance and high monitoring effectiveness.

Founded in August 2020, WIMI Hologram Academy is dedicated to holographic AI vision exploration, and conducts research on basic science and innovative technologies, driven by human vision. The Holographic Science Innovation Center, in partnership with WIMI Hologram Academy, is committed to exploring the unknown technology of holographic AI vision, attracting, gathering and integrating relevant global resources and superior forces, promoting comprehensive innovation with scientific and technological innovation as the core, and carrying out basic science and innovative technology research.

Contacts

Holographic Science Innovation Center

Email: pr@holo-science.com
