Market Research Report
Product Code: 1482386
China Autonomous Driving Simulation Industry Report, 2024
This report investigates and analyzes China's autonomous driving simulation industry, covering simulation technologies, Chinese and international solution providers, and simulation testing trends.
Autonomous Driving Simulation Research: Three Trends of Simulation Favoring the Implementation of High-level Intelligent Driving.
On November 17, 2023, the Ministry of Industry and Information Technology and three other ministries issued the Notice on Piloting Access and On-road Passage of Intelligent Connected Vehicles. Since then, many OEMs, including BYD, BMW, IM, Mercedes-Benz, Deepal, Avatr, ARCFOX, AITO, Jiyue and GAC Aion, have obtained highway or urban L3 autonomous driving test licenses. As the adoption of high-level intelligent driving functions, represented by urban NOA, accelerates, L3 and above autonomous driving systems must be safe and robust enough to handle the countless edge/long-tail cases found in urban areas.
The commercialization of L3 intelligent driving systems requires more than one billion kilometers of test mileage, yet actual road tests are costly and time-consuming and offer low use-case coverage. Simulation testing addresses this problem quickly and at low cost. Xpeng, for example, in addition to the road data contributed by car owners every day, builds extreme scenarios in virtual space and combines them with simulation so that its intelligent driving system can learn from and understand them. By the end of 2023, Xpeng's simulation mileage had reached 122 million kilometers.
In the "three-pillar" test approach for intelligent driving, simulation testing reproduces different traffic scenes, road conditions, weather and illumination, and abnormal situations in a virtual environment to evaluate the functions, responses and decision-making of an autonomous driving system under each circumstance.
As the figure above shows, an autonomous driving simulation platform should support traffic scene simulation (static scene restoration and dynamic scene simulation), environment-perception sensor simulation (modeling and simulation of sensors such as camera, LiDAR, radar and GPS/IMU), vehicle dynamics simulation and so on, in order to cover simulation tests ranging from perception to control. Depending on the object under test, the platform enables in-the-loop tests such as model in the loop (MIL), software in the loop (SIL), hardware in the loop (HIL), driver in the loop (DIL) and vehicle in the loop (VIL). At present, simulation test companies vary widely in capabilities, as shown in the table below.
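The five in-the-loop test levels differ mainly in which parts of the system are physically real rather than simulated. A minimal sketch of that progression (the component mapping is a simplification for illustration, not any vendor's taxonomy):

```python
from enum import Enum


class XILLevel(Enum):
    """In-the-loop test levels, ordered from pure model to real vehicle."""
    MIL = "model in the loop"     # algorithm model only, no target code
    SIL = "software in the loop"  # compiled software against a simulated plant
    HIL = "hardware in the loop"  # real ECU/domain controller, simulated sensors
    DIL = "driver in the loop"    # human driver in a simulator
    VIL = "vehicle in the loop"   # real vehicle in a virtual environment


def real_components(level: XILLevel) -> set:
    """Which parts of the system are physically real at each level
    (a simplified mapping; real programs mix these boundaries)."""
    mapping = {
        XILLevel.MIL: set(),
        XILLevel.SIL: {"software"},
        XILLevel.HIL: {"software", "controller hardware"},
        XILLevel.DIL: {"software", "controller hardware", "driver"},
        XILLevel.VIL: {"software", "controller hardware", "vehicle"},
    }
    return mapping[level]
```

Moving down the list, each level adds physical fidelity at the cost of test throughput, which is why mature programs run all five in sequence.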
Trend 1: Autonomous driving simulation testing has entered a stage of high-fidelity, high-accuracy scene restoration.
Across the whole "perception-prediction-decision-planning-control" chain, perception corresponds to the various sensors that collect information about the vehicle's external environment, such as traffic flow, road conditions, weather, illumination and abnormalities. It mainly involves camera simulation, LiDAR simulation, radar simulation and positioning simulation (GPS, IMU).
At present, many companies are pursuing fine-grained simulation engineering: high-fidelity simulation of real road environments, dynamic traffic scenes and vehicle/pedestrian behavior, plus accurate restoration of detailed physical phenomena and dynamic sensor characteristics, in order to quickly verify the performance of autonomous driving systems and deliver comprehensive test and verification reports.
In PilotD Automotive's case, the fully physical sensor model based on PilotD PlenRay physical ray technology can simulate detailed physical phenomena such as multi-path reflection, refraction and interference of electromagnetic waves, as well as dynamic sensor characteristics such as detection loss rate, target resolution, detection uncertainty and "ghost" targets, achieving the high fidelity a sensor model requires. To date, its simulation restoration rate is close to 95%.
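The "ghost" targets mentioned above arise because a radar infers range from round-trip time, so an echo that bounces off a surface travels a longer path and appears as a phantom detection beyond the true target. A toy single-bounce geometry illustrates the effect (this is an illustrative model only, not PilotD's PlenRay implementation):

```python
import math


def apparent_range(path_length_m: float) -> float:
    """Radar infers range from round-trip time, so the apparent range of an
    echo is half the total propagation path length."""
    return path_length_m / 2.0


def multipath_ghost_range(radar_h: float, target_h: float, ground_dist: float) -> float:
    """Toy model: signal leaves the radar, bounces once off a flat ground
    plane, hits the target, and returns via the direct path. Returns the
    apparent range of the resulting ghost detection (meters)."""
    # Mirror the target below the ground plane to unfold the bounce path
    # into a straight line.
    bounce_path = math.hypot(ground_dist, radar_h + target_h)  # radar -> ground -> target
    direct_path = math.hypot(ground_dist, radar_h - target_h)  # radar -> target
    # Round trip: out via the bounce, back via the direct path.
    return apparent_range(bounce_path + direct_path)
```

Because the bounce path is always longer than the direct path, the ghost always appears farther away than the real target, which is exactly the artifact a high-fidelity sensor model must reproduce so perception algorithms learn to reject it.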
NVIDIA DRIVE Sim(TM), built on NVIDIA Omniverse, is an end-to-end simulation platform offering physically based, high-fidelity multi-sensor simulation. It can generate numerous real-world digital-twin scenes. The LiDAR models of HESAI and RoboSense have been integrated into NVIDIA DRIVE Sim to simulate LiDAR behavior such as beam control, user-defined scanning modes and resolution, and to generate synthetic datasets. Users such as OEMs and autonomous driving solution providers can call these LiDAR models directly through DRIVE Sim for R&D or testing.
Similarly, the 3D scene and high-precision physical sensor simulation of dSPACE AURELION generates highly realistic raw data for radar, LiDAR and cameras in real time. AURELION models the impact of materials on radar echoes and applies multi-path ray tracing to reproduce real-world-like measurement effects (e.g. ghost-target effects). For radar, dSPACE also offers DARTS, which simulates radar echoes over the air in real time.
When simulating the vehicle's perception of its environment, the interaction between vehicles and pedestrians is easy to overlook. RisenLighten's Qianxing simulation platform, for example, has added rich, realistic pedestrian models that support user-defined pedestrian micro-trajectories and batch generation of pedestrians. When editing scenes, users can reproduce the crowded or sparse pedestrian distributions seen in reality, and can also build complex long-tail scenes, such as pedestrians wandering randomly, pedestrians suddenly appearing ahead, yielding between pedestrians and vehicles, and right-of-way disputes, to test the overall performance of the autonomous driving system. The platform also provides different pedestrian behavior style models, covering scenarios such as human-vehicle interaction, road crossing and oblique crossing at intersections, to simulate an intelligent pedestrian traffic flow. To diversify driver behavior, the platform models three driving styles (conservative, conventional and aggressive drivers) and draws each parameter from a probability distribution, so that the driving behavior of vehicles in the environment is diversified and randomized.
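Drawing each driver parameter from a per-style probability distribution, as described above, can be sketched in a few lines. The style names follow the text, but the specific parameters, means and spreads here are invented for illustration, not RisenLighten's actual values:

```python
import random

# Illustrative (mean, std) pairs per driving style; values are assumptions
# made up for this sketch, not the platform's real calibration.
DRIVER_STYLES = {
    "conservative": {"desired_speed_kph": (50, 5),  "time_headway_s": (2.5, 0.3)},
    "conventional": {"desired_speed_kph": (60, 8),  "time_headway_s": (1.8, 0.3)},
    "aggressive":   {"desired_speed_kph": (75, 10), "time_headway_s": (1.0, 0.2)},
}


def sample_driver(style: str, rng: random.Random) -> dict:
    """Draw one background driver's parameters from the style's normal
    distributions, so traffic behavior is diversified and randomized."""
    params = {}
    for name, (mean, std) in DRIVER_STYLES[style].items():
        # Clamp to a small positive floor to keep sampled values physical.
        params[name] = max(0.1, rng.gauss(mean, std))
    return params
```

Seeding the generator per scenario keeps runs reproducible while still giving every spawned vehicle its own randomized behavior.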
In addition, concurrent multi-sensor simulation tests greatly improve the R&D and testing efficiency of perception algorithms. In engineering practice, 51Sim and VCARSYSTEM cooperated in May 2023 to close the loop from SIL to HIL for domain controllers in China's autonomous driving tests, fully localizing domain controller in-the-loop simulation testing. In this domestic Journey 5-based domain controller in-the-loop solution, the intelligent driving data reinjection system independently developed by VCARSYSTEM supports simultaneous injection of sensor data from multiple high-definition cameras, LiDAR, radar, ultrasonic radar, GNSS & IMU and so on, and easily reproduces specific scenes and environments via 51Sim-One, an autonomous driving simulation test platform.
Trend 2: Automatic generation and scene generalization are essential.
At present, building corner-case scenes is a major challenge for the industry. The very significance of simulation testing lies in reproducing scenes that actual road tests can hardly cover, such as high-risk working conditions, extreme weather, complex traffic environments and edge events. For large-scale testing of safety-critical scenes in particular, AI-based automated simulation technology is needed to cover more scenarios.
Coverage-based testing is a more detailed and comprehensive safety test method for autonomous driving. It focuses on the quality of test coverage, that is, whether the system has been exposed to the various situations and scenarios it may encounter. By defining a range of test cases and test scenes, this method ensures that autonomous driving systems are tested across varied road conditions, traffic conditions and abnormalities. Coverage can span changes in road conditions, traffic behaviors, special weather, emergencies and more.
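The simplest quantitative form of the coverage idea above is a parameter grid: enumerate the discrete values of each factor (road type, weather, traffic behavior, ...) and measure what fraction of the cross product the executed test cases hit. A minimal sketch, with factor names chosen for illustration:

```python
from itertools import product


def scenario_coverage(executed: set, dimensions: dict) -> float:
    """Fraction of the scenario parameter grid exercised by executed tests.

    `dimensions` maps each factor (road, weather, ...) to its discrete
    values; an executed test case is a tuple with one value per factor,
    in the dict's key order.
    """
    grid = set(product(*dimensions.values()))
    return len(executed & grid) / len(grid)
```

For example, with two road types and three weather conditions, the grid has six cells, and three executed combinations give 50% coverage:

```python
dims = {"road": ["urban", "highway"], "weather": ["clear", "rain", "fog"]}
done = {("urban", "clear"), ("urban", "rain"), ("highway", "clear")}
scenario_coverage(done, dims)  # -> 0.5
```

Real coverage models also weight cells by risk and handle continuous parameters via binning, which this sketch omits.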
Moreover, AI technology and large language models are gradually being integrated into simulation testing, playing an ever larger role in automatic scene generation and automatic annotation, accelerating the construction of scene libraries, reducing the cost and technical threshold of simulation testing, and shortening the vehicle development cycle.
For natural language interaction, 51WORLD's AIGC-Scenario Copilot supports fully natural-language operation. Without tedious manual editing or code, it needs only a scene description, for example, "add an action, first change the lane to the right, and then slow down to 0". Using an AI large language model, it can generate an autonomous driving simulation test scene conforming to the OpenSCENARIO standard, and can also generate unknown dangerous scenes to extend the boundary of simulation testing.
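The pipeline described above (free-form description in, standards-conformant scenario out) can be caricatured with a rule-based stand-in for the LLM step. This is a toy sketch, not 51WORLD's product: the phrase-to-action rules are hand-written, and the emitted XML is only loosely modeled on OpenSCENARIO, whose real files carry far more structure (Storyboard, Acts, Events, parameter declarations):

```python
import xml.etree.ElementTree as ET


def description_to_actions(text: str) -> list:
    """Toy rule-based stand-in for the LLM step: map phrases in a scene
    description to action names loosely modeled on OpenSCENARIO actions."""
    rules = {
        "change the lane to the right": "LaneChangeAction(target=+1)",
        "slow down to 0": "SpeedAction(target=0)",
    }
    return [action for phrase, action in rules.items() if phrase in text]


def to_scenario_xml(actions: list) -> str:
    """Emit a minimal XML fragment wrapping the actions in a Maneuver."""
    root = ET.Element("Maneuver", name="generated")
    for i, action in enumerate(actions):
        ET.SubElement(root, "Event", name=f"event{i}", action=action)
    return ET.tostring(root, encoding="unicode")
```

The value of the LLM in the real system is precisely that it replaces the brittle `rules` table with open-ended language understanding while still targeting the fixed OpenSCENARIO schema.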
In addition, Huawei Pangu Models offer the following capabilities for autonomous driving scene generation:
Scene understanding: replaces manual annotation and classification; 10,000 video clips can be processed in minutes.
Scene generation: applications such as vehicle type change, lane change and scene combination rendering are realized through NeRF technology.
Pre-annotation: replaces manual annotation and supports 2D, 3D and 4D automatic annotation, with an accuracy rate of over 90%.
Multimodal retrieval: supports multi-dimensional retrieval such as text-to-image and image-to-image search, achieving minute-level retrieval across millions of images.
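Multimodal retrieval of the kind listed above (text-to-image, image-to-image) generally rests on embedding similarity: a multimodal encoder maps queries and images into one vector space, and retrieval ranks images by cosine similarity. A brute-force sketch over toy vectors, assuming the embeddings are precomputed (production systems use an approximate-nearest-neighbor index to reach minute-level retrieval over millions of images):

```python
import math


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def search_by_text(query_vec: list, image_vecs: dict, top_k: int = 3) -> list:
    """Rank stored image embeddings by similarity to a query embedding and
    return the top_k image ids; brute force for illustration only."""
    ranked = sorted(image_vecs,
                    key=lambda k: cosine(query_vec, image_vecs[k]),
                    reverse=True)
    return ranked[:top_k]
```

The same function serves image-to-image search by passing an image's own embedding as the query vector.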
In addition to the simulation platforms and scene library generalization capabilities mentioned above, simulation evaluation systems are also essential to the commercialization of autonomous driving technology. Simulation evaluation refers to evaluating and optimizing every aspect of an autonomous driving system through simulation testing to ensure safe, reliable and efficient operation on actual roads. It mainly includes evaluation of the autonomous driving system itself and evaluation of the simulation test system; the latter covers scenario coverage, scene realism, scene effectiveness and simulation efficiency.
Trend 3: Capitalization and sharing of scene library data help reduce the cost and improve the efficiency of high-level autonomous driving training and testing.
In simulation testing, in addition to automatic scene generation based on road data (dSPACE Autera, the NI data collection solution, VI-Grade AutoHawk, etc.), an all-scenario synthetic-data simulation material library helps developers continuously train, test and verify autonomous driving systems across massive driving scenes, especially safety-critical ones, improving algorithm iteration efficiency and the efficiency and depth of closed-loop testing.
For example, the all-scenario synthetic-data simulation library of OASIS DATA, Oasis' autonomous driving data platform, covers common traffic participants, obstacles, road facilities and other traffic environment elements. Combined with physical sensor simulation models, it generates multimodal, high-fidelity, accurately annotated simulation materials at scale. In terms of generation efficiency, it can produce up to 100,000 frames per day, saving more than 90% of data collection and annotation costs.
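The reason synthetic data cuts annotation cost, as claimed above, is that the simulator already knows every object's class and pose, so ground-truth labels come for free at render time. A sketch of one frame record with such born-annotated metadata (the schema and value ranges are invented for illustration, not the OASIS DATA format):

```python
import random


def generate_synthetic_frame(frame_id: int, rng: random.Random) -> dict:
    """Emit one synthetic frame record with ground-truth labels attached at
    generation time: the renderer knows each object, so no manual labeling
    step is needed. Schema is an illustrative assumption."""
    classes = ["car", "pedestrian", "cyclist"]
    boxes = []
    for _ in range(rng.randint(1, 5)):
        x, y = rng.uniform(0, 1820), rng.uniform(0, 980)
        boxes.append({
            "class": rng.choice(classes),
            # Pixel-space box: top-left corner plus width and height.
            "bbox_xywh": [round(x), round(y),
                          rng.randint(20, 100), rng.randint(20, 100)],
        })
    return {
        "frame_id": frame_id,
        "weather": rng.choice(["clear", "rain", "fog"]),
        "annotations": boxes,
    }


# A day's batch at large scale would simply loop this generator and stream
# the records to storage; three frames suffice to show the shape.
batch = [generate_synthetic_frame(i, random.Random(i)) for i in range(3)]
```

Scene parameters (weather, object mix) are sampled per frame, which is how a material library reaches long-tail conditions that are rare in collected road data.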
In addition, 51Sim DataOne has powerful data-driven capabilities, including Dataverse and Synthverse. Wherein, Dataverse, a data platform, is capable of data cleaning, data calculation, data management, data visualization, data statistics, etc. and enables a data-driven simulation closed loop; Synthverse, a synthetic data platform, can automatically generate 3D scenes based on HD maps, restore specific scenes with high fidelity through 3D reconstruction technology, and generalize dynamic and static elements of 3D reconstructed scenes via scene editing generalization tools.
In November 2023, 51Sim, together with Volcengine, TZTEK and MXNAVI, ecosystem partners of Horizon Robotics, launched the industry's first full-chain data-driven closed-loop ecosystem solution to accelerate the mass production and application of autonomous vehicles. The solution spans chips, domain controllers, data acquisition, data processing, algorithm training, data reinjection testing, and integrated software/hardware simulation testing, aiming to solve common industry problems such as low data utilization and the difficulty of building a data-driven closed loop, and to promote the mass production and application of high-level autonomous driving. 51Sim provides the full set of virtual simulation test capabilities for this ecosystem.
Amid surging demand for autonomous driving training and test data, it is difficult to collect diverse, high-quality long-tail scenes at scale and to filter out the scenes required. In view of this, simulation solution providers such as SYNKROTRON, IAE and 51Sim have begun to turn simulation scene libraries (including standard regulatory scenes, accident scenes, natural driving scenes, dangerous/extreme scenes and reconstructed scenes) into data assets. They have also responded positively to the "Data Elements X" Three-Year Action Plan (2024-2026), helping the intelligent connected vehicle industry capitalize its data, which has been traded on the Shenzhen Data Exchange, Shanghai Data Exchange, Suzhou Big Data Exchange and Northern Big Data Trading Center.
In simulation data sharing, vehicle-cloud cooperation must also be handled well. In a conventional stand-alone R&D environment, data silos within a team have been a serious problem: how can XIL test engineers, tool-chain R&D engineers, algorithm training engineers and algorithm test engineers work together efficiently? In April 2024, 51Sim-One officially released a collaborative "cloud + terminal" integrated product. Through centralized storage and integrated design, it seamlessly connects client and cloud and supports multi-user collaboration, so that data is fully shared within a team. The client supports local integration and debugging of R&D tasks, while the cloud supports phased large-scale automated testing during algorithm development, allowing one platform to meet various needs and greatly accelerate the iteration and optimization of autonomous driving algorithms.