Market Research Report
Product Code: 1583748


Autonomous Driving Data Closed Loop Research Report, 2024

Publication date: | Publisher: ResearchInChina | English, 323 pages | Delivery time: within 1-2 business days



This report surveys and analyzes China's automotive industry, providing information on the development of the autonomous driving data closed loop.


Product Code: FZQ015

Data closed loop research: as intelligent driving evolves from data-driven to cognition-driven, what changes does the data closed loop need?

As Software 2.0 and end-to-end technology are introduced into autonomous driving, the intelligent driving development model has evolved from rule-based sub-task modules to the data-driven AI 2.0 stage, and is gradually developing towards artificial general intelligence (AGI), namely AI 3.0.

At Auto China 2024, SenseAuto previewed DriveAGI, its next-generation autonomous driving technology, which builds on large multimodal models to improve and upgrade end-to-end intelligent driving solutions. DriveAGI evolves autonomous driving foundation models from data-driven to cognition-driven: it goes beyond the concept of a driver, deepens understanding of the world, and offers stronger reasoning, decision-making and interaction capabilities. Among current autonomous driving approaches, it is the technical solution closest to human thinking patterns, best able to understand human intentions, and best equipped to cope with difficult driving scenarios.

The data closed loop is indispensable to autonomous driving R&D after AI 1.0, but at different stages of AI application in autonomous driving, the requirements on each link of the data closed loop vary greatly.

What changes will the full-stack model development of intelligent driving systems bring to the data closed loop?

1. The data collection mode has shifted from large-scale collection by dedicated collection vehicles to long-tail scenario collection by production vehicles, with more emphasis on high-quality data.

From the perspective of data flow, intelligent driving data is currently collected in many ways, including dedicated collection vehicles, data collection and backhaul from production vehicles, roadside data collection and fusion, low-altitude drone traffic collection, and simulated synthetic data. The goal is maximum coverage, the most generalized scenarios, and the most complete data types, ultimately fulfilling the three elements of data: quantity, completeness, and accuracy. Among these, data collection from production vehicles is the mainstream mode.

OEMs keep accumulating massive amounts of intelligent driving data from production vehicles and extract effective, high-quality data to train AI algorithms. For example, Li Auto has scored the driving behaviors of more than 800,000 car owners; about 3% of them scored above 90 and can be called "experienced drivers." The driving data of these experienced fleet drivers is the fuel for training end-to-end models. By the end of 2024, Li Auto's end-to-end model is expected to have learned from over 5 million kilometers of driving.
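The extraction step described above can be sketched in a few lines. This is a hypothetical illustration of mining "experienced driver" trips for end-to-end training; the record fields, the scores, and the 90-point threshold are assumptions drawn from the Li Auto example, not an actual pipeline.

```python
# Sketch: keep only trips from high-scoring drivers as training data.
# Fields and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Trip:
    driver_id: str
    driver_score: float   # behavior score on a 0-100 scale
    km: float             # trip length in kilometers

def select_training_trips(trips, threshold=90.0):
    """Keep only trips from drivers scoring above the threshold."""
    return [t for t in trips if t.driver_score > threshold]

trips = [
    Trip("a", 95.0, 12.3),
    Trip("b", 72.0, 8.1),
    Trip("c", 91.5, 30.0),
]
selected = select_training_trips(trips)
total_km = sum(t.km for t in selected)
print(len(selected), round(total_km, 1))  # 2 42.3
```

In a real fleet pipeline the filter would run server-side over telemetry, but the gating logic is the same: score first, then harvest only the top slice.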

So, with sufficient data, how can we fully extract effective scene data and mine higher-quality training data? The following examples illustrate:

In terms of data compression, the data collected by vehicles largely consists of environmental perception data from vehicle systems and various sensors. Before being used for analysis or model training, the data must be strictly preprocessed and cleaned to ensure its quality and consistency. Vehicle data may come from different sensors and devices, each with its own specific data format. High-definition intelligent driving scene data stored in RAW format (i.e., raw camera data that has not been processed by the ISP algorithm) will become a trend for high-quality scene data in the future. In Vcarsystem's case, its "camera-based RAW data compression and collection solution" not only improves data collection efficiency, but also maximizes the integrity of the raw data, providing a reliable foundation for subsequent processing and analysis. Compared with traditional replay of post-ISP compressed data, replay of compressed RAW data avoids the information loss of ISP processing and restores the raw image data more accurately, improving the accuracy of algorithm training and the performance of the intelligent driving system.
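The lossless-versus-lossy distinction above can be demonstrated in miniature. This sketch is not Vcarsystem's actual pipeline: the 12-bit Bayer-style frame is synthetic and zlib stands in for whatever RAW codec a real system would use; the point is only that lossless RAW compression is bit-exact on replay, while ISP-style processing discards information irreversibly.

```python
# Contrast lossless RAW compression with lossy post-processing.
# Synthetic 12-bit sensor values and zlib are illustrative stand-ins.
import random
import zlib

random.seed(0)
# Fake 12-bit RAW sensor values (one 64x64 plane, flattened).
raw = [random.randrange(4096) for _ in range(64 * 64)]
payload = b"".join(v.to_bytes(2, "big") for v in raw)

compressed = zlib.compress(payload, level=9)
restored = zlib.decompress(compressed)

# Lossless: replay reproduces the original sensor data exactly.
assert restored == payload

# A lossy stand-in for ISP processing: quantize 12 bits down to 8.
# The discarded low bits cannot be recovered, which is the loss that
# RAW replay avoids.
processed = [v >> 4 for v in raw]
assert any((p << 4) != v for p, v in zip(processed, raw))
print(len(payload), len(compressed))
```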

As for data mining, data mining cases based on offline 3D point cloud foundation models deserve attention. For example, based on offline point cloud foundation models, QCraft can mine high-quality 3D data and continuously improve object recognition capabilities. Beyond that, QCraft has also built an innovative text-to-image multimodal model. Given only a natural-language text description, the model can automatically retrieve the corresponding scene images without supervision and mine many long-tail scenarios that are hard to find in ordinary data and rarely encountered in daily driving, thereby improving the efficiency of long-tail scenario mining. For example, when text descriptions such as "a large truck traveling in the rain at night" or "a person lying at the roadside" are entered, the system automatically returns the corresponding scenes, supporting targeted analysis and training.
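The retrieval mechanism behind such text-driven mining can be sketched as embedding similarity. This toy assumes hand-made three-dimensional embeddings and cosine ranking; a real system in the spirit of QCraft's would use a CLIP-style encoder over a fleet-scale scene bank.

```python
# Toy text-to-scene retrieval: rank scene images by cosine similarity
# between a query embedding and scene embeddings. Embeddings are
# hand-made stand-ins for a learned multimodal encoder.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embedding axes: [truck-ness, night-ness, rain-ness]
scene_bank = {
    "truck_night_rain.jpg": [0.9, 0.8, 0.9],
    "sedan_day_clear.jpg":  [0.1, 0.0, 0.1],
    "truck_day_clear.jpg":  [0.9, 0.1, 0.0],
}
query = [0.8, 0.9, 0.8]  # "a large truck traveling in the rain at night"

ranked = sorted(scene_bank, key=lambda k: cosine(query, scene_bank[k]),
                reverse=True)
print(ranked[0])  # truck_night_rain.jpg
```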

2. Data labeling is heading toward AI-automated, high-precision labeling; manual labeling will be needed less and less, and may no longer be needed at all in the future.

As foundation models find broad application and deep learning technology advances, the demand for data labeling has grown explosively. The performance of foundation models depends heavily on the quality of input data, so the requirements for the accuracy, consistency, and reliability of data labeling keep rising. To meet this demand, many data labeling companies have begun to develop automatic labeling functions to further improve labeling efficiency. Examples include:

Leveraging the automation capabilities of foundation models, DataBaker Technology has launched 4D-BEV, a new labeling tool that supports processing point clouds with hundreds of millions of points. It helps quickly and accurately perceive and understand the vehicle's surroundings, and combines static and dynamic perception tasks for multi-perspective, multi-frame sequential labeling of objects such as vehicles, pedestrians and road signs, providing more accurate information on object location, speed, posture and behavior. It can also capture the interactions of different objects in a scene, helping the autonomous driving system better understand road traffic conditions and make more accurate decisions and control actions. To improve labeling efficiency and accuracy, DataBaker Technology adds machine vision algorithms to 4D-BEV to automatically complete complex labeling work, enabling high-quality recognition of lane lines, curbs, stop lines, etc.

MindFlow's SEED data labeling platform supports all types of 2D, 3D, and 4D labeling in autonomous driving and other scenarios, including 2D/3D fusion, 3D point cloud segmentation, point cloud sequential frame overlay, BEV, 4D point cloud lane lines and 4D point cloud segmentation, and covers all labeling sub-scenarios of autonomous driving. Its AI algorithm labeling model incorporates AI intelligent segmentation based on the SAM segmentation model, static road adaptive segmentation, dynamic obstacle AI preprocessing, and AI interactive labeling, improving the average efficiency of data labeling in typical autonomous driving scenarios by more than 4-5 times, and by more than 10-20 times in some scenarios. In addition, MindFlow's data labeling foundation model is based on weakly supervised and semi-supervised learning, using a small amount of manually labeled data and a large amount of unlabeled data for efficient detection, segmentation, and recognition of scene objects.
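The semi-supervised idea behind such labeling models can be sketched as a pseudo-labeling loop: a model fitted on a small labeled set auto-labels only those unlabeled samples it is confident about, leaving the rest for human review. Everything here (the 1-D samples, nearest-centroid classifier, and margin threshold) is an illustrative assumption, not MindFlow's actual model.

```python
# Schematic pseudo-labeling: a tiny nearest-centroid classifier labels
# unlabeled 1-D samples whose confidence margin clears a threshold.
labeled = [(0.1, "car"), (0.2, "car"), (0.9, "truck"), (1.0, "truck")]
unlabeled = [0.15, 0.95, 0.55]

def centroids(samples):
    """Mean feature value per class from the labeled set."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def pseudo_label(samples, cents, min_margin=0.3):
    out = []
    for x in samples:
        dists = sorted((abs(x - c), y) for y, c in cents.items())
        margin = dists[1][0] - dists[0][0]  # gap between top-2 classes
        if margin >= min_margin:            # confident -> auto-label
            out.append((x, dists[0][1]))    # else leave for human review
    return out

cents = centroids(labeled)                  # car: 0.15, truck: 0.95
auto = pseudo_label(unlabeled, cents)
print(auto)  # 0.15 -> car, 0.95 -> truck; ambiguous 0.55 is skipped
```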

Additionally, on July 27, 2024, NIO officially announced NWM (NIO World Model), China's first intelligent driving world model. As a multivariate autoregressive generative model, it can fully understand information, generate new scenes, and predict what may happen in the future. Notably, as a generative model, NWM can use a 3-second driving video as a prompt to generate a 120-second video. Through self-supervision, NWM requires no data labeling and becomes more efficient.
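The prompt-to-video pattern attributed to NWM is an autoregressive rollout: condition on a few seconds of frames, then repeatedly predict the next frame from the history. The sketch below assumes a 10 fps frame rate and uses a trivial linear extrapolator as a stand-in for the learned world model.

```python
# Schematic autoregressive rollout: ~3 s of prompt frames grow into
# ~120 s of generated frames. The "model" is a placeholder that
# extrapolates a scalar frame state.
FPS = 10  # assumed frame rate for the sketch

def predict_next(history):
    """Stand-in for the learned world model: linear extrapolation."""
    if len(history) < 2:
        return history[-1]
    return 2 * history[-1] - history[-2]

def rollout(prompt_frames, total_seconds=120):
    frames = list(prompt_frames)
    while len(frames) < total_seconds * FPS:
        frames.append(predict_next(frames))
    return frames

prompt = [float(i) for i in range(3 * FPS)]  # 3 s driving "video"
video = rollout(prompt)
print(len(video))  # 1200 frames = 120 s at 10 fps
```

Because the target of each prediction is simply the next observed frame, training such a model is self-supervised: no human labels are required, which matches the efficiency claim above.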

3. Simulation testing is becoming increasingly important in the development of intelligent driving. High accuracy and high scene-restoration capabilities are the key to improving the quality of scene coverage.

High-level intelligent driving needs to be tested in a variety of complex scenarios, which requires not only high-precision sensor perception and restoration capabilities, but also powerful 3D scene reconstruction and scene-coverage generalization capabilities.

PilotD Automotive's full physical-level sensor model can simulate detailed physical phenomena, such as multipath reflection, refraction and interference of electromagnetic waves; dynamic sensor characteristics such as detection loss rate, object resolution and measurement inaccuracy; and "ghost" phenomena, so as to achieve the high fidelity required of a sensor model. The full physical-level sensor model based on PilotD Automotive's PlenRay physical ray technology currently boasts a simulation restoration rate of over 95%.
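A minimal way to see what effects like detection loss, measurement inaccuracy, and ghost returns mean in practice is a statistical degradation model applied to ideal ranges. All rates and noise levels below are illustrative assumptions; a full physical model such as PilotD's traces actual multipath propagation rather than sampling effects statistically.

```python
# Statistical sensor-degradation sketch: drop points (detection loss),
# perturb ranges (measurement inaccuracy), and inject spurious
# "ghost" returns. Parameter values are illustrative.
import random

random.seed(42)

def degrade(ideal_ranges, loss_rate=0.05, noise_sigma=0.02,
            ghost_rate=0.01, max_range=100.0):
    out = []
    for r in ideal_ranges:
        if random.random() < loss_rate:
            continue                       # detection loss: point dropped
        out.append(r + random.gauss(0.0, noise_sigma))  # range noise
        if random.random() < ghost_rate:   # spurious multipath "ghost"
            out.append(random.uniform(0.0, max_range))
    return out

ideal = [10.0] * 1000                      # ideal returns from a wall 10 m away
measured = degrade(ideal)
print(len(measured), round(min(measured), 2), round(max(measured), 2))
```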

dSPACE's AURELION (high-precision simulation of 3D scenes and physical sensors) is a flexible sensor simulation and visualization software solution. Based on physical rendering in a game engine, it simulates pixel-level raw data of camera sensors. AURELION's radar module uses ray tracing to simulate signal-level raw data of ray-based sensors. Taking into account the effects of specific materials on LiDAR, the output point cloud contains reflectivity values close to real-world values. For each ray, it provides realistic motion distortion effects and configurable time offset values.

RisenLighten's Qianxing simulation platform adds rich, realistic pedestrian models and supports customizing pedestrians' micro-trajectories and generating pedestrians in batches. The platform also provides high-fidelity pedestrian behavior style models covering scenarios such as human-vehicle interaction, street crossing, and diagonal crossing at intersections. It models three types of drivers (conservative, conventional and aggressive) and refines their parameters by probability distribution, so as to diversify and randomize the driving behaviors of vehicles in the environment.
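Parameterizing driver styles with probability distributions, as described above, can be sketched as style-conditioned sampling. The parameter names and the Gaussian means and spreads below are illustrative guesses, not Qianxing's actual model.

```python
# Style-conditioned driver parameter sampling: each simulated agent
# draws randomized parameters from its style's distributions, so
# traffic behavior is diverse rather than identical.
import random

random.seed(7)

# (mean, std) per style: desired speed factor and time headway.
STYLES = {
    "conservative": {"speed_factor": (0.90, 0.03), "headway_s": (2.5, 0.30)},
    "conventional": {"speed_factor": (1.00, 0.05), "headway_s": (1.8, 0.25)},
    "aggressive":   {"speed_factor": (1.15, 0.07), "headway_s": (1.0, 0.20)},
}

def sample_driver(style):
    """Draw one agent's parameters from its style's distributions."""
    return {name: random.gauss(mu, sigma)
            for name, (mu, sigma) in STYLES[style].items()}

fleet = [sample_driver(random.choice(list(STYLES))) for _ in range(5)]
for d in fleet:
    print({k: round(v, 2) for k, v in d.items()})
```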

As a generative simulation model, NIO's NSim can compare each trajectory deduced by NWM with the corresponding simulation results. Previously, trajectories could only be compared with the single trajectory observed in the real world; adding NSim enables joint verification across tens of millions of simulated worlds, providing more data for NWM training. This makes the resulting intelligent driving trajectories and experience safer, more reasonable, and more efficient.

In the field of autonomous driving, end-to-end solutions have a particularly urgent need for high-fidelity scenes: because the end-to-end system must cope with a variety of complex scenarios, large volumes of video labeled with driving behaviors must be fed into training. For 3D scene reconstruction, the penetration and application of 3D Gaussian Splatting (3DGS) in the automotive industry are currently accelerating, because 3DGS performs well in rendering speed, image quality, positioning accuracy, etc., making up for the shortcomings of NeRF. Meanwhile, scenes reconstructed with 3DGS can replicate the edge scenarios (corner cases) found in real intelligent driving; through dynamic scene generalization, this improves the end-to-end intelligent driving system's ability to cope with corner cases. Examples include:

51Sim innovatively integrates 3DGS into traditional graphics rendering engines through AI algorithms, making breakthroughs in realism. The 51Sim fusion solution offers high-quality, real-time rendering. Its high-fidelity simulation scenes not only improve training quality for the autonomous driving system, but also significantly improve the authenticity of simulation, making it almost indistinguishable to the naked eye, greatly improving simulation confidence and making up for 3DGS's shortfalls in detail and generalization.

In addition, Li Auto also uses 3DGS for simulation scene reconstruction. Li Auto's intelligent driving solution consists of three systems: end-to-end (fast system) + VLM (slow system) + world model. The world model combines two technology paths, reconstruction and generation: it uses 3DGS to reconstruct real data and a generative model to offer new views. In scene reconstruction, dynamic and static elements are separated; the static environment is reconstructed, while dynamic objects are reconstructed and rendered from new views. After re-rendering, a 3D physical world is formed in which dynamic assets can be edited and adjusted arbitrarily for partial generalization of the scene. The generative model has greater generalization ability and allows weather, lighting, traffic flow and other conditions to be customized to generate new scenes that conform to real-world laws, which are used to evaluate the adaptability of the autonomous driving system under various conditions.
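The fast rendering that makes 3DGS attractive comes down to a simple compositing rule: splats are depth-sorted and alpha-blended front to back, C = Σᵢ cᵢ αᵢ Π_{j<i} (1 − αⱼ). The sketch below reduces each splat to a per-pixel (depth, color, alpha) sample for illustration; real 3DGS projects anisotropic 3D Gaussians onto the image plane first.

```python
# Toy per-pixel 3DGS-style compositing: front-to-back alpha blending
# of depth-sorted splats, with early termination once the pixel is
# effectively opaque (as real 3DGS rasterizers do).
def composite(splats):
    """splats: list of (depth, color, alpha); returns blended color."""
    color, transmittance = 0.0, 1.0
    for _, c, a in sorted(splats):          # front-to-back by depth
        color += c * a * transmittance
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:            # early termination
            break
    return color

# A brighter near splat over a dimmer far splat (grayscale stand-in).
pixel = composite([(5.0, 0.2, 0.5), (1.0, 0.8, 0.6)])
print(round(pixel, 3))  # 0.8*0.6 + 0.2*0.5*0.4 = 0.52
```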

In short, the scene constructed by combining reconstruction and generation creates a better virtual environment for learning and testing the capabilities of the autonomous driving system, enabling the system to have efficient closed-loop iteration capabilities and ensuring the safety and reliability of the system.

4. The rapid development of OEMs' full-stack self-development capabilities prompts data closed-loop technology providers to keep improving their service capabilities.

The data closed loop is divided into the perception layer and the planning and control layer, each with an independent closed-loop process. In both areas, data closed-loop technology providers are improving their service capabilities. For example:

In terms of perception, during project development the autonomous driving system version is released regularly, integrating and packaging contents such as perception, planning and control, communication, and middleware. Some intelligent driving solution providers, such as Nullmax, first release the perception part separately, test it with automated tools and testers, output specific reports, and evaluate fixes at an early stage. If the perception version has problems, there is still time to modify and retest it. This largely prevents upstream perception problems from affecting the entire system, makes it easier to locate problems and improve the system, and greatly improves the efficiency of system release and project development.

In terms of planning and control, take QCraft's case: its self-developed "joint spatio-temporal planning algorithm" considers both space and time when planning the trajectory, solving the driving path and speed simultaneously in three dimensions, rather than first solving the path separately and then solving the speed along that path to form the trajectory. Upgrading from "horizontal and vertical separation" to "horizontal and vertical combination" means that both the path and speed curves become variables in one optimization problem, yielding the optimal combination of the two.
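The advantage of joint over decoupled planning can be shown with a toy search. The candidate paths, speed profiles, and cost terms below are all illustrative assumptions, not QCraft's algorithm; the point is that committing to a path before timing is considered can miss the globally best (path, speed) pair.

```python
# Toy contrast: decoupled "path first, then speed" vs. joint search
# over (path, speed) with a coupled cost. Costs are illustrative.
import itertools

paths = {"inner": 1.0, "outer": 2.0}        # path-only cost (length)
speeds = {"slow": 1.5, "fast": 0.5}         # speed-only cost (time)

def coupled_cost(path, speed):
    # Interaction term: the short inner path conflicts with a crossing
    # vehicle over the horizon, so it incurs a penalty at any speed
    # (larger when slow). The decoupled planner cannot see this when
    # it commits to a path.
    interaction = {"slow": 5.0, "fast": 3.0}[speed] if path == "inner" else 0.0
    return paths[path] + speeds[speed] + interaction

# Decoupled: commit to the cheapest path, then pick the best speed.
seq_path = min(paths, key=paths.get)                      # "inner"
seq_speed = min(speeds, key=lambda s: coupled_cost(seq_path, s))
seq_cost = coupled_cost(seq_path, seq_speed)

# Joint: search path and speed together in one optimization.
joint = min(itertools.product(paths, speeds),
            key=lambda ps: coupled_cost(*ps))
joint_cost = coupled_cost(*joint)

print(seq_cost, joint, joint_cost)  # 4.5 ('outer', 'fast') 2.5
assert joint_cost <= seq_cost       # joint is never worse
```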

Data closed-loop technology providers generally offer OEMs and Tier 1s either complete data closed-loop solutions or separate data closed-loop products (i.e., modular tool services such as an annotation platform, replay tool, or simulation tool). OEMs with strong data governance capabilities often outsource the tool modules they are not good at and integrate them into their own data processing platforms, while OEMs with weak data governance capabilities will consider tightly coupled data closed-loop products or customized services. For example, FUGA, Freetech's new-generation tightly coupled data closed-loop platform product, has gathered more than 8 million kilometers of real mass-production data and algorithm closed-loop iteration experience from over 100 production models, achieving a more than 100-fold improvement in algorithm iteration efficiency and managing over 3,000 sets of high-value scene data fragments per month. At present, FUGA has been deployed in production vehicle projects of multiple leading OEMs, supporting daily test-data problem analysis as well as weekly data cleaning and statistical report analysis.

Table of Contents

1 Overview of Autonomous Driving Data Closed Loop

  • 1.1 Evolution of Data Closed Loop
  • 1.2 Difficulties in Building An Autonomous Driving Data Closed Loop
  • 1.3 Solution Case 1
  • 1.4 Solution Case 2
  • 1.5 Autonomous Driving Data Closed Loop Industry Chain Map
  • 1.6 Foundation of Data Closed Loop: Data Security
    • 1.6.1 Status Quo of Automotive Data Security Standards
    • 1.6.2 Data Security Risks at All Autonomous Driving Levels
    • 1.6.3 Overview of Data Security Governance
    • 1.6.4 Data Security Governance Cases

2 Data Collection

  • 2.1 Summary of Diverse Intelligent Driving Data Collection Modes
    • 2.1.1 Case 1: Production Vehicle
    • 2.1.2 Case 2: Collection Vehicle
    • 2.1.3 Case 3: Drone
    • 2.1.4 Case 4: Roadside Data
    • 2.1.5 Case 5: Simulation Synthesis
  • 2.2 Typical Data Collection/Data Compression Solutions
    • 2.2.1 Case 1: TZTEK Technology
    • 2.2.2 Case 2: Kunyi Electronics
    • 2.2.3 Case 3: EXCEEDDATA

3 Data Annotation

  • Summary: Comparison between Intelligent Data Annotation Platforms (1)
  • Summary: Comparison between Intelligent Data Annotation Platforms (2)
  • 3.1 Haitian Ruisheng
    • 3.1.1 DOTS-AD Data Platform
    • 3.1.2 DOTS-LLM Service Platform
  • 3.2 MindFlow
    • 3.2.1 Autonomous driving AI data annotation solution
    • 3.2.2 SEED Data Service Platform
    • 3.2.3 Data Security Solution
  • 3.3 DataBaker Technology
    • 3.3.1 Autonomous Driving 2D Image Annotation Platform
    • 3.3.2 Autonomous Driving 3D Point Cloud Annotation Platform
    • 3.3.3 Autonomous Driving 4D-BEV Annotation
    • 3.3.4 AI Data Platform
  • 3.4 Molar Intelligence
    • 3.4.1 4D Annotation Tool V2.0
  • 3.5 Magic Data
    • 3.5.1 Annotator Intelligent Annotation Tool
  • 3.6 Jinglianwen Technology
    • 3.6.1 Data Annotation Service
  • 3.7 Appen
    • 3.7.1 MatrixGo(R) High-precision Data Annotation Platform
    • 3.7.2 Foundation Model Intelligent Development Platform
  • 3.8 Scale AI
    • 3.8.1 Annotation and Fine-tuning Services

4 Data Processing

  • 4.1 Autonomous Driving Data Closed-Loop Processing Process
    • 4.1.1 Case 1 of Autonomous Driving Data Closed-Loop Processing Process
    • 4.1.2 Case 2 of Autonomous Driving Data Closed-Loop Processing Process
  • 4.2 Classification and Grading of Autonomous Driving Data
    • 4.2.1 Classification of Autonomous Driving Data
    • 4.2.2 Grading of Autonomous Driving Data
    • 4.2.3 Case: Classification and Grading of Data from Some OEM
  • 4.3 Data Compliance
    • 4.3.1 Overview of Data Compliance
    • 4.3.2 List of Models That Meet Four Compliance Requirements for Automotive Data Security
    • 4.3.3 Data Compliance Solution Case 1
    • 4.3.4 Data Compliance Solution Case 2
  • 4.4 Data Transmission
    • 4.4.1 Case: EMQ
      • 4.4.1.1 EMQ Product Series
      • 4.4.1.2 EMQ Vehicle-Cloud Integrated Data Closed-Loop Platform
      • 4.4.1.3 EMQ Vehicle-Cloud Cooperative Data Closed-Loop Application Case: Some OEM & Some Tier1
      • 4.4.1.4 EMQ Vehicle-Cloud Flexible Data Collection Solution
  • 4.5 Intelligent Computing Center
    • 4.5.1 Summary of Autonomous Driving Cloud Supercomputing Centers in China
    • 4.5.2 Intelligent Computing Case 1
    • 4.5.3 Intelligent Computing Case 2
  • 4.6 Data Closed-Loop Cloud Platform
    • 4.6.1 Overview of Cloud Service-Enabled Data Closed-Loop
    • 4.6.2 Case 1: Cloud Data Closed-Loop Tool SimCycle
    • 4.6.3 Case 2: Huawei Cloud-Enabled Data Closed-Loop
    • 4.6.4 Case 3: Jingwei Hirain's Intelligent Driving Data Closed-Loop Cloud Platform OrienLink
    • 4.6.5 Case 4: 51SimOne Cloud-Native Simulation Platform

5 Data Closed-Loop Technology Suppliers

  • Summary: Comparison between Data Closed-Loop Technology Suppliers (1)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (2)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (3)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (4)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (5)
  • 5.1 JueFX Technology
    • 5.1.1 Data Closed-Loop Solution
    • 5.1.2 Data Closed-Loop Solution (Urban NOA)
    • 5.1.3 Data Closed-Loop Solution (Highway NOA)
    • 5.1.4 BEV+Transformer Algorithm Mass Production Architecture Based on Data Closed-Loop
    • 5.1.5 Multimodal Automatic Annotation and Tool Chain
    • 5.1.6 Automatic Annotation Based on 4D Detection
  • 5.2 QCraft
    • 5.2.1 Data Closed-Loop Capabilities
    • 5.2.2 Joint Spatio-Temporal Planning Technology
    • 5.2.3 Driven-by-QCraft New Mid-to-high-level Intelligent Driving Solution Based on Journey(R) 6
    • 5.2.4 Latest Dynamics
  • 5.3 Zhuoyu
    • 5.3.1 Technology Route
    • 5.3.2 4D Vision-only Automatic Annotation Technology
    • 5.3.3 Intelligent Driving Chip Compute Optimization (1) - Model Optimization
    • 5.3.4 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Heterogeneous Computing)
    • 5.3.5 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Model Reasoning Optimization)
    • 5.3.6 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Operator Optimization)
    • 5.3.7 Intelligent Driving Chip Compute Optimization (3) - System Optimization
  • 5.4 Haomo.ai
    • 5.4.1 Intelligent Driving Data Progress Table
    • 5.4.2 HPilot Series
    • 5.4.3 DriveGPT
  • 5.5 SenseAuto
    • 5.5.1 New Embedded Model Piccolo2
    • 5.5.2 UniAD True End-to-end Perception and Decision Integrated Foundation Model
    • 5.5.3 DriveAGI & SenseNova 5.0
    • 5.5.4 ADNN Chip Heterogeneous Computing Platform
    • 5.5.5 Deployment of Native Large Multimodal Model on Vehicles
    • 5.5.6 Latest Dynamics
  • 5.6 Momenta
    • 5.6.1 Data Closed Loop
    • 5.6.2 Mapless Intelligent Driving Algorithm and High-level Intelligent Driving Solution
    • 5.6.3 Latest Dynamics
  • 5.7 Freetech
    • 5.7.1 Data Closed-Loop Platform Product - FUGA
  • 5.8 Nullmax
    • 5.8.1 One-stop Data-in-the-loop Platform
    • 5.8.2 Multimodal End-to-end + Secure Brain-inspired Intelligence
    • 5.8.3 Full Automated Data Process
    • 5.8.4 Growable Algorithm Platform
  • 5.9 DeepRoute.ai
    • 5.9.1 End-to-end
    • 5.9.2 End-to-end High-level Intelligent Driving Platform DeepRoute IO
    • 5.9.3 Deeproute-Driver
    • 5.9.4 D-PRO
    • 5.9.5 D-AIR
  • 5.10 Bosch
    • 5.10.1 Data Closed Loop
    • 5.10.2 High-level Intelligent Driving
  • 5.11 EXCEEDDATA
    • 5.11.1 Vehicle-Cloud Data Base
    • 5.11.2 Vehicle-Cloud Data Base - Flexible Data Collection
    • 5.11.3 Vehicle-Cloud Data Base - Flexible Data Warehouse
    • 5.11.4 Vehicle-Cloud Data Base - Application in Scenarios
    • 5.11.5 Vehicle-Cloud Integrated Tool Chain
      • 5.11.5.1 Vehicle-Cloud Integrated Tool Chain (1)
      • 5.11.5.2 Vehicle-Cloud Integrated Tool Chain (2)
      • 5.11.5.3 Vehicle-Cloud Integrated Tool Chain (3)
      • 5.11.5.4 Vehicle-Cloud Integrated Tool Chain (4)
      • 5.11.5.5 Vehicle-Cloud Integrated Tool Chain (5)
      • 5.11.5.6 Vehicle-Cloud Integrated Tool Chain (6)
      • 5.11.5.7 Vehicle-Cloud Integrated Tool Chain (7)
    • 5.11.6 Application Case of Vehicle-Cloud Integrated Tool Chain
  • 5.12 Yoocar
    • 5.12.1 Business Layout
    • 5.12.2 Connection Solution
    • 5.12.3 Autonomous Driving Data Closed-Loop Tool Chain Platform
  • 5.13 Mxnavi
    • 5.13.1 Profile
    • 5.13.2 Development History
    • 5.13.3 Crowd-sourced Map Solution
    • 5.13.4 Crowd-sourced Map System Architecture
    • 5.13.5 Crowd-sourced Map System: Mapping Process
    • 5.13.6 Crowd-sourced Map System: Map Elements
    • 5.13.7 Crowd-sourced Map System: Intelligent Driving Function Scenarios
    • 5.13.8 Crowd-sourced Automated Production System
    • 5.13.9 Crowd-sourced Map System: Map Engine Architecture
    • 5.13.10 Crowd-sourced Map System: Multi-source Fusion Location Solution Based on Visual Perception
    • 5.13.11 Crowd-sourced Map System: Data Compliance Architecture
    • 5.13.12 Partners
  • 5.14 NavInfo
    • 5.14.1 Data Compliance Closed Loop
    • 5.14.2 One Map Data Platform
    • 5.14.3 Lightweight Map Product - HD Lite
    • 5.14.4 Lightweight Version of NOP System - NOP Lite
    • 5.14.5 NI in Car Intelligent Integrated Solution
    • 5.14.6 AutoChips' Chip Series
    • 5.14.7 Pachira's DeepThinking Foundation Model
    • 5.14.8 Sixents Technology's Orion
    • 5.14.9 "Vehicle-Road-Cloud Integration" Solution
    • 5.14.10 Latest Dynamics

6 Data Closed Loop of Typical OEMs

  • Summary: Data Closed Loop Capabilities of OEMs (1)
  • Summary: Data Closed Loop Capabilities of OEMs (2)
  • 6.1 BYD
    • 6.1.1 "Vehicle Intelligence" Strategy
    • 6.1.2 Data Accumulation Capabilities
    • 6.1.3 Data Closed Loop - Algorithm Capabilities
    • 6.1.4 Data Closed Loop - Computing Capabilities
    • 6.1.5 "Eyes of God" High-level Intelligent Driving System
  • 6.2 Chery
    • 6.2.1 ZDrive.ai - Profile
    • 6.2.2 ZDrive.ai - Data Closed-Loop Capabilities
    • 6.2.3 ZDrive.ai - Zhuojie Joint Innovation Center
    • 6.2.4 ZDrive.ai - Latest Dynamics
  • 6.3 Great Wall Motor
    • 6.3.1 Intelligent Driving System
    • 6.3.2 SEE End-to-End Intelligent Driving Foundation Model
    • 6.3.3 Supercomputing Center
  • 6.4 Geely
    • 6.4.1 Zeekr Haohan Intelligent Driving 2.0 All-Scenario End-to-End
    • 6.4.2 SuperVision Solution of Zeekr NZP
    • 6.4.3 Xingrui Intelligent Computing Center
    • 6.4.4 Intelligent Driving Cloud Data Factory
    • 6.4.5 Intelligent Driving Closed-Loop System
    • 6.4.6 ROBO Galaxy Tool Chain Process Solution
    • 6.4.7 Data Production Modes
    • 6.4.8 Self-developed Algorithm Underlying Software Abstraction
    • 6.4.9 Intelligent Driving Self-development SOA Design
    • 6.4.10 Fully Self-developed Cockpit Operating System
    • 6.4.11 Global Platform Operation System
  • 6.5 Li Auto
    • 6.5.1 Large Multimodal Cognitive Model
    • 6.5.2 Intelligent Driving End-to-end Solution
    • 6.5.3 Algorithm Architecture of Intelligent Driving 3.0
    • 6.5.4 Mapless NOA
    • 6.5.5 Intelligent Laboratory
    • 6.5.6 Progress in Self-developed Chips
  • 6.6 Xpeng
    • 6.6.1 Adjustment of Organizational Structure of Autonomous Driving Department
    • 6.6.2 End-to-end System
    • 6.6.3 Evolution of XNGP
    • 6.6.4 XNGP's Closed-Loop Data Iteration System
    • 6.6.5 Self-developed Chips
    • 6.6.6 Fuyao Intelligent Computing Center
  • 6.7 NIO
    • 6.7.1 Intelligent Driving World Model
    • 6.7.2 New Intelligent Driving Architecture
    • 6.7.3 Swarm Intelligence
    • 6.7.4 Self-developed Chips

7 Data Closed Loop Development Trends

  • 7.1 Trend 1
  • 7.2 Trend 2
  • 7.3 Trend 3
  • 7.4 Trend 4
  • 7.5 Trend 5
  • 7.6 Trend 6
  • 7.7 Trend 7
  • 7.8 Trend 8
  • 7.9 Trend 9