Market Research Report
Product Code: 1482383

China Intelligent Driving Fusion Algorithm Research Report, 2024

Publication Date: | Publisher: ResearchInChina | English: 380 Pages | Delivery: within 1-2 business days

Product Code: ZXF008

Intelligent Driving Fusion Algorithm Research: sparse algorithms, temporal fusion, and enhanced planning and control are becoming the trend.

China Intelligent Driving Fusion Algorithm Research Report, 2024, released by ResearchInChina, analyzes the status quo and trends of intelligent driving fusion algorithms (covering perception, positioning, prediction, planning, decision, etc.), sorts out the algorithm solutions and cases of chip vendors, OEMs, Tier 1 & Tier 2 suppliers, and L4 algorithm providers, and summarizes the development trends of intelligent driving algorithms.

In the eight months from Musk's live test drive of FSD V12 Beta in August 2023 to the 30-day free trial of FSD V12 Supervised in March 2024, advanced intelligent driving features such as urban NOA became an arena for major OEMs, and application cases of end-to-end algorithms, BEV Transformer algorithms, and AI foundation model algorithms kept multiplying.

1. Sparse algorithms improve efficiency and reduce intelligent driving cost.

At present, most BEV algorithms are dense, consuming considerable computing power and storage. Running smoothly at 30 frames per second or more requires expensive computing resources such as an NVIDIA A100, and even then only 5 to 6 2MP cameras can be supported; 8MP cameras demand far more expensive resources such as multiple H100 GPUs.

The real world has sparse features. Sparsification helps sensors reduce noise and improves robustness. Moreover, grids inevitably become sparse as distance grows, so a dense network can only be maintained within about 50 meters. By reducing queries and feature interactions, sparse perception algorithms speed up computation and lower storage requirements, greatly improving the computing efficiency and system performance of the perception model, shortening system latency, extending the range over which perception remains accurate, and easing the impact of vehicle speed.
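To make the mechanism concrete, below is a minimal sketch of query-based sparse perception. It is an illustrative toy, not any vendor's implementation; the query count, feature sizes, and head outputs are assumptions. A small, fixed set of object queries cross-attends to camera features, so compute scales with the number of queries rather than with a dense BEV grid (for reference, 900 queries versus the 40,000 cells of a 200x200 grid).

```python
# Minimal sketch of query-based sparse perception (illustrative assumptions only).
import torch
import torch.nn as nn

class SparseDetectionHead(nn.Module):
    def __init__(self, num_queries=900, dim=256, num_heads=8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)          # learnable object queries
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.box_head = nn.Linear(dim, 10)                     # box params, e.g. x, y, z, size, yaw, velocity
        self.cls_head = nn.Linear(dim, 10)                     # class logits (10 classes assumed)

    def forward(self, cam_feats):                              # (B, N_tokens, dim) flattened camera features
        q = self.queries.weight.unsqueeze(0).expand(cam_feats.size(0), -1, -1)
        fused, _ = self.cross_attn(q, cam_feats, cam_feats)    # queries gather evidence from images
        return self.box_head(fused), self.cls_head(fused)

feats = torch.randn(1, 6 * 20 * 50, 256)                       # 6 cameras, a 20x50 feature map each
boxes, logits = SparseDetectionHead()(feats)
print(boxes.shape, logits.shape)                               # (1, 900, 10) and (1, 900, 10)
```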

Academia has therefore been shifting from dense grid-based algorithms to sparse target-level algorithms since 2021, and with long-term effort the latter now perform almost as well as the former. Industry keeps iterating sparse algorithms as well. Recently, Horizon Robotics open-sourced Sparse4D, its vision-only algorithm that ranks first on both the nuScenes vision-only 3D detection and 3D tracking leaderboards.

Sparse4D is a series of algorithms for sparse 3D target detection over long time sequences, belonging to the field of multi-view temporal fusion perception. In line with the industry trend toward sparse perception, Sparse4D builds a purely sparse fusion perception framework that makes perception algorithms more efficient and accurate while simplifying the perception system. Compared with dense BEV algorithms, Sparse4D reduces computational complexity, breaks the limit that computing power places on perception range, and outperforms them in both perception effect and inference speed.

Another significant advantage of sparse algorithms is that they cut the cost of intelligent driving solutions by reducing dependence on sensors and consuming less computing power. For example, Megvii Technology mentioned that by optimizing its BEV algorithm, reducing computing power requirements, removing HD maps, RTK and LiDAR, unifying the algorithm framework, and automating annotation, it lowered the cost of its intelligent driving solutions based on the PETR series of sparse algorithms by 20%-30% compared with mainstream solutions on the market.

2. 4D algorithms offer higher accuracy and make intelligent driving more reliable.

As seen from OEMs' sensor configurations, ever more sensors have been installed over the past three years as intelligent driving functions and application scenarios have increased. Most urban NOA solutions are equipped with 10-12 cameras, 3-5 radars, 12 ultrasonic sensors, and 1-3 LiDARs.

With more sensors, ever more perception data is generated, and improving the utilization of that data is now on the agenda of OEMs and algorithm providers alike. Although algorithm details differ slightly from company to company, the general idea behind today's mainstream BEV Transformer solutions is basically the same: convert from 2D to 3D, and then to 4D, as sketched below.
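The 2D-to-3D step is often done by lifting image features with a predicted depth distribution, in the style of Lift-Splat. The following toy sketch illustrates that idea under stated assumptions; it is not the pipeline of any specific vendor covered in this report.

```python
# Toy 2D-to-3D lifting in the Lift-Splat style (illustrative assumptions only).
import torch
import torch.nn as nn

class Lift(nn.Module):
    def __init__(self, in_ch=256, depth_bins=32, out_ch=64):
        super().__init__()
        self.depth = nn.Conv2d(in_ch, depth_bins, kernel_size=1)  # per-pixel depth distribution
        self.feat = nn.Conv2d(in_ch, out_ch, kernel_size=1)       # per-pixel context feature

    def forward(self, x):                           # (B, C, H, W) image features
        d = self.depth(x).softmax(dim=1)            # (B, D, H, W) soft depth bins
        f = self.feat(x)                            # (B, C', H, W)
        # Outer product places a C'-dim feature at every (depth, pixel) location;
        # "splatting" these frustum features into BEV cells then yields the 3D grid.
        return d.unsqueeze(1) * f.unsqueeze(2)      # (B, C', D, H, W) frustum features

frustum = Lift()(torch.randn(1, 256, 40, 100))
print(frustum.shape)                                # torch.Size([1, 64, 32, 40, 100])
```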

Temporal fusion, the 4D step, can greatly improve algorithm continuity. A memory of obstacles helps handle occlusion and allows speed information to be perceived better, while a memory of road signs improves driving safety and the accuracy of vehicle behavior prediction. Fusing information from historical frames improves the perception accuracy of the current object, and fusing information from future frames verifies it, enhancing the algorithm's reliability and accuracy.
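A minimal sketch of such temporal fusion follows, assuming a simple rolling memory of past frame tokens that the current frame attends to; the history length and dimensions are placeholders, and production systems typically also compensate for ego motion.

```python
# Minimal temporal-fusion sketch with a rolling frame memory (assumptions only).
import collections
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=8, history=4):
        super().__init__()
        self.memory = collections.deque(maxlen=history)   # tokens of the last K frames
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):                            # (B, N, dim) current-frame tokens
        if self.memory:
            past = torch.cat(list(self.memory), dim=1)    # (B, K*N, dim) historical tokens
            fused, _ = self.attn(tokens, past, past)      # current frame queries the past
            tokens = tokens + fused                       # residual temporal fusion
        self.memory.append(tokens.detach())               # remember this frame for later
        return tokens

fuse = TemporalFusion()
for _ in range(3):                                        # simulate a 3-frame sequence
    out = fuse(torch.randn(1, 900, 256))
print(out.shape)                                          # torch.Size([1, 900, 256])
```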

Tesla's Occupancy Network algorithm is a typical 4D algorithm.

Tesla adds height information to the vector space of "2D BEV + temporal information" output by its original Transformer algorithm, building a 4D representation of "3D BEV + temporal information." On FSD hardware the network runs every 10 ms, that is, at 100 FPS, which greatly increases detection speed.
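The sketch below illustrates the core idea of an occupancy-style head under stated assumptions (it is not Tesla's network): 2D BEV features are expanded with a height axis so that every (x, y, z) voxel gets an occupancy probability, and stacking the outputs over time gives the 4D representation.

```python
# Toy occupancy head: lift 2D BEV features into a 3D voxel grid (assumptions only).
import torch
import torch.nn as nn

class OccupancyHead(nn.Module):
    def __init__(self, bev_dim=256, num_z=16):
        super().__init__()
        self.to_voxels = nn.Conv2d(bev_dim, num_z, kernel_size=1)  # one logit per height slice

    def forward(self, bev_feats):              # (B, C, H, W) BEV feature map
        logits = self.to_voxels(bev_feats)     # (B, Z, H, W) voxel logits
        return torch.sigmoid(logits)           # occupancy probability per (x, y, z) cell

occ = OccupancyHead()(torch.randn(1, 256, 200, 200))
print(occ.shape)                               # torch.Size([1, 16, 200, 200])
```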

3. End-to-end algorithms integrating perception, planning and control enable more anthropomorphic intelligent driving.

Mainstream intelligent driving algorithms have adopted the "BEV + Transformer" architecture, and many innovative perception algorithms have emerged, but rule-based algorithms still prevail in planning and control. At some OEMs the perception and planning & control systems face technical and practical challenges and are at times in a "split" state: in complex scenarios the perception module may fail to accurately recognize or understand the environment, and the decision module may make wrong driving decisions because it mishandles the perception results or runs into algorithm limitations. To some extent this restricts the development of advanced intelligent driving.

UniAD, an end-to-end intelligent driving algorithm jointly released by SenseTime, OpenDriveLab and Horizon Robotics, won the Best Paper Award at CVPR 2023. UniAD integrates three main tasks (perception, prediction and planning) and six sub-tasks (object detection, object tracking, scene mapping, trajectory prediction, grid prediction and path planning) into a unified end-to-end Transformer-based network for the first time, attaining a general full-stack model for driving tasks. On the nuScenes real-scene dataset, UniAD performs best in the field on all tasks, with prediction and planning results far better than those of the previous best solution.

A basic end-to-end algorithm takes sensor inputs directly and outputs predictive control, but it is hard to optimize: network modules lack effective feature communication, tasks lack effective interaction, and results still have to be output in phases. The decision-oriented, integrated perception-and-decision design proposed by the UniAD algorithm instead passes token features for deep fusion along the perception-prediction-decision pipeline, so that the metrics of all decision-oriented tasks improve consistently.
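The following toy pipeline sketches that token handoff under stated assumptions (the real UniAD modules and query designs are far richer): each stage's output tokens feed the next stage, so the planning loss can backpropagate through prediction into perception.

```python
# Toy token handoff across perception -> prediction -> planning (assumptions only).
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries, context):
        out, _ = self.attn(queries, context, context)  # queries read the previous stage's tokens
        return out

B, dim = 1, 256
perception, prediction, planning = Stage(dim), Stage(dim), Stage(dim)
scene = torch.randn(B, 1500, dim)                      # fused sensor tokens
track_q = perception(torch.randn(B, 900, dim), scene)  # agent/track tokens
motion_q = prediction(track_q, track_q)                # predicted-motion tokens
plan_q = planning(torch.randn(B, 1, dim), motion_q)    # single ego-plan token
waypoints = nn.Linear(dim, 6 * 2)(plan_q)              # 6 future (x, y) waypoints
print(waypoints.shape)                                 # torch.Size([1, 1, 12])
```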

In planning and control, Tesla adopts interactive search plus an evaluation model, yielding a comfortable and effective algorithm that combines conventional search with artificial intelligence:

  • First, candidate targets are obtained from lane lines, occupancy networks and obstacles, and decision trees and candidate target sequences are generated.
  • Trajectories reaching those targets are constructed synchronously using conventional search and neural networks.
  • Interactions between the vehicle and other participants in the scene are predicted to form new trajectories; after multiple rounds of evaluation, the final trajectory is selected. During trajectory generation Tesla applies conventional search algorithms and neural networks, then scores each generated trajectory on collision checks, comfort analysis, the likelihood of driver takeover, and similarity to human driving, to decide the final strategy (see the sketch after this list).
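Below is a minimal sketch of the scoring-and-selection step just described; the cost terms mirror the criteria listed above, but the weights, fields, and selection rule are placeholders, not Tesla's actual model.

```python
# Toy candidate-trajectory scoring and selection (weights are assumptions).
from dataclasses import dataclass

@dataclass
class Candidate:
    collides: bool           # result of the collision check
    jerk: float              # comfort proxy: lower means smoother
    takeover_prob: float     # estimated likelihood the driver takes over
    human_similarity: float  # 0..1 similarity to human driving

def cost(c: Candidate) -> float:
    # Lower is better; the weights below are purely illustrative.
    return 1.0 * c.jerk + 5.0 * c.takeover_prob - 2.0 * c.human_similarity

def select(candidates: list[Candidate]) -> Candidate:
    safe = [c for c in candidates if not c.collides]  # hard-filter colliding trajectories
    return min(safe, key=cost)

best = select([
    Candidate(collides=False, jerk=0.4, takeover_prob=0.10, human_similarity=0.8),
    Candidate(collides=False, jerk=0.2, takeover_prob=0.30, human_similarity=0.6),
    Candidate(collides=True,  jerk=0.1, takeover_prob=0.05, human_similarity=0.9),
])
print(best)   # the first candidate wins on the weighted cost
```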

XBrain, the ultimate architecture of Xpeng's all-scenario intelligent driving, consists of XNet 2.0, a deep vision neural network, and XPlanner, a neural-network-based planning and control module with the following features:

  • Rule algorithms
  • Long time sequences (minute-level)
  • Multi-object handling (multi-agent decision, gaming capability)
  • Strong reasoning

Previous advanced algorithm and ADAS functional architectures were separate, consisting of many small logical planning and control algorithms for individual sub-scenarios, whereas XPlanner has a unified planning and control architecture. XPlanner is backed by a foundation model and simulation training on a large number of extreme driving scenarios, ensuring it can cope with all kinds of complex situations.

Table of Contents

1 Overview of Intelligent Driving Fusion Algorithms

  • 1.1 Intelligent Driving Algorithms: Perception, Decision, Actuation (1)
  • 1.1 Intelligent Driving Algorithms: Perception, Decision, Actuation (2)
  • 1.1 Intelligent Driving Algorithms: Perception, Decision, Actuation (3)
  • 1.1 Intelligent Driving Algorithms: Perception, Decision, Actuation (4)
  • 1.1 Intelligent Driving Algorithms: Perception, Decision, Actuation (5)
  • 1.2 Intelligent Driving Algorithms: Iteration History
  • 1.3 Intelligent Driving Perception Algorithms - Visual Perception
    • 1.3.1 Visual Perception Algorithms (1)
    • 1.3.2 Visual Perception Algorithms (2)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (1)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (2)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (3)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (4)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (5)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (6)
    • 1.3.3 Intelligent Driving Perception Algorithms - LiDAR Perception (7)
    • 1.3.4 Intelligent Driving Perception Algorithms - Radar Perception
    • 1.3.5 Intelligent Driving Decision Algorithms
    • 1.3.6 Intelligent Driving Control Algorithms
  • 1.4 Intelligent Driving Fusion Algorithms (1)
  • 1.4 Intelligent Driving Fusion Algorithms (2)
  • 1.4 Intelligent Driving Fusion Algorithms (3)
  • 1.4 Intelligent Driving Fusion Algorithms (4)
    • 1.4.1 Temporal Fusion Algorithms
    • 1.4.2 DNN Algorithms
    • 1.4.3 CNN Algorithms
    • 1.4.4 YOLO V3 Algorithms
    • 1.4.5 RNN Algorithms
    • 1.4.6 3D Bounding Box Algorithms
    • 1.4.7 6D-Vision Algorithms
    • 1.4.8 VFM Algorithms
    • 1.4.9 Pseudo-LiDAR
    • 1.4.10 Algorithm Solutions Integrating Traditional Algorithms and Neural Networks
    • 1.4.11 DETR3D Algorithms
    • 1.4.12 Far3D Algorithms
    • 1.4.13 Sparse BEV Algorithms
    • 1.4.14 PETR Algorithms
    • 1.4.15 Sparse 4D Algorithms (1)
    • 1.4.15 Sparse 4D Algorithms (2)
    • 1.4.15 Sparse 4D Algorithms (3)
    • 1.4.15 Sparse 4D Algorithms (4)
  • 1.5 Application Cases of OEM Fusion Algorithms
    • 1.5.1 Application Cases of OEM Fusion Algorithms (1)
    • 1.5.2 Application Cases of OEM Fusion Algorithms (2)
    • 1.5.3 Application Cases of OEM Fusion Algorithms (3)
  • 1.6 Comparison among OEM Fusion Algorithm Models
  • 1.7 Comparison among Tier 1 Fusion Algorithm Models
  • 1.8 Intelligent Driving Algorithm Supply Models
  • 1.9 Development Trends of Intelligent Driving Fusion Algorithms
    • 1.9.1 Development Trends of Intelligent Driving Fusion Algorithms (1)
    • 1.9.2 Development Trends of Intelligent Driving Fusion Algorithms (2)
    • 1.9.3 Development Trends of Intelligent Driving Fusion Algorithms (3)
    • 1.9.4 Development Trends of Intelligent Driving Fusion Algorithms (4)
    • 1.9.5 Development Trends of Intelligent Driving Fusion Algorithms (5)
    • 1.9.6 Development Trends of Intelligent Driving Fusion Algorithms (6)
    • 1.9.7 Development Trends of Intelligent Driving Fusion Algorithms (7)
    • 1.9.8 Development Trends of Intelligent Driving Fusion Algorithms (8)
    • 1.9.9 Development Trends of Intelligent Driving Fusion Algorithms (9)

2 End-to-end Algorithms

  • 2.1 End-to-end Intelligent Driving Becomes a Long-Term Consensus
    • 2.1.1 How to Build an End-to-end Neural Network Foundation Model of Intelligent Driving?
    • 2.1.2 End-to-end Algorithms (1)
    • 2.1.3 End-to-end Algorithms (2)
    • 2.1.4 End-to-end Algorithms (3)
    • 2.1.5 End-to-end Algorithms (4)
  • 2.2 Occupancy Networks
    • 2.2.1 Occupancy Networks (1)
    • 2.2.2 Occupancy Networks (2)
    • 2.2.3 Occupancy Networks (3)
    • 2.2.4 Occupancy Networks (4)
    • 2.2.5 Occupancy Networks (5)
    • 2.2.6 Occupancy Networks (6)
  • 2.3 Application Cases of End-to-end Algorithms
    • 2.3.1 Application Cases of End-to-end Algorithms (1)
    • 2.3.2 Application Cases of End-to-end Algorithms (2)
    • 2.3.3 Application Cases of End-to-end Algorithms (3)
    • 2.3.4 Application Cases of End-to-end Algorithms (4)
    • 2.3.5 Application Cases of End-to-end Algorithms (5)
    • 2.3.6 Application Cases of End-to-end Algorithms (6)
    • 2.3.7 Application Cases of End-to-end Algorithms (7)
    • 2.3.8 Application Cases of End-to-end Algorithms (8)

3 BEV Transformer Foundation Model Algorithms

  • 3.1 From Small Models to Foundation Models
    • 3.1.1 BEV Perception Systems
    • 3.1.2 Three Common Transformers
    • 3.1.3 BEVDet
    • 3.1.3 BEVStereo
    • 3.1.3 SOLOFusion
    • 3.1.3 VideoBEV
    • 3.1.4 Inverse Perspective Mapping
    • 3.1.4 BEVFormer
  • 3.2 BEV+Transformer Algorithms
    • 3.2.1 BEV + Transformer Foundation Models (1)
    • 3.2.2 BEV + Transformer Foundation Models (2)
    • 3.2.3 BEV + Transformer Foundation Models (3)
  • 3.3 Comparison among OEM BEV+Transformer Algorithms
    • 3.3.1 Progress of OEM BEV+Transformer Algorithms
    • 3.3.2 Cases of OEM BEV+Transformer Algorithms (1)
    • 3.3.3 Cases of OEM BEV+Transformer Algorithms (2)
    • 3.3.4 Cases of OEM BEV+Transformer Algorithms (3)
  • 3.4 Comparison among BEV+Transformer Algorithms of Tier 1 Suppliers
    • 3.4.1 Cases of Tier 1 BEV+Transformer Algorithms (1)
    • 3.4.2 Cases of Tier 1 BEV+Transformer Algorithms (2)
    • 3.4.3 Cases of Tier 1 BEV+Transformer Algorithms (3)
    • 3.4.4 Cases of Tier 1 BEV+Transformer Algorithms (4)

4 Data Is the Cornerstone of Fusion Algorithms

  • 4.1 Data Is the Cornerstone of Fusion Algorithms
    • 4.1.1 Datasets: How to Collect
    • 4.1.2 Datasets: Evolution from Single-vehicle Intelligence to Vehicle-city Integration
    • 4.1.3 Datasets: From Perception to Prediction and Planning
    • 4.1.4 Datasets: Multimodal, End-to-end
    • 4.1.5 Next-generation Datasets
  • 4.2 Intelligent Driving Dataset Comparison
    • 4.2.1 Intelligent Driving Dataset Comparison (1)
    • 4.2.2 Intelligent Driving Dataset Comparison (2)
    • 4.2.3 Intelligent Driving Dataset Comparison (3)
    • 4.2.4 Intelligent Driving Dataset Comparison (4)
    • 4.2.5 Intelligent Driving Dataset Comparison (5)
    • 4.2.6 Intelligent Driving Dataset Comparison (6)
  • 4.3 Major Data Training Set Suppliers and Their Products
    • 4.3.1 Major Data Training Set Suppliers and Their Products (1)
    • 4.3.2 Major Data Training Set Suppliers and Their Products (2)
    • 4.3.3 Major Data Training Set Suppliers and Their Products (3)
    • 4.3.4 Major Data Training Set Suppliers and Their Products (4)
    • 4.3.5 Major Data Training Set Suppliers and Their Products (5)
  • 4.4 Application Cases of Datasets in Intelligent Driving
    • 4.4.1 Application Cases of Datasets in Intelligent Driving (1)
    • 4.4.2 Application Cases of Datasets in Intelligent Driving (2)
    • 4.4.3 Application Cases of Datasets in Intelligent Driving (3)
    • 4.4.4 Application Cases of Datasets in Intelligent Driving (4)
    • 4.4.5 Application Cases of Datasets in Intelligent Driving (5)
    • 4.4.6 Application Cases of Datasets in Intelligent Driving (6)
    • 4.4.7 Application Cases of Datasets in Intelligent Driving (7)
    • 4.4.8 Application Cases of Datasets in Intelligent Driving (8)
    • 4.4.9 Application Cases of Datasets in Intelligent Driving (9)

5 Algorithms of Chip Vendors

  • 5.1 Huawei
    • 5.1.1 Intelligent Automotive Solution (IAS) Business Unit (BU)
    • 5.1.2 Cooperation Modes
    • 5.1.3 Intelligent Driving Full Stack Solutions (1)
    • 5.1.4 Intelligent Driving Full Stack Solutions (2)
    • 5.1.5 Intelligent Driving Perception Algorithms: GOD 2.0 & RCR 2.0
    • 5.1.6 Intelligent Driving Perception Algorithms: Occupancy
    • 5.1.7 Intelligent Driving Perception Algorithms: Transfusion
  • 5.2 Horizon Robotics
    • 5.2.1 Profile
    • 5.2.2 Cooperation Modes
    • 5.2.3 Automotive Computing Platforms and Monocular Front View Solution Algorithms
    • 5.2.4 Intelligent Driving Perception Algorithm Design (1)
    • 5.2.4 Intelligent Driving Perception Algorithm Design (2)
    • 5.2.4 Intelligent Driving Perception Algorithm Design (3)
    • 5.2.5 Core Algorithm Libraries (1)
    • 5.2.5 Core Algorithm Libraries (2)
    • 5.2.5 Core Algorithm Libraries (3)
    • 5.2.6 NOA Solutions and Super Driving Solution Algorithms
    • 5.2.7 Open Software Platforms
    • 5.2.8 Official Open Source Sparse4D Algorithms
    • 5.2.9 Algorithm Planning
    • 5.2.10 Recent Dynamics in Cooperation
  • 5.3 Black Sesame Technologies
    • 5.3.1 Profile
    • 5.3.2 Visual Perception Algorithms
    • 5.3.3 4D Radar and Visual Perception Fusion Algorithms
    • 5.3.4 LiDAR DSP
    • 5.3.5 PointPillars Algorithms
    • 5.3.6 Parking Visual Perception Algorithms
    • 5.3.7 Driving Visual Perception Algorithms
    • 5.3.8 Shanhai Toolchain
    • 5.3.9 Partners
    • 5.3.10 Recent Dynamics in Cooperation
  • 5.4 Mobileye
    • 5.4.1 Profile
    • 5.4.2 Full Stack Intelligent Driving Solutions
    • 5.4.3 Object Recognition Technology
    • 5.4.4 Chip Algorithm Development Process
    • 5.4.5 Vision Algorithms
    • 5.4.6 Recent Dynamics in Cooperation
  • 5.5 Qualcomm Arriver
    • 5.5.1 Profile
    • 5.5.2 Visual Perception Algorithms
  • 5.6 NXP
    • 5.6.1 Profile
    • 5.6.2 ADAS Software and Hardware Solutions
    • 5.6.3 Object Detection Algorithms
    • 5.6.4 CNN Algorithms for Object Detection
  • 5.7 NVIDIA
    • 5.7.1 Profile
    • 5.7.2 Cooperation Mode
    • 5.7.3 Intelligent Vehicle Software Stacks
    • 5.7.4 DRIVE Perception Algorithms (1)
    • 5.7.4 DRIVE Perception Algorithms (2)
    • 5.7.4 DRIVE Perception Algorithms (3)
    • 5.7.5 Perception Algorithm End-to-end Models: PilotNet to NVRadarNet
    • 5.7.6 Recent Dynamics in Cooperation
    • 5.7.7 Automotive Partner Technology Exhibition and Ecological Cooperation at CES 2024

6 Algorithms of Tier 1 & Tier 2 Vendors

  • 6.1 Momenta
    • 6.1.1 Profile
    • 6.1.2 Core Algorithms
    • 6.1.3 Algorithm Application
    • 6.1.4 Mapless Intelligent Driving Algorithms
    • 6.1.5 DDLD Lane Line Recognition Algorithm
    • 6.1.6 DDPF Location Fusion Algorithm
    • 6.1.7 DLP Planning and Control Algorithm
    • 6.1.8 Algorithm Development Route
    • 6.1.9 Recent Dynamics in Cooperation
  • 6.2 Nullmax
    • 6.2.1 Profile
    • 6.2.2 Algorithms and Modules
    • 6.2.3 Core Algorithms (1)
    • 6.2.3 Core Algorithms (2)
    • 6.2.3 Core Algorithms (3)
    • 6.2.4 Application Process of Algorithm Products
    • 6.2.5 Recent Dynamics in Cooperation
  • 6.3 ArcSoft
    • 6.3.1 Profile
    • 6.3.2 Intelligent Driving Technology (1)
    • 6.3.3 Intelligent Driving Technology (2)
    • 6.3.4 One-stop Automotive Vision Solution: VisDrive
    • 6.3.5 Recent Dynamics and Development Planning
  • 6.4 JueFX Technology
    • 6.4.1 Profile
    • 6.4.2 Visual Feature Fusion Positioning Solutions
    • 6.4.3 BEV Perception Technology
    • 6.4.4 BEV+Transformer Algorithms (1)
    • 6.4.4 BEV+Transformer Algorithms (2)
    • 6.4.4 BEV+Transformer Algorithms (3)
    • 6.4.5 LiDAR Fusion Positioning Solutions
    • 6.4.6 Architecture of Highway NOA Solutions with Low-weight Maps
    • 6.4.7 Real-time Online Mapping
    • 6.4.8 Automatic Annotation Systems
    • 6.4.9 Multi-sensor Fusion Positioning Algorithms (1)
    • 6.4.9 Multi-sensor Fusion Positioning Algorithms (2)
    • 6.4.9 Multi-sensor Fusion Positioning Algorithms (3)
    • 6.4.10 Different Fusion Algorithm Solutions Based on LiDAR
    • 6.4.11 Perception Foundation Model Algorithms Based on Data Closed Loop
    • 6.4.12 Cooperation Ecology
  • 6.5 StradVision
    • 6.5.1 Profile
    • 6.5.2 Intelligent Driving Algorithms (1)
    • 6.5.2 Intelligent Driving Algorithms (2)
    • 6.5.3 Next-generation "3D Perception Network"
    • 6.5.4 Development Dynamics of Vision Products
  • 6.6 iMotion
    • 6.6.1 Profile
    • 6.6.2 Core Intelligent Driving Algorithms
    • 6.6.3 Mass Production
  • 6.7 EnjoyMove Technology
    • 6.7.1 Profile
    • 6.7.2 Intelligent Driving Software
    • 6.7.3 Recent Dynamics
  • 6.8 Haomo.AI
    • 6.8.1 Profile
    • 6.8.2 Product Matrix
    • 6.8.3 Status Quo of Intelligent Driving
    • 6.8.4 MANA System
    • 6.8.5 Perception Module of MANA System
    • 6.8.5 Cognitive Module of MANA System
    • 6.8.6 Intelligent Computing Center
    • 6.8.7 Perception Algorithm Optimization
    • 6.8.8 Cognitive Algorithm Optimization
  • 6.9 In-driving Tech
    • 6.9.1 Profile
    • 6.9.2 Intelligent Driving Algorithms (1)
    • 6.9.3 Intelligent Driving Algorithms (2)
    • 6.9.4 Algorithm Achievements and Planning
  • 6.10 Valeo
    • 6.10.1 Profile
    • 6.10.2 Typical Algorithm Models (1)
    • 6.10.2 Typical Algorithm Models (2)

7 Algorithms of Emerging Automakers and OEMs

  • 7.1 Tesla
    • 7.1.1 Profile
    • 7.1.2 End-to-end Algorithms
    • 7.1.3 Multi-camera Fusion Algorithms
    • 7.1.4 Environment Perception Algorithms
    • 7.1.5 Computing Power Development Planning
  • 7.2 NIO
    • 7.2.1 Profile
    • 7.2.2 Intelligent Driving System Evolution
    • 7.2.3 Comparison between Pilot System and NAD System
  • 7.3 Li Auto
    • 7.3.1 Profile
    • 7.3.2 Intelligent Driving Route
    • 7.3.3 Algorithm Evolution
    • 7.3.4 Intelligent Driving Algorithm Architecture of AD Max 3.0
    • 7.3.5 Layout in Intelligent Driving
    • 7.3.6 Future Automotive Development Plan
  • 7.4 Xpeng
    • 7.4.1 Profile
    • 7.4.2 Intelligent Driving System and Algorithm Evolution
    • 7.4.3 Intelligent Driving Algorithm Architecture
    • 7.4.4 New Perception Architecture (1)
    • 7.4.4 New Perception Architecture (2)
    • 7.4.4 New Perception Architecture (3)
    • 7.4.5 Recent Cooperation Dynamics and Development Planning
  • 7.5 Leapmotor
    • 7.5.1 Profile
    • 7.5.2 Global Independent R&D
    • 7.5.3 Intelligent Driving Technology Planning
  • 7.6 ZEEKR
    • 7.6.1 Profile
    • 7.6.2 ZEEKR & Mobileye Intelligent Driving Solution
    • 7.6.3 ZEEKR & Waymo Intelligent Driving Solution
  • 7.7 BMW
    • 7.7.1 Profile
    • 7.7.2 Intelligent Driving
    • 7.7.3 Intelligent Driving Implementation and Development Planning
    • 7.7.4 Recent Dynamics in Intelligent Driving
  • 7.8 SAIC
    • 7.8.1 Intelligent Driving Layout
    • 7.8.2 Profile of Z-One
    • 7.8.3 Computing Platform of Z-One
    • 7.8.4 SAIC AI LAB
  • 7.9 GM
    • 7.9.1 Intelligent Driving Layout
    • 7.9.2 Profile and Recent Dynamics of Cruise
    • 7.9.3 Perception Algorithms of Cruise
    • 7.9.4 Decision Algorithms of Cruise
    • 7.9.5 Intelligent Driving Development Toolchain of Cruise
    • 7.9.6 Development Planning of Cruise

8 Robotaxi Algorithms of L4 Intelligent Driving

  • 8.1 Baidu Apollo
    • 8.1.1 Profile
    • 8.1.2 Architecture of Apollo 9.0
    • 8.1.3 Perception Algorithms (1)
    • 8.1.3 Perception Algorithms (2)
    • 8.1.3 Perception Algorithms (3)
    • 8.1.4 CVIS Solutions
    • 8.1.5 The Latest Intelligent Driving Solutions (1)
    • 8.1.5 The Latest Intelligent Driving Solutions (2)
    • 8.1.6 Intelligent Driving Solutions (1)
    • 8.1.6 Intelligent Driving Solutions (2)
  • 8.2 Pony.ai
    • 8.2.1 Profile
    • 8.2.2 Main Businesses and Business Models
    • 8.2.3 Core Technology and the Latest Intelligent Driving System Configuration
    • 8.2.4 Sensor Fusion Solutions
    • 8.2.5 Intelligent Driving Solutions
    • 8.2.6 Recent Dynamics in Cooperation
  • 8.3 WeRide
    • 8.3.1 Profile
    • 8.3.2 Intelligent Driving Platform
    • 8.3.3 WeRide One Algorithm Module
    • 8.3.4 Recent Dynamics in Cooperation
  • 8.4 DeepRoute.ai
    • 8.4.1 Profile
    • 8.4.2 Full Stack Solutions for L4 Autonomous Driving
    • 8.4.3 Self-developed Algorithms
    • 8.4.4 Intelligent Driving Solutions
    • 8.4.5 Recent Dynamics in Cooperation
  • 8.5 QCraft
    • 8.5.1 Profile
    • 8.5.2 Intelligent Driving Solutions
    • 8.5.3 Hyper-converged Perception Solutions
    • 8.5.4 Prediction Algorithms
    • 8.5.5 Planning Algorithms
    • 8.5.6 Classic Algorithm Models
  • 8.6 UISEE
    • 8.6.1 Profile
    • 8.6.2 Intelligent Driving System
    • 8.6.3 Vision Positioning Technology
    • 8.6.4 The Latest Algorithm
    • 8.6.5 Recent Cooperation Dynamics and Partners
  • 8.7 Didi Autonomous Driving
    • 8.7.1 Profile
    • 8.7.2 Intelligent Driving Technology
    • 8.7.3 Application of Intelligent Driving Technology
  • 8.8 Waymo
    • 8.8.1 Profile
    • 8.8.2 Sensor Matrix
    • 8.8.3 Intelligent Driving Algorithms
    • 8.8.4 Behavior Prediction Algorithms
    • 8.8.5 Recent Dynamics