Market Research Report
Product Code: 1613812

China Automotive Multimodal Interaction Development Research Report, 2024

Publication Date: | Publisher: ResearchInChina | English, 270 Pages | Delivery Time: within 1-2 business days


1. Voice recognition dominates cockpit interaction, and integrates with multiple modes to create a new interaction experience.

Among current cockpit interaction applications, voice interaction is the most widely adopted and most frequently used mode in intelligent cockpits. According to the latest statistics from ResearchInChina, from January to August 2024 automotive voice systems were installed in about 11 million vehicles, a year-on-year increase of 10.9%, with an installation rate of 83%. Li Tao, General Manager of Baidu Apollo's intelligent cockpit business, noted that "the frequency of people using cockpits has risen from three to five times a day at the beginning to double digits today, and has even approached three digits on some models with leading voice interaction technology."

The frequent use of voice recognition not only greatly improves the user's interactive experience, but also drives the trend of fusing it with other interaction modes such as touch and face recognition. For example, the full-cabin memory function of NIO Banyan 2.4.0 is based on face recognition, and NOMI proactively greets occupants whose information has been recorded (e.g., "Good morning, Doudou"). Zeekr 7X integrates voice recognition with eye tracking, allowing the driver to control the car by looking and speaking, or by tilting the head together with a voice command.

2. BYD launches palm vein recognition, and Sterra in-cabin health monitoring makes its debut.

Compared with mature interaction modes such as voice and face recognition, biometric technologies such as fingerprint, vein, and heart rate recognition are still at an early stage of exploration and development, but they are gradually entering mass production and use. For example, BYD introduced a palm vein recognition function in 2024 for convenient vehicle unlocking. Genesis and Mercedes-Benz fitted fingerprint recognition systems to the 2025 Genesis GV70 and the 2025 Mercedes-Benz EQE BEV respectively, allowing users to complete identification, vehicle start, payment and other operations with only a fingerprint. In addition, Exeed Sterra adopted ArcSoft's visual recognition technology in the new ET model to enable in-cabin intelligent health monitoring, measuring heart rate, blood pressure, blood oxygen saturation, respiratory rate and heart rate variability, and outputting a health report covering these five key physical indicators.

The introduction of biometric technologies not only improves driving convenience, but also significantly enhances vehicle safety, effectively guarding against hazards such as fatigued driving and vehicle theft. In the future, these biometric technologies will be integrated more widely into intelligent connected vehicles, providing drivers with a safer and more personalized mobility experience.

Case 1: The fingerprint recognition system of the 2025 Genesis GV70 allows users to quickly apply personalized settings (seats, positions, etc.) through fingerprint authentication, and also supports starting and driving the vehicle. It further offers personalized linkage functions such as convenient operation, fingerprint payment, and valet mode.

Case 2: BYD's palm vein recognition system uses a camera to read palm vein data, performing recognition at a distance of 8-20 cm, across 360 degrees horizontally and 15 degrees vertically. A dedicated image acquisition module captures vein pattern images, algorithms extract and store the features, and the system then completes identification and verification. In the future, the function may first appear on models of the high-end Yangwang brand.

Case 3: The Exeed Sterra ET is equipped with the DHS intelligent health monitoring function. Based on an advanced visual multimodal algorithm, it analyzes health status in real time from the body surface, measures the five key physical indicators of heart rate, blood pressure, blood oxygen saturation, respiratory rate and heart rate variability, and outputs a health report.

This report studies and analyzes multimodal interaction in China's automotive industry, covering the mainstream interaction modes, the application of interaction modes in vehicle models launched in 2024, the solutions of OEMs and suppliers, and development trends.

Table of Contents

Chapter 1 Overview of Cockpit Multimodal Interaction

  • Definition of Multimodal Interaction
  • Multimodal Interaction Development System
  • Multimodal Interaction Industry Chain
  • Multimodal Interaction Policy Environment

Chapter 2 Cockpit Single-modal Interaction

  • Installation of Cockpit Modal Interaction Systems
  • Haptic Interaction
  • Auditory Interaction
  • Visual Interaction
  • Olfactory Interaction
  • Other Biometric Functions

Chapter 3 Cockpit Multimodal Interaction Solutions of OEMs

  • SAIC
  • BYD
  • Changan Automobile
  • GAC
  • Geely
  • NIO
  • Xpeng Motors
  • Li Auto
  • Leapmotor
  • Xiaomi Auto
  • BMW
  • Mercedes-Benz
  • Volkswagen

Chapter 4 Cockpit Multimodal Interaction Solutions of Suppliers

  • Desay SV
  • Joyson Electronics
  • SenseTime
  • iFLYTEK
  • ThunderSoft
  • AISpeech
  • Huawei
  • Baidu
  • Tencent
  • NavInfo
  • Continental
  • MediaTek

Chapter 5 Application Cases of Multimodal Interaction Solutions in Benchmark Vehicle Models

  • Cases of Traditional Brands
    • Yangwang U9
    • IM L6
    • Geely Galaxy E8
    • Zeekr 7X
    • Jiyue 07
    • Changan UNI-Z
    • Changan Deepal G318
    • Avatr 07
    • Dongfeng e-phi-007
    • ARCFOX aS5
    • Exeed Sterra ET
  • Cases of Emerging Brands
    • Xiaomi SU7
    • Luxeed R7
    • STELATO S9
    • Li Auto MEGA Ultra
    • Xpeng MONA 03
    • ONVO L60
    • Leapmotor C16
  • Cases of Joint Venture Brands
    • Volvo EX30
    • Lotus EMEYA
    • 2024 Buick E5
    • 2025 BMW i4
    • 2025 Mercedes-Benz All-electric EQE
    • 2025 Genesis GV70

Chapter 6 Summary and Development Trends of Multimodal Interaction

  • Fusion Application of Multimodal Interaction in Intelligent Cockpits
  • Trend 1
  • Trend 2 (1): Cockpit Interaction Carriers Expand, and Interaction Range Extends outside the Vehicle
  • Trend 2 (2)
  • Trend 2 (3)
  • Trend 3 (1)
  • Trend 3 (2)
  • Trend 3 (3)
  • Trend 4
Product Code: LYX011

Multimodal interaction research: AI foundation models deeply integrate into the cockpit, helping perceptual intelligence evolve into cognitive intelligence

China Automotive Multimodal Interaction Development Research Report, 2024 released by ResearchInChina combs through the interaction modes of mainstream cockpits, the application of interaction modes in key vehicle models launched in 2024, and the cockpit interaction solutions of OEMs/suppliers, and summarizes the development trends of cockpit multimodal interaction fusion.

1. Voice recognition dominates cockpit interaction, and integrates with multiple modes to create a new interaction experience.

Among current cockpit interaction applications, voice interaction is used most widely and most frequently in intelligent cockpits. According to the latest statistics from ResearchInChina, from January to August 2024, automotive voice systems were installed in about 11 million vehicles, a year-on-year increase of 10.9%, with an installation rate of 83%. Li Tao, General Manager of Baidu Apollo's intelligent cockpit business, pointed out that "the frequency of people using cockpits has increased from 3-5 times a day at the beginning to double digits today, and has even reached nearly three digits on some models with leading voice interaction technology."

The frequent use of the voice recognition function not only greatly optimizes the user's interactive experience, but also promotes the trend of fusing it with other interaction modes such as touch and face recognition. For example, the full-cabin memory function of NIO Banyan 2.4.0 is based on face recognition, and NOMI actively greets occupants who have recorded information (e.g., "Good morning, Doudou"); Zeekr 7X integrates voice recognition with eye tracking, enabling the driver to control the car by looking and speaking, or by tilting his/her head together with a voice command.
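
To make the fusion concrete, here is a minimal Python sketch of how a face-recognition result and a gaze signal might gate a greeting and a voice command; the data structure and function below are hypothetical illustrations, not any OEM's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CabinState:
    occupant_name: Optional[str]   # face recognition result, if an enrolled face is seen
    gaze_on_screen: bool           # eye tracking says the driver is looking at the display
    voice_command: Optional[str]   # last recognized utterance, if any

def fuse_interactions(state: CabinState) -> List[str]:
    """Combine face, gaze and voice signals into cockpit actions (illustrative only)."""
    actions = []
    # Face recognition alone can trigger a proactive greeting, as NOMI does.
    if state.occupant_name:
        actions.append(f"greet: Good morning, {state.occupant_name}")
    # A voice command is executed only when gaze confirms the driver is addressing
    # the system, loosely analogous to the Zeekr 7X "look and speak" behavior.
    if state.voice_command and state.gaze_on_screen:
        actions.append(f"execute: {state.voice_command}")
    return actions

print(fuse_interactions(CabinState("Doudou", True, "open the window")))
```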

2. BYD launched palm vein recognition, and Sterra in-cabin health monitoring debuted

Compared with mature interaction modes such as voice and face recognition, biometric technologies such as fingerprint, vein, and heart rate recognition are still in the early stage of exploration and development, but they are gradually being mass-produced and applied. For example, BYD launched a palm vein recognition function in 2024 that enables convenient vehicle unlocking; Genesis and Mercedes-Benz introduced fingerprint recognition systems in the 2025 Genesis GV70 and 2025 Mercedes-Benz EQE BEV respectively, allowing users to complete a range of operations such as identification, vehicle start and payment with only their fingerprints; in addition, Exeed Sterra adopts visual perception technology provided by ArcSoft in the new ET model, realizing an in-cabin intelligent health monitoring function and outputting health reports for users covering five major physical indicators, i.e., heart rate, blood pressure, blood oxygen saturation, respiratory rate and heart rate variability.

The introduction of biometric technologies not only improves driving convenience, but also significantly enhances the safety protection performance of vehicles, effectively preventing potential safety hazards such as fatigued driving and vehicle theft. In the future, these biometric technologies will be more widely integrated into the development of intelligent and connected vehicles, providing drivers with a safer and more personalized mobility experience.

Case 1: The fingerprint recognition system of the 2025 Genesis GV70 allows users to quickly apply personalized settings (seats, positions, etc.) through fingerprint authentication, and also supports vehicle start/drive. In addition, there are personalized linkage functions such as convenient operation, fingerprint payment, and valet mode.

Case 2: BYD's palm vein recognition system uses a camera to read palm vein data, performing recognition at a distance of 8-20 cm, across 360 degrees horizontally and 15 degrees vertically. A dedicated image acquisition module captures vein pattern images, algorithms extract and store the features, and the system then completes identification and verification. In the future, it may first be installed in models of the high-end Yangwang brand.
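
A minimal sketch of the capture-extract-store-match pipeline described above, under stated assumptions: extract_vein_features and the range checks are placeholders standing in for BYD's proprietary module, and a real system would compare feature similarity rather than exact equality.

```python
import hashlib
from typing import Dict, Optional

# Illustrative constraints taken from the description: roughly 8-20 cm working
# distance, 360 degrees horizontally and about 15 degrees vertically.
MIN_DIST_CM, MAX_DIST_CM = 8.0, 20.0
MAX_VERTICAL_DEG = 15.0

_templates: Dict[str, bytes] = {}  # user id -> stored vein feature template

def extract_vein_features(image: bytes) -> bytes:
    """Placeholder for the algorithmic feature-extraction step."""
    return hashlib.sha256(image).digest()

def enroll(user_id: str, image: bytes) -> None:
    """Capture a palm image once and store its feature template."""
    _templates[user_id] = extract_vein_features(image)

def verify(image: bytes, distance_cm: float, vertical_deg: float) -> Optional[str]:
    """Return the matched user id, or None if out of range or no match."""
    if not (MIN_DIST_CM <= distance_cm <= MAX_DIST_CM) or abs(vertical_deg) > MAX_VERTICAL_DEG:
        return None
    probe = extract_vein_features(image)
    for user_id, template in _templates.items():
        if probe == template:  # real systems score similarity, not exact equality
            return user_id
    return None

enroll("owner", b"palm-scan-at-enrollment")
print(verify(b"palm-scan-at-enrollment", distance_cm=12.0, vertical_deg=5.0))  # -> "owner"
```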

Case 3: The Exeed Sterra ET is equipped with the DHS intelligent health monitoring function. Based on an advanced visual multimodal algorithm, it analyzes health status in real time from the body surface, measures the five major physical indicators of heart rate, blood pressure, blood oxygen saturation, respiratory rate and heart rate variability, and outputs a health report.
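
The report output can be pictured as a simple record bundling the five indicators; the field names and example values below are illustrative placeholders, not ArcSoft's or Exeed's actual data format.

```python
from dataclasses import dataclass, asdict
from typing import Tuple

@dataclass
class HealthReport:
    heart_rate_bpm: float
    blood_pressure_mmhg: Tuple[float, float]  # (systolic, diastolic)
    spo2_percent: float                       # blood oxygen saturation
    respiratory_rate_bpm: float
    hrv_ms: float                             # heart rate variability

    def to_dict(self) -> dict:
        """Flatten the five indicators into a report payload."""
        return asdict(self)

# Example values are made up for illustration.
print(HealthReport(72.0, (118.0, 76.0), 97.5, 16.0, 42.0).to_dict())
```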

3. AI foundation models lead cockpit interaction innovation, and perceptual intelligence evolves into cognitive intelligence

The China Society of Automotive Engineers clearly defines and classifies intelligent cockpits in its jointly released white paper. The classification system is based on the capabilities achieved by intelligent cockpits, comprehensively considers the three dimensions of human-machine interaction capabilities, scenario expansion capabilities, and connected service capabilities, and subdivides intelligent cockpits into five levels from L0 to L4.

With the wide adoption of AI foundation models in intelligent cockpits, HMI capabilities have crossed the boundary of L1 perceptual intelligence and entered a new stage of L2 cognitive intelligence.

Specifically, in the perceptual intelligence stage, the intelligent cockpit mainly relies on in-cabin sensors such as cameras, microphones and touch screens to capture and identify the behavior, voice and gesture information of the driver and passengers, and then converts this information into machine-recognizable data. However, limited by established rules and algorithm frameworks, the cockpit interaction system at this stage still lacks the capability of independent decision-making and self-optimization, and is mainly confined to passive responses to input information.

After entering the cognitive intelligence stage, intelligent cockpits can comprehensively analyze multiple data types such as voice, vision and touch by virtue of the powerful multimodal processing capabilities of foundation model technology. This makes intelligent cockpits highly intelligent and humanized: they can actively think and serve, keenly perceive the actual needs of the driver and passengers, and provide users with personalized HMI services.
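
The difference between the two stages can be sketched as two interaction loops: a fixed rule table that only reacts to an explicit command, versus a foundation-model call that fuses several modalities and may act unprompted. The model interface below is a hypothetical stand-in, not any vendor's API.

```python
from typing import Callable, Dict, List, Optional

# Stage 1: perceptual intelligence - passive, rule-bound response to a single input.
RULES: Dict[str, str] = {"turn on AC": "ac_on", "open window": "window_open"}

def perceptual_cockpit(utterance: str) -> Optional[str]:
    """Nothing happens unless the utterance matches a predefined rule."""
    return RULES.get(utterance)

# Stage 2: cognitive intelligence - a foundation model fuses voice, vision and
# touch data and can propose actions on its own. `foundation_model` is hypothetical.
def cognitive_cockpit(foundation_model: Callable[[dict], List[str]],
                      voice: str, cabin_image: bytes, touch_events: List[str]) -> List[str]:
    prompt = {
        "voice": voice,
        "vision": f"{len(cabin_image)} bytes of cabin imagery",
        "touch": touch_events,
        "instruction": "Infer occupant needs and propose cockpit actions.",
    }
    return foundation_model(prompt)  # e.g. ["lower music volume", "raise AC temperature"]

# Toy stand-in model, for demonstration only.
print(cognitive_cockpit(lambda p: ["lower music volume"], "it's a bit loud", b"\x00" * 16, []))
```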

Case 1: SenseAuto introduced an intelligent cockpit AI foundation model product, A New Member For U, at the 2024 SenseAuto AI DAY. It can be regarded as the "Jarvis" on the vehicle, which can weigh up occupants' words and observe their expressions, actively think, serve, and plan. For example, on the road, it can actively turn up the air conditioner temperature and lower music volume for the sleeping children in the rear seat, and adjust the chassis and driving mode to the comfort mode to create a more comfortable sleeping environment. In addition, it can actively detect the physical condition of occupants, find the nearest hospital for the sick ones, and plan the route.
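As a toy illustration of that kind of proactive behavior, the sketch below maps a detected cabin scene to an action plan; the scene labels and actions are paraphrased from the example above, and the detection itself is assumed to come from the foundation model.

```python
from typing import Dict, List

# Proactive scene -> action-plan mapping, paraphrasing the SenseAuto example:
# a sleeping rear-seat child or an unwell occupant triggers a multi-step plan.
SCENE_PLANS: Dict[str, List[str]] = {
    "child_sleeping_rear": [
        "raise AC temperature",
        "lower music volume",
        "switch chassis and driving mode to comfort",
    ],
    "occupant_unwell": [
        "locate nearest hospital",
        "plan route to hospital",
    ],
}

def proactive_actions(detected_scene: str) -> List[str]:
    """Return the proactive plan for a scene detected by the cockpit model."""
    return SCENE_PLANS.get(detected_scene, [])

print(proactive_actions("child_sleeping_rear"))
```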

Case 2: NOMI Agents, NIO's multi-agent framework, uses AI foundation models to reconstruct NOMI's cognition and complex task processing capabilities, allowing it to learn to use tools, for example, calling search, navigation, and reservation services. Meanwhile, according to the complexity and time span of a task, NOMI is able to perform complex planning and scheduling. For example, among NOMI's six core multi-agent functions, "NOMI DJ" recommends a playlist that suits the context based on users' needs and actively creates an atmosphere, while "NOMI Exploration" combines spatial orientation with map data and world knowledge to answer children's questions such as "What is the tower on the side?".
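
A simple sketch of tool-using dispatch in the spirit of such a multi-agent setup; the tool functions and the keyword routing are hypothetical placeholders, since a real agent would let the foundation model choose tools and plan multi-step tasks.

```python
from typing import Callable, Dict

def search(query: str) -> str:
    return f"search results for '{query}'"

def navigate(destination: str) -> str:
    return f"route planned to {destination}"

def reserve(place: str) -> str:
    return f"reservation placed at {place}"

TOOLS: Dict[str, Callable[[str], str]] = {"search": search, "navigate": navigate, "reserve": reserve}

def simple_agent(request: str) -> str:
    """Route a request to one tool; a real agent plans and chains several."""
    text = request.lower()
    if text.startswith("what is"):          # e.g. "what is the tower on the side?"
        return TOOLS["search"](request)
    if text.startswith("go to"):
        return TOOLS["navigate"](request[len("go to"):].strip())
    if text.startswith("book"):
        return TOOLS["reserve"](request[len("book"):].strip())
    return "needs further planning"

print(simple_agent("What is the tower on the side?"))
```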

Table of Contents

1 Overview of Cockpit Multimodal Interaction

  • 1.1 Definition of Multimodal Interaction
  • 1.2 Multimodal Interaction Development System
  • 1.3 Multimodal Interaction Industry Chain
    • 1.3.1 Multimodal Interaction Industry Chain - Chip Vendors
    • 1.3.2 Multimodal Interaction Industry Chain - Algorithm Providers
    • 1.3.3 Multimodal Interaction Industry Chain - System Integrators
  • 1.4 Multimodal Interaction Policy Environment
    • 1.4.1 Summary of Laws and Regulations Related to Network Data Security of Intelligent Connected Vehicle
    • 1.4.2 Multimodal Interaction Laws and Regulations (1)
    • 1.4.2 Multimodal Interaction Laws and Regulations (2)
    • 1.4.2 Multimodal Interaction Laws and Regulations (3)

2 Cockpit Single-modal Interaction

  • 2.1 Installation of Cockpit Modal Interaction System
    • 2.1.1 Installations & Installation Rate of In-vehicle Voice Recognition, 2024
    • 2.1.2 Installations & Installation Rate of In-vehicle Voiceprint Recognition, 2024
    • 2.1.3 Installations & Installation Rate of Exterior Voice Recognition, 2024
    • 2.1.4 Installations & Installation Rate of In-vehicle Gesture Recognition, 2024
    • 2.1.5 Installations & Installation Rate of In-vehicle Face Recognition (FACE ID), 2024
    • 2.1.6 Installations & Installation Rate of In-vehicle DMS, 2024
    • 2.1.7 Installations & Installation Rate of In-vehicle OMS, 2024
  • 2.2 Haptic Interaction
    • 2.2.1 Haptic Interaction Development Route
    • 2.2.2 Application Cases of Haptic Interaction in Vehicle Models
    • 2.2.3 Haptic Feedback Technology
    • 2.2.4 Summary of Haptic Interaction Suppliers
  • 2.3 Auditory Interaction
    • 2.3.1 Voice Recognition Development Route
    • 2.3.2 Application Cases of Voice Recognition in Vehicle Models
    • 2.3.3 Application Cases of Voiceprint Recognition in Vehicle Models
    • 2.3.4 Application Cases of External Voice Recognition in Vehicle Models
    • 2.3.5 Summary of Voice Interaction Suppliers
  • 2.4 Visual Interaction
    • 2.4.1 Gesture Recognition Development Route
    • 2.4.2 Application Cases of Gesture Recognition in Vehicle Models
    • 2.4.3 Facial Recognition Development Route
    • 2.4.4 Application Cases of Face Recognition in Vehicle Models
    • 2.4.5 Application Case of Line of Sight Recognition in Vehicle Models
    • 2.4.6 Application Case of Lip Movement Recognition in Vehicle Models
    • 2.4.7 Summary of Visual Interaction Suppliers (1) - Gesture Recognition
    • 2.4.7 Summary of Visual Interaction Suppliers (2) - Face Recognition
    • 2.4.7 Summary of Visual Interaction Suppliers (3) - Lip Movement Recognition
  • 2.5 Olfactory Interaction
    • 2.5.1 Olfactory Interaction Development Route
    • 2.5.2 Application Cases of Olfactory Interaction in Vehicle Models
    • 2.5.3 Summary of Automotive Smart Fragrance/Air Purification Suppliers
  • 2.6 Other Biometric Functions
    • 2.6.1 Iris Recognition Development Route
    • 2.6.2 Application Case of Iris Recognition in Vehicle Models
    • 2.6.3 Iris Recognition AR/VR Applications
    • 2.6.4 Solutions of Iris Recognition Suppliers
    • 2.6.5 Summary of Iris Recognition Suppliers
    • 2.6.6 Fingerprint Recognition Development Route
    • 2.6.7 Application Cases of Fingerprint Recognition in Vehicle Models
    • 2.6.8 Summary of Fingerprint Recognition Suppliers
    • 2.6.9 Vein Recognition Development Route
    • 2.6.10 Application Cases of Vein Recognition in Vehicle Models
    • 2.6.11 Summary of Vein Recognition Suppliers
    • 2.6.12 Heart Rate Recognition Development Route
    • 2.6.13 Application Case of Heart Rate Recognition in Vehicle Models
    • 2.6.14 Summary of Heart Rate Recognition Suppliers
    • 2.6.15 Electromyography Recognition Development Route
    • 2.6.16 Introduction to Electromyography Recognition Equipment
    • 2.6.17 Application of Electromyography Recognition in Vehicle Models
    • 2.6.18 Summary of Electromyography Recognition Suppliers

3 Cockpit Multimodal Interaction Solutions of OEMs

  • 3.1 SAIC
    • 3.1.1 Z-ONE Galaxy Full-stack Solution
    • 3.1.2 Rising Intelligent Cockpit Solution
    • 3.1.3 IM Intelligent Cockpit Solution
    • 3.1.4 IM Generative Foundation Model
    • 3.1.5 Multimodal Interaction OTA Content Summary (1): Rising Auto
    • 3.1.5 Multimodal Interaction OTA Content Summary (2): IM Motors
  • 3.2 BYD
    • 3.2.1 Intelligent Cockpit Solution
    • 3.2.2 In-cabin Unique Multimodal Interactive Applications
    • 3.2.3 Xuanji AI Foundation Model
    • 3.2.4 Multimodal Interaction OTA Content Summary (1): BYD Dynasty & Ocean
    • 3.2.4 Multimodal Interaction OTA Content Summary (2): Denza
    • 3.2.4 Multimodal Interaction OTA Content Summary (3): Fangchengbao & Yangwang
  • 3.3 Changan Automobile
    • 3.3.1 Changan Intelligent Cockpit Solution
    • 3.3.2 Nevo Intelligent Cockpit Solution
    • 3.3.3 Deepal Intelligent Cockpit Solution
    • 3.3.4 Avatr Intelligent Cockpit Solution
    • 3.3.5 Automotive Foundation Model: Xinghai Model
    • 3.3.6 Multimodal Interaction OTA Content Summary (1): Changan
    • 3.3.6 Multimodal Interaction OTA Content Summary (2): Avatr
    • 3.3.6 Multimodal Interaction OTA Content Summary (3): Deepal
  • 3.4 GAC
    • 3.4.1 Intelligent Cockpit Solution
    • 3.4.2 ADiGO SENSE AI Foundation Model
    • 3.4.3 Multimodal Interaction OTA Content Summary
  • 3.5 Geely
    • 3.5.1 Geely Intelligent Cockpit Solution
    • 3.5.2 Zeekr Intelligent Cockpit Solution
    • 3.5.3 Jiyue Intelligent Cockpit Solution
    • 3.5.4 Xingrui AI Foundation Model
    • 3.5.5 Kr AI Foundation Model
    • 3.5.6 Multimodal Interaction OTA Content Summary (1): Geely
    • 3.5.6 Multimodal Interaction OTA Content Summary (2): Zeekr
    • 3.5.6 Multimodal Interaction OTA Content Summary (3): Jiyue
  • 3.6 NIO
    • 3.6.1 Intelligent Cockpit Solution
    • 3.6.2 ONVO Intelligent Cockpit Solution
    • 3.6.3 In-cabin Unique Multimodal Interactive Applications
    • 3.6.4 Multimodal Perception Model: NOMI GPT
    • 3.6.5 Multimodal Interaction OTA Content Summary
  • 3.7 Xpeng Motors
    • 3.7.1 Intelligent Cockpit Solution
    • 3.7.2 In-cabin Unique Multimodal Interactive Applications
    • 3.7.3 Automotive Large Language Model: XGPT
    • 3.7.4 Multimodal Interaction OTA Content Summary
  • 3.8 Li Auto
    • 3.8.1 Intelligent Cockpit Solution
    • 3.8.2 In-cabin Unique Multimodal Interactive Applications
    • 3.8.3 Intelligent Cockpit
    • 3.8.4 Multimodal Interaction OTA Content Summary
  • 3.9 Leapmotor
    • 3.9.1 Intelligent Cockpit Solution (1)
    • 3.9.1 Intelligent Cockpit Solution (2)
    • 3.9.2 Voice Foundation Model: Tongyi
    • 3.9.3 Multimodal Interaction OTA Content Summary
  • 3.10 Xiaomi Auto
    • 3.10.1 Intelligent Cockpit Solution
    • 3.10.2 Car-side Large Model: MiLM
    • 3.10.3 Sound Foundation Model is Installed in Cars
    • 3.10.4 Multimodal Interaction OTA Content Summary (1)
    • 3.10.4 Multimodal Interaction OTA Content Summary (2)
  • 3.11 BMW
    • 3.11.1 Intelligent Cockpit Solution (1)
    • 3.11.1 Intelligent Cockpit Solution (2)
    • 3.11.2 In-cabin Unique Multimodal Interactive Applications
  • 3.12 Mercedes-Benz
    • 3.12.1 Intelligent Cockpit Solution
    • 3.12.2 In-cabin Unique Multimodal Interactive Applications
    • 3.12.3 Cooperation Dynamics of Cockpit Foundation Model
  • 3.13 Volkswagen
    • 3.13.1 Intelligent Cockpit Solution
    • 3.13.2 Upgrade Trends of Haptic Interaction System
    • 3.13.3 Upgrade Trends of Voice Interaction System

4 Cockpit Multimodal Interaction Solutions of Suppliers

  • 4.1 Desay SV
    • 4.1.1 Profile
    • 4.1.2 Multimodal Interaction Solution (1)
    • 4.1.2 Multimodal Interaction Solution (2)
  • 4.2 Joyson Electronics
    • 4.2.1 Profile
    • 4.2.2 Evolution of Joynext Intelligent Cockpit
    • 4.2.3 Multimodal Interaction Layout
    • 4.2.4 Features of Joynext Intelligent Cockpit Interaction (1)
    • 4.2.4 Features of Joynext Intelligent Cockpit Interaction (2)
  • 4.3 SenseTime
    • 4.3.1 Profile
    • 4.3.2 SenseAuto Intelligent Cockpit Product System
    • 4.3.3 SenseAuto Intelligent Cockpit Products
    • 4.3.4 SenseNova Model Empowers Cockpit Interaction
    • 4.3.5 SenseAuto Multimodal Interaction Application Case
  • 4.4 iFLYTEK
    • 4.4.1 Profile
    • 4.4.2 Full-Stack Intelligent Interaction Technology
    • 4.4.3 Features of Multimodal Perception System
    • 4.4.4 Spark Cognitive Foundation Model
    • 4.4.5 Spark Foundation Model Enables Cockpit Interaction
    • 4.4.6 Multimodal Interaction Becomes the Key Direction of iFlytek Super Brain 2030 Plan
  • 4.5 ThunderSoft
    • 4.5.1 Profile
    • 4.5.2 Cockpit Interaction Features
    • 4.5.3 Rubik Model Enables Cockpit Interaction
    • 4.5.4 Vehicle Operating System
  • 4.6 AISpeech
    • 4.6.1 Profile
    • 4.6.2 Features of Multimodal Interaction Solution
    • 4.6.3 Multimodal Interaction Products
    • 4.6.4 Language Foundation Model
  • 4.7 Huawei
    • 4.7.1 Profile
    • 4.7.2 Multimodal Interaction History
    • 4.7.3 HarmonyOS 4.0 Intelligent Cockpit
    • 4.7.4 New-generation HarmonySpace Cockpit
    • 4.7.5 HarmonySpace Interaction Features (1)
    • 4.7.5 HarmonySpace Interaction Features (2)
    • 4.7.5 HarmonySpace Interaction Features (3)
    • 4.7.5 HarmonySpace Interaction Features (4)
    • 4.7.5 HarmonySpace Interaction Features (5)
    • 4.7.5 HarmonySpace Interaction Features (6)
    • 4.7.6 HarmonyOS NEXT Interaction Features
    • 4.7.7 Pangu Foundation Model
  • 4.8 Baidu
    • 4.8.1 Profile
    • 4.8.2 Interaction Features of AI Native Operating System
    • 4.8.3 ERNIE Bot Empowers Baidu Smart Cabin
    • 4.8.4 Interaction Features of Baidu Smart Cabin Model 2.0
  • 4.9 Tencent
    • 4.9.1 Profile
    • 4.9.2 Cockpit Interaction Features (1)
    • 4.9.2 Cockpit Interaction Features (2)
  • 4.10 NavInfo
    • 4.10.1 Profile
    • 4.10.2 Cockpit Interaction Features
    • 4.10.3 Introduction to AutoChips
    • 4.10.4 Intelligent Cockpit Domain Control SoC Chip of AutoChips
    • 4.10.5 Application of AutoChips In-cabin Monitoring Function
  • 4.11 Continental
    • 4.11.1 Profile
    • 4.11.2 Multimodal Product Layout
    • 4.11.3 Cockpit Interaction Features
    • 4.11.4 Multimodal Interaction Products (1)
    • 4.11.4 Multimodal Interaction Products (2)
  • 4.12 MediaTek
    • 4.12.1 Profile
    • 4.12.2 Cockpit Interaction Features

5 Application Cases of Multimodal Interaction Solutions in Benchmarking Vehicle Models

  • 5.1 Cases of Traditional Brands
    • 5.1.1 Yangwang U9
    • 5.1.2 IM L6
    • 5.1.3 Geely Galaxy E8
    • 5.1.4 Zeekr 7X
    • 5.1.5 Jiyue 07
    • 5.1.6 Changan UNI-Z
    • 5.1.7 Changan Deepal G318
    • 5.1.8 Avatr 07
    • 5.1.9 Dongfeng e-phi-007
    • 5.1.10 ARCFOX aS5
    • 5.1.11 Exeed Sterra ET
  • 5.2 Cases of Emerging Brands
    • 5.2.1 Xiaomi SU7
    • 5.2.2 Luxeed R7
    • 5.2.3 STELATO S9
    • 5.2.4 Li Auto MEGA Ultra
    • 5.2.5 Xpeng MONA 03
    • 5.2.6 ONVO L60
    • 5.2.7 Leapmotor C16
  • 5.3 Cases of Joint Venture Brands
    • 5.3.1 Volvo EX30
    • 5.3.2 Lotus EMEYA
    • 5.3.3 2024 Buick E5
    • 5.3.4 2025 BMW i4
    • 5.3.5 2025 Mercedes-Benz All-electric EQE
    • 5.3.6 2025 Genesis GV70

6 Summary and Development Trends of Multimodal Interaction

  • 6.1 Fusion Application of Multimodal Interaction in Intelligent Cockpits
  • 6.2 Trend 1
  • 6.3 Trend 2 (1): Cockpit Interaction Carriers Expand, and Interaction Range Extends outside the Vehicle
  • 6.3 Trend 2 (2)
  • 6.3 Trend 2 (3)
  • 6.4 Trend 3 (1)
  • 6.4 Trend 3 (2)
  • 6.4 Trend 3 (3)
  • 6.5 Trend 4