Market Research Report
Product Code
1518884
Analysis on DJI Automotive's Autonomous Driving Business, 2024
This report investigates and analyzes DJI Automotive's autonomous driving business, providing information on the company's core technologies, solutions, and development trends.
Research on DJI Automotive: leading the NOA market with a unique technology route.
In 2016, DJI Automotive's internal engineers installed a stereo sensor + vision fusion positioning system in a car and drove it successfully. The perception, positioning, decision-making, and planning technologies DJI accumulated in the drone field have since been successfully transferred to the intelligent driving field.
Almost all of DJI Automotive's founding and management team members came from DJI's drone projects. At its founding, DJI Automotive had only about 10 members, mainly staff temporarily transferred from DJI's Flight Control Department and Vision Department.
DJI positions itself as a company specializing in intelligent robot research, holding that drones and autonomous vehicles are simply different forms of intelligent robots. Relying on its unique technology route, DJI leads in the mass production and application of NOA. By DJI Automotive's estimates, around 2 million passenger cars on the road will be equipped with its intelligent driving systems in 2025.
Continuously optimizing stereo vision sensors
One of DJI Automotive's core technologies is stereo vision. Even when other sensors such as GPS fail, a drone can still hover, avoid obstacles, and measure speed based on the stereo camera's visual perception.
After applying stereo vision technology to autonomous vehicles, DJI Automotive continues to optimize stereo vision sensors according to requirements of different autonomous driving levels.
In 2023, to meet the needs of NOA, DJI Automotive launched its second-generation inertial navigation stereo vision system, which eliminates the overall lens hood by adding a customized optical polarizer and dispenses with the rigid connecting rod by using a better self-calibration algorithm. This makes the sensor easier to install, and the distance between the two cameras can be configured flexibly from 180 mm to 400 mm. Eliminating the rigid connecting rod is a major advance for stereo vision sensors, allowing stereo cameras to be applied in many more scenarios.
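Why the camera baseline matters can be sketched with the classic pinhole stereo relation, where depth Z = f * B / d for focal length f (in pixels), baseline B, and disparity d. The numbers below (focal length, target depth) are illustrative assumptions, not DJI Automotive specifications; the sketch only shows that the wider 400 mm baseline yields larger disparities, and hence finer depth resolution, at long range.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f = 1000.0  # focal length in pixels (assumed, for illustration)
for baseline in (0.18, 0.40):  # the 180 mm and 400 mm ends of the configurable range
    # Disparity produced by an object 100 m away (inverse of the relation above)
    d = f * baseline / 100.0
    print(f"baseline {baseline} m -> disparity {d:.1f} px at 100 m")
```

Since disparity is measured in whole or sub-pixel steps, the larger disparity at 400 mm translates into smaller depth error per pixel of disparity noise.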
To address the needs of L3 autonomous driving, in 2024 DJI Automotive introduced a LiDAR-vision system that combines LiDAR, a stereo sensor, a long-focus mono camera, and inertial navigation. Compared with the "LiDAR + front camera" solution currently common on the market, the system delivers 100% of the performance and replaces all of the functions while cutting costs by 30% to 40%. Thanks to its integrated design, the "LiDAR-vision" solution can also be built into the cabin as a whole, reducing overall installation costs.
The "LiDAR-vision" solution can further enhance safety in longitudinal vehicle control. Thanks to LiDAR's precise ranging capability and robustness to illumination, it can further improve the safety and comfort of intelligent driving systems in scenarios such as close-range cut-ins, complex urban traffic flow, response to vulnerable road users (VRUs), arbitrary obstacle avoidance, detours, and VRUs at night.
Using drone technologies for data acquisition and simulation
Among the three autonomous driving data acquisition methods, acquisition by vehicles is the most common, but the proportion of effective data is low, the collecting vehicle easily interferes with the real behaviors of surrounding vehicles, and data in sensor blind spots cannot be recorded. Another method is acquisition in the field, which offers low flexibility and insufficient reliability as a result of angle skew and low image accuracy.
According to in-depth research by fka, the automotive technology research institute of RWTH Aachen University, and DJI Automotive's own practice over the past two years, aerial survey data acquisition by drones has obvious advantages. Drones can collect richer and more complete scenario data, and can directly capture unobstructed aerial shots of all vehicles in the target vehicle's blind spots, reflecting realistic, interference-free human driving behavior. They can also more efficiently collect data on specific road sections and in special driving scenarios, for example on/off-ramps and frequent cut-ins.
Why has the implementation of vision-only autonomous driving suddenly accelerated?
Why has the pace of implementing vision-only technology solutions suddenly quickened since 2024? The answer is foundation models. Research shows that a truly autonomous driving system needs at least about 17 billion kilometers of road verification before it is production-ready, because even if existing technology can handle more than 95% of common driving scenarios, problems may still occur in the remaining 5% of corner cases.
Generally, learning a new corner case requires collecting more than 10,000 samples, and the entire cycle takes more than two weeks. Even if a team had 100 autonomous vehicles conducting road tests 24 hours a day, the time required to accumulate the data would be measured in hundreds of years - which is obviously unrealistic.
Foundation models are used to quickly reconstruct real scenarios and generate corner cases in various complex scenarios for model training. Foundation models (such as the Pangu model) can shorten the closed-loop cycle for autonomous driving corner cases from more than two weeks to two days.
Currently, DJI Automotive, Baidu, PhiGent Robotics, GAC, Tesla, and Megvii, among others, have launched vision-only autonomous driving solutions. This report summarizes and analyzes vision-only autonomous driving routes.