Tesla FSD 12 makes its live-broadcast debut! Only one intervention in 45 minutes, as video "feeds" the AI "driver"
Source: "Science and Technology Innovation Board Daily"
Editor: Zheng Yuanfang
As previously promised, Musk publicly live-streamed the Tesla FSD 12 beta last weekend, driving a Model S equipped with HW3.
During the 45-minute broadcast, Musk, sitting behind the steering wheel with his phone in hand, intervened in the vehicle's behavior only once: prompting it to choose the emptier of two straight-ahead lanes.
About 20 minutes into the broadcast, Musk made the drive's only full takeover. The Model S, which needed to go straight, had stopped at a red light. But when the left-turn arrow turned green, the car started to move as well; fortunately, Musk and the engineer beside him stopped it in time.
▌Can video "feed" an AI "driver"?
In fact, when the vehicle slowed for speed bumps and gave way to a scooter rider during the broadcast, Musk repeatedly emphasized that no line of code in FSD 12 explicitly tells the car to do these things. It has never been taught to read road signs, nor does it know what a scooter is; FSD 12 performs these behaviors entirely as a result of training on large volumes of video. Fed video data, the AI learns to drive on its own, "doing things the way humans do."
If FSD makes the wrong decision in a particular scenario, Tesla throws more data (mainly video) of that scenario into the neural network's training.
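To make that loop concrete, here is a minimal Python sketch of the idea. Every name in it (`Clip`, `DrivingModel`, `improve`) is a hypothetical illustration of the process described above, not Tesla's actual code or data pipeline.

```python
# Hypothetical sketch of the data-driven feedback loop described above.
# The Clip type, DrivingModel interface, and mining logic are assumptions
# made for illustration, not Tesla's real code.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    video: bytes        # raw camera footage from the fleet
    human_action: str   # what the human driver actually did

class DrivingModel:
    def predict(self, video: bytes) -> str:
        ...             # end-to-end network: video in, control action out
    def train(self, clips: List[Clip]) -> None:
        ...             # retrain on the accumulated video dataset

def improve(model: DrivingModel, fleet_logs: List[Clip], dataset: List[Clip]) -> None:
    # 1. Find scenarios where the model's decision diverges from the human's.
    failures = [c for c in fleet_logs if model.predict(c.video) != c.human_action]
    # 2. "Throw more data (mainly video)" of those scenarios into training.
    dataset.extend(failures)
    model.train(dataset)
```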
Of course, mediocre, indiscriminately collected data is not enough; the data fed to the neural network has to be carefully curated. Musk emphasized that high-quality data from excellent drivers is the key to training Tesla's autonomous driving.
"A large amount of mediocre data does not improve driving, and data management is quite difficult. We have a lot of software that can control what data the system selects and what data it trains on."
For Tesla, a major source of data is its fleet of cars around the world. Musk also revealed that Tesla has FSD test drivers across the globe, including in New Zealand, Thailand, Norway, and Japan.
Since 2020, Tesla has been shifting Autopilot's decision-making from hand-written program logic to neural networks and AI. Three years on, Musk's FSD 12 broadcast makes clear that nearly all decision-making and scene handling has moved into Tesla's neural networks.
FSD 11's dedicated control stack contains more than 300,000 lines of C++ code, while FSD 12 keeps only a small fraction of that. Musk has also said before that vehicle control is the last piece of the "Tesla FSD AI puzzle," and that moving it into the network would shrink those 300,000-plus lines of C++ by roughly two orders of magnitude.
▌Full end-to-end AI driving control
FSD 12 is Tesla's most significant upgrade yet, realizing full end-to-end AI driving control.
Why choose an end-to-end approach? Musk gave more detail when he connected with WholeMars before the live broadcast.
** "This is how humans do it," he said, "photons in, hands and feet (control) out." - Humans rely on eyes and biological neural networks to drive. For autonomous driving, cameras and neural network AI are correct The general decision-making scheme**.
Although the neural network's specific reasoning is hard to explain, the same is true of humans: a taxi passenger cannot know exactly what the driver is thinking, and can only go by the driver's ratings.
Brokerage analysts have pointed out that a key difference of the end-to-end approach lies in the architecture: traditional modular designs split intelligent driving into separate tasks, such as perception, prediction, and planning, each handled by a specialized AI model or module, while end-to-end AI is an "integration of perception and decision-making," fusing the two into a single model.
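The contrast the analysts draw can be sketched in a few lines of illustrative Python; the class and stage names below are generic textbook ones, not any automaker's real software.

```python
# Illustrative contrast between the two architectures; all names are generic.

class ModularStack:
    """Traditional pipeline: specialized modules hand results to each other."""
    def perception(self, frames): ...    # detect lanes, vehicles, signs
    def prediction(self, objects): ...   # forecast other agents' motion
    def planning(self, futures): ...     # choose an ego trajectory
    def control(self, trajectory): ...   # low-level steering/throttle commands
    def drive(self, frames):
        return self.control(self.planning(self.prediction(self.perception(frames))))

class EndToEndModel:
    """End-to-end: one learned mapping from pixels to controls.

    "Photons in, controls out" -- perception and decision-making are fused
    into a single neural network with no hand-written intermediate stages.
    """
    def __init__(self, network):
        self.network = network   # a trained video-to-control neural net

    def drive(self, frames):
        return self.network(frames)
```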
At present, most of Tesla's training still runs on Nvidia GPUs, with Tesla's own Dojo supercomputer in a supporting role. Tesla has spent $2 billion on training so far this year.
Tesla is still racing to prepare a new compute cluster of 10,000 Nvidia H100s, expected to come online this Monday (August 28). Notably, the cluster uses InfiniBand for its interconnect; Musk admitted frankly that InfiniBand is in even shorter supply than GPUs today.