
Existing in-vehicle computing technology for advanced driver assistance simply does not have the processing power to take the next step towards fully autonomous systems.

“Imagine training such a system to be ready for any possible eventuality. It’s just not possible,” says an NVIDIA press statement. The company has applied neural network theory to the design of its DRIVE PX development platform, which includes a new deep neural network software development kit called DIGITS, as well as video capture and video processing libraries.

DIGITS can be run on systems powered by NVIDIA GPUs, including the company’s new DIGITS DevBox development platform, and lets computers train themselves to understand the complex scenes a driver would encounter in the real world. “Now we can do more than just train systems to recognize objects like a human – we can train the system to behave and anticipate like a human,” says NVIDIA. The NVIDIA DRIVE PX development platform is now available to automakers, Tier 1 automotive suppliers and research institutions.

In March this year, shortly after announcing the DRIVE PX self-driving car computer, NVIDIA CEO and co-founder Jen-Hsun Huang invited Tesla Motors CEO Elon Musk to see the progress first-hand. Musk remarked: “What NVIDIA is doing with Tegra is really interesting and really important for self-driving in the future. We’ll take autonomous cars for granted in quite a short time. I almost view it as a solved problem. We know what to do, and we’ll be there in a few years.”

Audi AG announced in January 2015 that it would use the Tegra X1 to build on its current work with the chip’s predecessor, the Tegra K1, to supply the intelligence needed to help achieve the dream of a self-driven car. “With every mile it drives, every hour, the car will learn more and more,” said Ricky Hudi, Audi’s executive vice president for electrics/electronics development. “We’re very close to reality. We’re not demonstrating a vision. We’re demonstrating what will be reality,” he added.

Automotive Industries (AI) asked Danny Shapiro, senior director of automotive at NVIDIA, how his company has helped cars get smarter.

Shapiro: Cars today are full of electronics. However, NVIDIA is not just selling chips to the auto industry; we are developing extremely powerful computers for the car. These complex systems are the brains of the vehicle, and are essential as we move into the age of piloted driving. It is this combination of energy-efficient supercomputing and sophisticated software that will enable cars to be much safer in the near term, and then ultimately drive themselves.

AI: What are some of the challenges of deep learning for automobiles?

Shapiro: An enormous problem in developing self-driving technologies is anticipating all the possible scenarios that a vehicle could encounter. No programmer can write enough “if…then…else” statements to account for every case. Deep learning gives the car the ability to learn not just how to recognize objects, but also driving behavior. We are modeling this system on the human brain and how humans actually learn through experience. The challenge, of course, is building a robust system with enough data.
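To make that contrast concrete, here is a minimal sketch (not NVIDIA’s code; the function names and the toy logistic model are invented for illustration) of hand-coded rules versus a learned braking policy:

```python
import numpy as np

def rule_based_brake(distance_m, speed_mps, is_pedestrian):
    """Hand-coded rules: every scenario needs its own explicit branch."""
    if is_pedestrian and distance_m < 30:
        return True
    if distance_m < speed_mps * 2.0:  # crude two-second headway rule
        return True
    return False  # unforeseen cases silently fall through

# A learned policy replaces hand-written branches with parameters fit to
# data. A toy logistic model stands in for a deep neural network here;
# the weights are invented for illustration, not learned from real data.
weights = np.array([-0.15, 0.05, 2.0])
bias = 1.0

def learned_brake(distance_m, speed_mps, is_pedestrian):
    x = np.array([distance_m, speed_mps, float(is_pedestrian)])
    p = 1.0 / (1.0 + np.exp(-(x @ weights + bias)))  # sigmoid -> probability
    return p > 0.5

print(rule_based_brake(25.0, 20.0, True), learned_brake(25.0, 20.0, True))
```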

AI: What expertise was put into play when NVIDIA approached this technology?

Shapiro: NVIDIA has been working in computer vision and deep learning for a long time. Our GPUs are ideally suited to deep learning because of the highly parallel nature of the algorithms. Training a neural network is an extremely compute-intensive task: what can take months on a CPU can be done in days on a GPU. The advantage of deep learning is that it can be applied to many types of applications, from natural language processing to computer vision to behavioral analysis.
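As a rough illustration of why that parallelism matters, the sketch below times the dominant operation in training, large matrix multiplication, on a CPU and, if one is available, a GPU. PyTorch is used purely for convenience; the interview does not name a specific framework:

```python
import time
import torch  # PyTorch used for illustration only

def timed_matmuls(device, n=4096, reps=10):
    """Time repeated large matrix multiplies, the core workload of training."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.time()
    for _ in range(reps):
        c = a @ b  # highly parallel: maps onto thousands of GPU cores
    if device == "cuda":
        torch.cuda.synchronize()  # wait for asynchronous GPU kernels to finish
    return time.time() - start

print("CPU:", timed_matmuls("cpu"), "seconds")
if torch.cuda.is_available():
    print("GPU:", timed_matmuls("cuda"), "seconds")
```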

AI: Tell us about the DRIVE PX – what went into its development?

Shapiro: DRIVE PX was designed as a development platform for automakers that want to build self-driving cars. As more and more sensors appear on the car, each component simply generates its own stream of data. We realized there needed to be a central processing system to fuse together all the different data from cameras, radar, lidar, ultrasonics and so on. Working with several automakers, we came up with a design for DRIVE PX that can aggregate that data and then act upon it in real time.
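One way to picture the aggregation idea is the hypothetical sketch below; the data structures and the 50 ms fusion window are assumptions for illustration, not DRIVE PX internals. Readings from each sensor are timestamped and fused into a single snapshot of the world before the system acts on it:

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    source: str       # "camera", "radar", "lidar", "ultrasonic"
    timestamp: float  # seconds since boot
    data: object      # raw frame, point cloud, echo distances, ...

@dataclass
class WorldSnapshot:
    timestamp: float
    readings: dict = field(default_factory=dict)

def fuse(readings, window_s=0.05):
    """Fuse all readings inside one time window into a single snapshot."""
    latest = max(r.timestamp for r in readings)
    snap = WorldSnapshot(timestamp=latest)
    for r in sorted(readings, key=lambda r: r.timestamp):
        if latest - r.timestamp <= window_s:
            snap.readings[r.source] = r.data  # newest reading per sensor wins
    return snap

snap = fuse([
    SensorReading("camera", 10.00, "frame_0421"),
    SensorReading("radar", 10.02, [12.4, 33.1]),
    SensorReading("lidar", 9.90, "stale point cloud"),  # outside the window
])
print(snap.readings)  # {'camera': 'frame_0421', 'radar': [12.4, 33.1]}
```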

AI: How will DRIVE PX change the way car computers are viewed?

Shapiro: DRIVE PX is the most sophisticated car computer ever built, but we are just getting started. This advanced technology is going to continue to get more and more powerful. The benefit is that DRIVE computers are based on a scalable architecture. The same code that runs on supercomputers in the datacenter can run on DRIVE car computers. This makes it extremely cost-effective for development, and reduces time to market.
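A minimal sketch of the “same code everywhere” point, with PyTorch standing in for the CUDA-based DRIVE stack: identical inference code selects whatever GPU is present, whether a datacenter card or an embedded module. The network itself is a hypothetical stand-in:

```python
import torch  # PyTorch used for illustration; the DRIVE stack is CUDA-based

# Pick whatever GPU is present: a datacenter card during development, or
# an embedded module in the vehicle. The code is identical either way.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Sequential(            # a stand-in perception network
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 4),              # e.g. four hypothetical object classes
).to(device)

frame = torch.randn(1, 3, 224, 224, device=device)  # one simulated camera frame
with torch.no_grad():
    scores = model(frame)
print("ran on", device, "->", scores.shape)
```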

AI then asked Mike Houston, Distinguished Engineer, Deep Learning at NVIDIA, how difficult it is to get cars to “learn”.

Houston: Now that NVIDIA has developed the complete training hardware and software, the learning process has been dramatically simplified. The learning process will be the result of automakers and Tier 1s processing massive amounts of video to teach the system about different driving situations, as well as regional differences in laws and customs. The more an automaker invests in the training process, the better the results will be.

AI: Are we close to self-driving cars that are safe and feasible?

Houston: Later this year, some automakers will offer autopilot features under certain conditions and speeds. And over the next several years, more automakers will activate these features for traffic jam assistance, highway cruising and self-parking. This is a reality. Fully autonomous vehicles are still many years away, but they will arrive sooner than you think.

AI: How have advanced driver assistance systems evolved over the years and how important is it for them to start “thinking” for themselves?

Houston: Traditional ADAS has relied on basic computer vision techniques and straightforward classification schemes. These have been effective for some ADAS features, such as lane departure warnings or pedestrian detection. But fully assessing everything happening 360 degrees around the vehicle, building an environment map, identifying free space and taking appropriate action will require a much more sophisticated system, such as a deep neural network.
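An occupancy grid is one common representation for such an environment map (the interview does not say which representation NVIDIA uses). The sketch below rasterizes obstacle returns from a 360-degree sensor sweep into cells around the vehicle, leaving empty cells as candidate free space:

```python
import numpy as np

def occupancy_grid(points_xy, size_m=60.0, resolution_m=0.5):
    """Rasterize obstacle points (meters, vehicle at origin) into a grid."""
    cells = int(size_m / resolution_m)
    grid = np.zeros((cells, cells), dtype=bool)
    idx = ((points_xy + size_m / 2) / resolution_m).astype(int)  # center on car
    valid = (idx >= 0).all(axis=1) & (idx < cells).all(axis=1)   # drop far points
    grid[idx[valid, 1], idx[valid, 0]] = True  # mark occupied cells
    return grid

# Three hypothetical obstacle returns around the vehicle (x, y in meters).
points = np.array([[5.0, 2.0], [-3.5, 10.0], [20.0, -8.0]])
grid = occupancy_grid(points)
print("occupied:", int(grid.sum()), "free (candidate drivable):", int((~grid).sum()))
```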

AI: What is DIGITS, and what role does it play in making this a reality?

Houston: DIGITS is the deep learning GPU training system. This phase of creating the self-driving car happens offline, before a car is on the road. Massive amounts of data from thousands, if not millions, of miles of driving are fed into the system. Through supervised learning, a data scientist helps guide the training system. Once a robust model has been created, it is loaded into the vehicle. During further testing, if the car encounters situations in which it is not confident, it will record that information and communicate it back to the datacenter. That new video is used for additional training and becomes part of an enhanced deep neural network that can be pushed to the vehicle over the air. The training is not real-time; it is an iterative process that makes the vehicle smarter the more it drives.
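The iterative loop Houston describes might look like the sketch below, in which the model class, the confidence threshold and the frame format are all hypothetical stand-ins:

```python
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; the interview gives no number

class ToyModel:
    """Stand-in for the deployed deep neural network."""
    def classify(self, frame):
        confidence = random.random()  # real models output class probabilities
        label = "vehicle" if confidence > 0.5 else "unknown"
        return label, confidence

def drive_and_log(model, frames):
    """Flag frames the model is unsure about for the next training round."""
    hard_frames = []
    for frame in frames:
        label, confidence = model.classify(frame)
        if confidence < CONFIDENCE_THRESHOLD:
            # These frames are later uploaded to the datacenter, labeled,
            # used to retrain the network, and the enhanced model is
            # pushed back to the fleet over the air.
            hard_frames.append(frame)
    return hard_frames

flagged = drive_and_log(ToyModel(), frames=list(range(100)))
print(len(flagged), "of 100 frames flagged for retraining")
```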
