Issue: Aug 2018


Giving cars the power to think



We humans can drive cars almost without conscious thought about what we are doing, often effectively in semi-autonomous mode on well-known routes.

by Nick Palmen


Darrin Shewchuk, Senior Director, Corporate Communications, HARMAN, a Samsung Company.

Engineers are only now beginning to appreciate just how much computing power is needed to perform such everyday functions. “The human brain is constantly performing incredibly complex calculations while driving,” said Darrin Shewchuk, Senior Director of Corporate Communications at HARMAN, a Samsung Company. “How far is that lamppost? Is that pedestrian going to step into the street? How long until the yellow light turns red? The industry has made incredible advances in automation, yet in-car computing is still a long way from approximating the power of our brains.”

HARMAN and Samsung chose the 2018 Geneva International Motor Show as the European venue to unveil a range of connected car solutions that support their joint mission to become leaders in connectivity and autonomous driving.
The key innovations that were showcased included a reinvented digital cockpit platform for all vehicle segments that has given the interior of the car a makeover; a telematics solution along with the industry’s first automotive-grade 5G-ready connectivity solution; and an ecosystem of partners and solutions to further build out Samsung’s DRIVELine, its new open platform for automated driving solutions.

One of the partnerships has resulted in the Rinspeed SNAP, which demonstrates the concept of providing a seamless and
rich user experience (UX) for full Level 5 autonomous driving by as early as 2025.

The Rinspeed SNAP concept car comes without a steering wheel and provides occupants of the vehicle with their own personalized settings and experience.

Set up as a “skateboard” chassis with detachable “pods” that clip onto it, the SNAP delivers entertainment through the pods that
is highly configurable, providing a personalized experience for every passenger on every ride. The configuration possibilities range from shape-shifting in-vehicle speakers to brand-specific visual elements changing in real time and sound tuned to accommodate the listening demands of each user.

Automotive Industries (AI) asked Shewchuk how he is combining his twin passions for vehicles and technology.

Shewchuk: I am a car guy, but I am also a technology geek. I have been in the automotive industry for 23 years now.
I started my career at GM’s Delphi, where I spent 19½ years working on all the cool technology that is just now coming
to market. I worked on the EV1, the first electric vehicle; on fuel cells; on telematics; and in safety, doing
radars and adaptive cruise control in the 1990s and 2000s. I then ran Delphi’s autonomous driving program. The car that drove from California to New York in 2015 was my project.

At the end of 2015, when I got the chance to join Samsung, it was pretty exciting for me because Samsung said: “We want to get
into auto, what should we do?” So I got the chance to work on three major things within Samsung. Number one was corporate strategy: looking at how we enter the automotive market and what we can do to become a major player. That resulted in the acquisition of HARMAN. The second was new venture investment.

In 2017 Samsung invested about US$200 million in roughly 75 start-ups. In addition, we have invested around US$140 million in
existing automotive companies over the past two years. Our objective is to leverage those investments strategically in order
to bring new technology to market and help us accelerate and develop our own intellectual property. The third was developing
innovation in autonomous driving.

AI: What are the latest HARMAN and Samsung joint automotive activities?

Shewchuk: When we acquired HARMAN, we had already identified 200 synergies between the two companies. Three of those are automotive: telematics, looking at the future of 5G; the cockpit, where it is all about displays, user experience and
the AI digital assistant; and the third area, ADAS and autonomous driving. I am leading the ADAS and autonomous driving synergy.
Those three are the main focus areas, and the results were on show at CES and at the Geneva Motor Show.

AI: What does the Samsung DRIVELine autonomous driving platform bring to Rinspeed SNAP?

Shewchuk: The idea of DRIVELine is that it is a modular and scalable computing platform. We really believe it is the future of in-car computing, but first we have to solve the biggest problem, which is how to develop advanced driver assistance systems (ADAS) to the point where they can drive the vehicle autonomously.

If you look at Level 2 ADAS, it requires what in information technology language is known as 3-10 tera (10^12) floating point operations per second (teraFLOPS). At Level 3 you are talking about 50-100 tera operations. At Level 4 it is 200 tera operations, and at Level 5 probably around 500 tera operations, approaching one petaFLOPS (10^15, or one quadrillion FLOPS). Computing power is therefore a big problem that needs to be solved for the rollout of the autonomous vehicle.
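The compute scaling Shewchuk describes can be captured in a small table. The following sketch encodes the figures quoted above (the ranges and the helper name are illustrative, not part of any Samsung API):

```python
# Approximate compute requirements per SAE automation level, in tera
# operations per second (TOPS), as quoted in the interview.
# Ranges are (low, high); figures are order-of-magnitude estimates.
REQUIRED_TOPS = {
    2: (3, 10),      # Level 2 ADAS: 3-10 TOPS
    3: (50, 100),    # Level 3: 50-100 TOPS
    4: (200, 200),   # Level 4: around 200 TOPS
    5: (500, 1000),  # Level 5: ~500 TOPS, approaching 1 petaFLOPS (1000 TOPS)
}

def min_platform_tops(target_level: int) -> int:
    """Return the upper-bound TOPS a platform should provision
    to support the given automation level."""
    if target_level not in REQUIRED_TOPS:
        raise ValueError(f"unsupported automation level: {target_level}")
    return REQUIRED_TOPS[target_level][1]
```

A platform sized for Level 5 would therefore need roughly 100 times the compute of a Level 2 system, which is why Shewchuk frames computing power as the first problem to solve.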

Another challenge is managing all the data and sensor inputs. DRIVELine is designed to solve these challenges. There are six focus
areas. Number one is computing power: we have a modular solution that can scale from Level 2 up to Level 5. Second is
sensors: bringing in everything from cameras to lidars to radars. The third is artificial intelligence (AI) and algorithms.
The fourth is the cloud. The fifth is connectivity, which enables the cloud. And the sixth is user experience. The cloud has two
elements to it. First is the training cloud, where algorithms are trained to process the data that you collect and annotate. Second is the deployment
or operational cloud, which is how you manage a fleet, access the vehicle for over-the-air (OTA) programming, ensure cybersecurity
and provide content services for the autonomous car. Those are the main elements that we focus on in the DRIVELine platform.

AI: How is the new autonomous driving platform designed to scale from Level 3 automation up to Level 4 and 5 (open & modular)?

Shewchuk: That platform scales from roughly 30 tera operations up to 500 tera operations. If you think about it, it is like a data center. And if you look at Samsung’s history with IT and memory, we think the future of in-car computing is going in the same direction as IT in terms of the evolution of servers and data centers. So it is essentially a centralized computing platform that is scalable and has a PCIe backbone with Internet connectivity to enable data sharing.

At the same time it has to be ASIL D compliant. Our architecture ensures it is functionally safe up to ASIL D. The architecture that we have designed, from both the hardware perspective and the software on top, enables modularity, scalability and openness. The openness means that we can bring in different chip technologies, whether internal Samsung silicon or that of external partners, and we can distribute programs across these different chipsets for separate computing solutions.
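The "distribute programs across different chipsets" idea can be illustrated with a simple placement routine. Everything here is a made-up sketch: the chipset names, capacities and the greedy strategy are assumptions for illustration, not the DRIVELine scheduler.

```python
from typing import Dict, List

def assign_workloads(chipsets: Dict[str, int],
                     workloads: Dict[str, int]) -> Dict[str, List[str]]:
    """Greedily place each workload (TOPS demand) on the chipset with the
    most remaining capacity, largest workloads first. Raises if a workload
    does not fit anywhere."""
    remaining = dict(chipsets)                       # TOPS left per chipset
    placement: Dict[str, List[str]] = {name: [] for name in chipsets}
    for task, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        target = max(remaining, key=remaining.get)   # most headroom
        if remaining[target] < demand:
            raise RuntimeError(f"no chipset has capacity for {task}")
        remaining[target] -= demand
        placement[target].append(task)
    return placement
```

A heterogeneous platform would add constraints a toy like this ignores (memory bandwidth, safety partitioning, which accelerator a model was compiled for), but the modular principle is the same: programs are assigned to whichever chipset can serve them.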

We can also bring in different sensor types and different types of data, and we can scale the number of inputs through modularity, supported by different application algorithms. If a car maker says, “We want to use our own perception system,” then we have an API available, and the OEM can use the platform of their choice.
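An API of the kind Shewchuk describes, where an OEM plugs its own perception system into the platform, might look roughly like the following. All class and method names here are hypothetical; the interview does not describe the actual DRIVELine interface.

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

class PerceptionModule(ABC):
    """Hypothetical plug-in interface an OEM's perception system would implement."""
    @abstractmethod
    def detect(self, sensor_frame: Dict[str, bool]) -> List[str]:
        """Return labels of objects detected in one fused sensor frame."""

class OEMPerception(PerceptionModule):
    """Stand-in for an OEM's own perception stack."""
    def detect(self, sensor_frame: Dict[str, bool]) -> List[str]:
        # A real module would run the OEM's detection models here; this
        # stand-in just echoes back whichever objects are flagged present.
        return [label for label, present in sensor_frame.items() if present]

class DrivingPlatform:
    """Toy platform: accepts any object implementing PerceptionModule."""
    def __init__(self) -> None:
        self.perception: Optional[PerceptionModule] = None

    def register_perception(self, module: PerceptionModule) -> None:
        self.perception = module

    def process(self, sensor_frame: Dict[str, bool]) -> List[str]:
        if self.perception is None:
            raise RuntimeError("no perception module registered")
        return self.perception.detect(sensor_frame)
```

The design point is the abstract interface: as long as the OEM's module honors the contract, the platform does not care whose models sit behind it, which is what makes the platform "open".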


