
Arm's Zena CSS empowers automotive innovators to deliver leading-edge vehicle experiences with high performance.
Artificial Intelligence (AI) is reshaping vehicle design and customer expectations in the automotive sector just as it is in other industries.
AI workloads are growing on the foundation laid by software-defined vehicles as OEMs strive to differentiate vehicles and models through user experiences. This introduces challenges, including ensuring seamless integration and delivering real-time performance, functional safety, and cybersecurity across increasingly complex systems.
To meet this need, Arm recently unveiled the Arm Zena Compute Subsystems (CSS), which the company describes as “a transformative, pre-integrated and validated CSS to accelerate development for AI-defined vehicles.
“Designed to scale from digital cockpit to advanced driver-assistance systems (ADAS) as part of the wider shift to a central compute architecture, Zena CSS empowers automotive innovators to deliver leading-edge vehicle experiences with high performance, functional safety, and cloud-native readiness at its core,” the company stated at the introduction of CSS.
Arm has been involved in the automotive industry for more than 25 years. Nearly every global automaker incorporates Arm technology in one form or another, whether through the Cortex-M and Cortex-R CPU portfolios or higher-end microprocessors built on Arm Cortex-A CPUs.
Most recently, NVIDIA announced the Jetson Thor series of modules for physical artificial intelligence and robotics. NVIDIA DRIVE AGX Thor integrates Arm Neoverse V3AE CPUs with NVIDIA’s GPUs and accelerators to provide the centralized computing power needed for AI-driven automotive functions, while Jetson AGX Thor offers similar capabilities for physical AI applications like robotics.
Another example is the long-term technology partnership between Arm and MediaTek in which Arm licenses its CPU and GPU architecture, and MediaTek designs chipsets such as the Dimensity and Kompanio series. MediaTek’s chipsets are built on Arm’s power-efficient designs and include the latest Arm Cortex CPUs and Immortalis GPUs.
MediaTek’s new Dimensity Auto Cockpit Platform C-X1 SoC uses the Armv9.2-A architecture, an NVIDIA Blackwell GPU, and a deep learning accelerator to power efficient, on-device AI for vehicles. It supports real-time features such as voice assistants, route planning, travel vlog creation, driver monitoring, environmental perception inside and outside the car, and personalized media recommendations.
Arm also works closely with a number of other cockpit SoC suppliers, primarily through its Cortex-A, Cortex-M, and Cortex-R series.
Automotive Industries (AI) asked Suraj Gajendra, Vice President, Product and Solutions, Automotive Line of Business at Arm, to provide some more background on Arm in the automotive space, starting with how Arm designs for automotive use cases:
Gajendra: Arm Automotive Enhanced (AE) solutions provide a scalable, safety-certified compute platform designed to support AI functions and streamline software development in vehicles.

These solutions are all designed, validated, and optimized to be used with Arm Cortex processors and Arm GPUs and ISPs. Built upon the open AMBA interface standard, Arm System IP provides design teams with the foundation for building better systems.
The third generation of IP, introduced in 2024, is optimized for automotive workloads such as entertainment and advanced driver-assistance systems.
So, from a technology-enablement perspective, we build compute IP for use in processors and sensors, while also supplying all the other system IP that ensures a solid compute architecture for infotainment and ADAS, for example.
AI: Does your portfolio cover a full range of AE IP products?
Gajendra: Fundamentally, through our full range of microcontrollers to high-end microprocessors, our technology is found from bumper to bumper.
Arm Neoverse cores are designed for high-performance compute, Cortex-M for real-time, Cortex-R for safety-critical, and Cortex-A for scalable applications across automotive systems. These cores incorporate safety features such as dual-core lockstep (DCLS) and support compliance with ASIL D requirements.
AI: Do your Automotive Enhanced (AE) solutions offer a scalable, safety-certified compute foundation to power AI across the vehicle?
Gajendra: The way we approach AI is through the workload, which can be shared by a CPU, a GPU and an NPU.
We ensure that our IPs and technologies are optimized for whatever part of that workload runs on our systems. In March 2025 we announced the release of the Arm Kleidi Libraries for accelerating any framework on Arm-based CPUs.
The libraries are built from our software ecosystem, which we have nurtured over the last 35 years.
They are designed to fully leverage the capabilities of Arm Cortex-A, Cortex-X, and Arm Neoverse CPUs, ensuring maximum performance for ML and computer vision (CV) tasks.
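By way of illustration, below is a minimal sketch of a computer-vision-style inference running on an Arm CPU through a standard framework. PyTorch is used here only as an example and the tiny model is a placeholder; the point is that Kleidi-style acceleration arrives through the framework’s own Arm-optimized kernels, so the application code itself contains nothing Arm-specific.

    # Minimal sketch (assumption: PyTorch installed on an Arm-based machine).
    # The model below is a placeholder stand-in for a real vision model; the
    # framework selects Arm-optimized kernels where they are available, so the
    # application code does not need to change.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                    # toy CV-style backbone, illustrative only
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    ).eval()

    frame = torch.randn(1, 3, 224, 224)       # one synthetic camera frame, NCHW

    with torch.inference_mode():
        scores = model(frame)                 # runs on the CPU; no device-specific code

    print(scores.shape)                       # torch.Size([1, 10])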
AI: Is the industry entering the era of the AI-defined vehicle?
Gajendra: Absolutely. We firmly believe that the industry is primed to enable the AI-defined vehicle of tomorrow. We are very proud to be driving that with the rest of our ecosystem and partners.
AI: What do you see as the difference between software and AI-defined vehicles?
Gajendra: For the past four to five years, we as an industry have been talking about software-defined vehicles, and now we have added AI-defined vehicles.
AI-defined vehicles are not going to replace the software-defined vehicle. The way we explain it is that the software-defined vehicle provides the foundation and bare infrastructure needed to ensure that the vehicle can continue to receive software updates and upgrades after it leaves the dealership.
It is essential to provide flexibility for software developers to refine applications in the cloud and feed them to the car using over-the-air (OTA) updates.
When it comes to the AI-defined vehicle, we are enabling artificial intelligence applications to run on top of the software-defined infrastructure. The AI-defined vehicle will enable new workloads. Examples would be driver or passenger monitoring algorithms driven by an AI model or end-to-end AI models for autonomy.
The industry is ready for the next generation of vehicles because we have built the necessary infrastructure over the past five or six years to connect vehicles to the cloud.
A single OEM, chip provider or software developer cannot develop the whole vehicle system, even though it is basically a computer on four wheels.
It becomes very complicated, especially with high-end applications that require multiple sensors, which need to be optimized at the same time as being integrated with an onboard computer.
The sensor makers and sensor software providers need somebody in the mix to integrate with the chip providers and the central compute platform providers. The software application developers who write for the platform also need to be in the mix.
Of course, when you talk about software-defined vehicles, you need to be working very closely with the cloud service providers as well.
So, it is not just one company or one group of companies that can deliver what is needed. An industry-wide ecosystem needs to come together to enable this, and again we are proud to be in the middle of this evolution.

AI: What are the main challenges in meeting the growing energy demands of future vehicles?
Gajendra: As we transition from internal combustion engines to electric vehicles, people will not want to charge their car every day and will want longer ranges between charging.
At the same time, they want new features in the car that require more compute horsepower, which significantly raises the overall energy consumption of the vehicle.
When it comes to AI, whatever application or feature gets implemented in the central computer must be power efficient so that it does not hog all the energy. The importance of having power-efficient architectures in the vehicle is therefore absolutely paramount.
Once again, we are prepared for the transition as one of the pillars of the Arm compute architecture is power efficiency, which is why our chips are found in so many cellphones. We are bringing that technology to the automotive sector.
AI: Please tell us about Arm’s involvement in the SDV Accelerator.
Gajendra: As demand for AI-defined features in vehicles increases, traditional linear automotive development timelines are becoming less practical.
SDV Accelerator, led by AWS and HERE, supports a collaborative environment that offers automakers and their partners access to Arm solutions operating virtually in the cloud, aiming to reduce complexity, allow for early software development, and shorten time to market for future vehicles.
AI: How is Arm supporting the evolution of mobility towards smarter, safer, and more efficient vehicles?
Gajendra: When it comes to enabling new AI-defined applications, we are ensuring that there is enough compute power available in the central computer, be it through our CPUs or AI-enabled GPUs or other IPs.
Of course, we will continue to support the investment we’ve made over many years in safety and cyber security for automotive to ensure that automotive compute processes are done in a safe and secure way.
It is a big challenge and a very important topic that we talk about every day because with more and more smart applications being run, we have to ensure that this is done in a safe and secure way.
A cockpit application, no matter how cool it is, must not interfere with an ADAS application, so the challenge is to ensure that the applications are separated and do not interfere with each other.
At the same time, we are focused on power efficiency in all our technologies to ensure we deliver higher levels of performance without consuming more power.
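One way to picture the separation Gajendra describes: the sketch below confines a cockpit process to an assumed set of CPU cores so it cannot steal cycles from cores reserved for ADAS work. This is a simplified, Linux-specific illustration of a single mechanism; production vehicles rely on hypervisors, hardware partitioning, and dedicated safety islands to enforce freedom from interference, and the core assignments here are assumptions.

    # Minimal, Linux-only sketch: pin a (hypothetical) cockpit workload to its own
    # CPU cores so it cannot contend with cores assumed to be reserved for ADAS
    # tasks. Core numbers are illustrative assumptions, not a real platform map.
    import os

    COCKPIT_CORES = {4, 5, 6, 7}      # assumed infotainment cores (0-3 assumed for ADAS)

    def run_cockpit_workload():
        # Only pin to cores that actually exist on this machine.
        available = os.sched_getaffinity(0)
        cockpit = {c for c in COCKPIT_CORES if c in available} or available
        os.sched_setaffinity(0, cockpit)
        print("Cockpit process pinned to cores:", sorted(os.sched_getaffinity(0)))
        # ... render the UI, run media pipelines, etc. ...

    if __name__ == "__main__":
        run_cockpit_workload()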
AI: What is next for Arm?
Gajendra: Traditionally, the company has supplied core IP products to the industry. Where we are going next is the compute subsystem for automotive called the Zena Compute Subsystem (CSS), which is needed to support the shortening of automotive silicon development cycles.
Arm Zena CSS offers efficient, high-performance compute for tasks ranging from safety-critical ADAS to infotainment, cutting engineering time by 20% and enabling scalable, AI-ready solutions. It can shorten development timelines by a year or more for AI- and software-defined vehicles through pre-verified, safety-certified compute platforms.
It supports ADAS, AI cockpits, and OTA features, integrating a standardized safety island and security enclave, and can be expanded with Arm GPU and ISP technology for flexible, optimized SoC solutions.
With CSS we are not just supplying IP, but a much more complete subsystem. The Zena compute subsystem has 16 CPUs alongside an integrated safety island, with built-in functional safety and cybersecurity support certified to ISO 26262 and ISO 21434.
Standardizing Arm architecture at the CSS level and providing shared modelling tools streamline software porting, reducing platform fragmentation and supporting flexible SoC design for easier AI integration in vehicles.