Sony Develops a Stacked Direct Time of Flight Depth Sensor for Automotive LiDAR with SPAD Pixels, an Industry First*1

Tokyo, Japan — Sony Corporation announced today that it has developed a stacked direct Time of Flight (dToF) depth sensor for automotive LiDAR using single-photon avalanche diode (SPAD) pixels, an industry first.*1 This achievement was announced at the International Solid-State Circuits Conference (ISSCC), which opened on February 13, 2021.

  • *1 Among stacked depth sensors for automotive LiDAR. As of announcement on February 18, 2021.

In addition to sensing devices such as cameras and millimeter-wave radar, LiDAR is becoming ever more important as a means of detecting and recognizing with high precision not only road conditions but also the location and shape of objects such as vehicles and pedestrians. This trend is being driven by the spread of advanced driver assistance systems (ADAS) and the requirements of autonomous driving (AD).

SPAD is a pixel structure that uses avalanche multiplication to amplify the electrons generated from a single incident photon in a cascade, allowing even weak light to be detected. Employing SPADs as the detector in a dToF sensor, which measures the distance to an object from the time of flight (time difference) between the moment light is emitted from a source and the moment it returns to the sensor after being reflected by the object, makes long-distance, high-precision ranging possible.

By leveraging technologies cultivated in the development of its CMOS image sensors, such as the back-illuminated pixel structure, stacked configurations, and Cu-Cu connections*2, and by integrating the SPAD pixels and the distance measuring processing circuitry on a single chip, Sony has succeeded in developing a compact yet high-resolution sensor. The sensor enables high-precision, high-speed measurement at a 15-centimeter range resolution up to a distance of 300 meters*3. The new development will also help enable detection and recognition under the severe temperature and weather conditions required of automotive equipment, thereby contributing to greater LiDAR reliability. Integrating everything on a single chip also helps lower the cost of LiDAR.

Sony has also developed a MEMS (Micro Electro Mechanical Systems)*4 LiDAR system equipped with this new sensor for evaluation purposes, and is now offering it to customers and partners.

  • *2 Technology that provides electrical continuity via connected Cu (copper) pads when stacking the pixel section (top chip) and the logic circuits (bottom chip). Compared with through-silicon via (TSV) wiring, where the connection is made via electrodes penetrating around the circumference of the pixel area, this method gives more freedom in design, improves productivity, allows for a more compact size, and increases performance.
  • *3 When measuring an object with a height of 1 meter and a reflectance of 10%, using an additive mode of 6 x 6 pixels (H x V), under cloudy daylight conditions.
  • *4 “MEMS” is a device that integrates various components on a single substrate using microfabrication technology. This LiDAR uses a method of scanning the light emitted from the light source with a MEMS mirror.
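
As a rough illustration of the dToF arithmetic described above, the sketch below shows how a round-trip flight time converts to a distance, and why a 15-centimeter range resolution implies roughly nanosecond-level timing. The numbers are generic physics, not Sony implementation details.

```python
# Rough illustration of direct Time-of-Flight (dToF) ranging arithmetic.
# The constants and helper below are generic physics, not Sony implementation details.

C = 299_792_458.0  # speed of light in m/s


def distance_from_tof(round_trip_time_s: float) -> float:
    """Target distance: the light pulse covers the path twice (out and back)."""
    return C * round_trip_time_s / 2.0


# A target 300 m away returns the pulse after roughly 2 microseconds.
print(distance_from_tof(2.0e-6))   # ~300 m

# A 15 cm range resolution corresponds to about 1 ns of round-trip timing
# resolution, which is why the on-chip timing circuitry must be so fast.
print(2 * 0.15 / C)                # ~1.0e-9 s
```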

Structural diagram of new development

SPAD Pixel Principles

In a dToF depth sensor, the SPAD pixel is able to detect single photons. An excess bias voltage (VEX)*6 exceeding the breakdown voltage (VBD)*5 is applied to the electrodes of the SPAD pixel; when a photon enters, the electrons generated by photoelectric conversion are amplified through avalanche multiplication. Avalanche multiplication stops when the voltage between the electrodes drops to the breakdown voltage. Once the electrons generated by the avalanche have been discharged and the voltage has returned to the breakdown voltage (quenching action), the voltage between the electrodes is set to the excess bias voltage again so that the next photon can be detected (recharge action). This electron multiplication triggered by the arrival of a photon is known as Geiger-mode operation.

  • *5 Voltage at which avalanche multiplication begins
  • *6 Voltage that exceeds the breakdown voltage (VBD)

SPAD pixel principles (current/voltage)
Illustration of avalanche multiplication
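
The following toy model sketches the Geiger-mode detect/quench/recharge cycle described above. The class, voltage values, and simplified behavior are illustrative assumptions for explanation only, not Sony's circuit design.

```python
# Toy model of the Geiger-mode detection cycle described above:
# bias above breakdown, avalanche on photon arrival, quench, then recharge.
# All names and numbers are illustrative assumptions, not Sony design parameters.

from dataclasses import dataclass


@dataclass
class SpadPixel:
    v_bd: float          # breakdown voltage (VBD), volts
    v_ex: float          # excess bias voltage (VEX), set above VBD, volts
    armed: bool = True   # True when biased at VEX and ready to detect a photon

    def photon_arrives(self) -> bool:
        """Avalanche fires only if the pixel is armed (Geiger mode)."""
        if not self.armed:
            return False      # still quenching/recharging: this photon is missed
        # Avalanche multiplication pulls the bias down to VBD, stopping the avalanche.
        self.armed = False    # quenching action
        return True           # one detectable pulse per incident photon

    def recharge(self) -> None:
        """Restore the bias to the excess bias voltage so the next photon can be detected."""
        self.armed = True     # recharge action


pixel = SpadPixel(v_bd=20.0, v_ex=23.0)
print(pixel.photon_arrives())   # True  -> avalanche detected
print(pixel.photon_arrives())   # False -> dead time until recharge
pixel.recharge()
print(pixel.photon_arrives())   # True
```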

Main Features

1) High-precision measurement at a 15 cm range resolution up to a distance of 300 m

The new sensor employs a back-illuminated SPAD pixel structure and uses a Cu-Cu connection for each pixel to achieve conduction between the pixel chip (top) and the logic chip equipped with the distance measuring processing circuits (bottom). This allows all circuits other than the light-receiving pixels to be placed on the bottom chip, resulting in a high aperture ratio*7 and a high photon detection efficiency of 22%*8. Despite the compact chip size, a high resolution of approximately 110,000 effective pixels (189 x 600 pixels) is achieved at a pixel size of 10 μm. This enables high-precision distance measuring at a 15-centimeter range resolution up to a distance of 300 meters, thereby contributing to improved LiDAR detection and recognition performance.

  • *7 Ratio of the aperture section (the section other than light-blocking sections) per pixel, as viewed from the light incident side.
  • *8 When a 905 nm wavelength laser, as used in general automotive LiDAR, is projected onto an object.
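
As a back-of-the-envelope check of the resolution figures quoted above, the sketch below multiplies out the stated pixel count and pixel size; the assumption that the 10 μm pixel size is the pitch in both directions is ours, not stated in the announcement.

```python
# Back-of-the-envelope check of the resolution figures quoted above.
# Assumption (ours): the 10 um pixel size is the pitch in both directions.

array_dims = (189, 600)        # effective pixel array as stated
pixel_pitch_um = 10.0

effective_pixels = array_dims[0] * array_dims[1]
print(effective_pixels)        # 113,400 -> "approximately 110,000 effective pixels"

# Approximate physical extent of the active pixel array implied by those numbers.
side_a_mm = array_dims[0] * pixel_pitch_um / 1000.0
side_b_mm = array_dims[1] * pixel_pitch_um / 1000.0
print(f"{side_a_mm:.2f} mm x {side_b_mm:.2f} mm")   # ~1.89 mm x 6.00 mm
```
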
Photon detection efficiency and wavelength

Point cloud (Left: Conventional LiDAR, Right: LiDAR with new development)

2) High-speed responsiveness using an original Sony Time to Digital Converter (TDC) and passive quenching/recharge circuit

Sony has developed an original Time to Digital Converter (TDC), which converts the detected photon's flight time into a digital value, together with an original passive quenching/recharge circuit. Combined with the Cu-Cu connection for each pixel, these improve the response speed per photon to 6 nanoseconds*9 under normal conditions. Such high-speed distance measuring processing contributes to safer driving by detecting and recognizing surrounding conditions in real time.

  • *9 In a temperature environment of 60°C.
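
Conceptually, a TDC quantizes each photon's time of flight into a digital timing bin. The sketch below illustrates that idea with an assumed ~1 ns bin width (which corresponds to roughly 15 cm of range); it is a conceptual model, not the converter architecture used in the sensor.

```python
# Minimal conceptual sketch of what a Time-to-Digital Converter (TDC) does:
# quantize a photon's time of flight into a digital bin number.
# The bin width and range conversion are assumptions for illustration, not Sony's design.

C = 299_792_458.0        # speed of light, m/s
BIN_WIDTH_S = 1.0e-9     # ~1 ns bins give ~15 cm of range per bin (round trip)


def tdc_code(time_of_flight_s: float) -> int:
    """Digital code: which timing bin the return pulse fell into."""
    return int(time_of_flight_s / BIN_WIDTH_S)


def code_to_range_m(code: int) -> float:
    """Convert a TDC code back to an estimated target distance."""
    return code * BIN_WIDTH_S * C / 2.0


tof = 2 * 150.0 / C                  # round-trip time for a target at 150 m
code = tdc_code(tof)
print(code, code_to_range_m(code))   # ~1000, ~150 m (quantized in ~15 cm steps)
```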

3) Stable photon detection efficiency and response speed in severe conditions

Sony’s original SPAD pixel structure delivers stable photon detection efficiency and response speed even under severe conditions ranging from -40°C to 125°C, contributing to improved LiDAR reliability.

Photon detection efficiency and operating temperature
Response speed and operating temperature
