“Eyes” for robots - industrial LiDAR systems gaining momentum

For industrial applications using robots and automatic guided vehicles, the interaction between infrared light and corresponding sensors is essential. To capture the world in 3D, system manufacturers can choose between different approaches.

Mobility is transforming in leaps and bounds, whether in the automotive space with autonomous mobility solutions or in industrial applications using robots and automatic guided vehicles. The various components of the overall system must fit together and complement each other. The main goal is to create a seamless 3D view of the vehicle's immediate surroundings. This map is needed to calculate distances to objects and, with the help of special algorithms, to initiate the vehicle's next action. Three sensor technologies come together here: LiDAR (Light Detection and Ranging), radar and cameras. Each has its advantages depending on the scenario, and combining these advantages with redundant data increases safety significantly. The better they are coordinated, the better self-driving vehicles will move through their environment.

Direct Time-of-Flight (dToF)

In the Time-of-Flight approach, system manufacturers use the speed of light to generate depth information. Put simply, a directed pulse of light is sent into the environment. When it hits an object, it is reflected back and registered by a detector placed close to the light source. By measuring the time the light takes to travel to the object and back, the distance to the object (or, in this case, to a single pixel) can be determined. The received signal is then processed to trigger a corresponding action, for example an evasive maneuver to avoid a collision with people or obstacles. This approach is called direct Time-of-Flight (dToF) because the exact "flight time" of the light pulse is measured. A classic dToF application is the LiDAR system of an autonomous vehicle.
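The underlying distance calculation is straightforward: halve the measured round-trip time and multiply by the speed of light. A minimal sketch in Python, with illustrative values not tied to any particular sensor:

```python
# Direct Time-of-Flight: a pulse travels to the object and back,
# so the one-way distance is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def dtof_distance(round_trip_s: float) -> float:
    """Distance in metres from a measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A pulse returning after 200 ns corresponds to an object about 30 m away.
print(dtof_distance(200e-9))
```

Note that a timing error of just 1 ns already shifts the result by about 15 cm, which is why dToF detectors need very fast electronics.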

Indirect Time-of-Flight (iToF)

The indirect Time-of-Flight (iToF) approach is similar, but with one significant difference: the illumination from the light source (usually an infrared VCSEL) is expanded by a diffuser and emitted as pulses (50% duty cycle) into the defined field of view.


In the downstream system, the emitted pulse train serves as a reference signal. Light reflected from an object arrives at the detector with a time delay, and from the resulting phase shift between the emitted and received pulse trains the system can determine the depth information for each defined pixel of the detector.
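In the continuous-wave reading of this principle, the measured phase shift maps linearly to depth, and the modulation frequency sets the unambiguous range. A sketch under that assumption (the 20 MHz figure is an illustrative example, not from the article):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_rad: float, f_mod_hz: float) -> float:
    """Depth per pixel from the phase shift between emitted and received signal.

    Only unambiguous up to c / (2 * f_mod): beyond that, the phase wraps.
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

f_mod = 20e6  # assumed 20 MHz modulation -> roughly 7.5 m unambiguous range
print(itof_depth(math.pi / 2.0, f_mod))  # quarter-cycle shift, just under 1.9 m
```

The trade-off is visible in the formula: a higher modulation frequency improves depth resolution but shortens the unambiguous range.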


Active Stereo Vision (ASV)

In the "Active Stereo Vision" approach, an infrared light source (usually a VCSEL or IRED) illuminates the scene with a pattern and two infrared cameras record the image in stereo.


By comparing the two images, the downstream software can calculate the required depth information. The projected pattern supports the depth computation even on objects with little texture, such as a wall, floor or table. This approach is ideally suited for short-range, high-resolution 3D sensing on robots and Automatic Guided Vehicles (AGVs) for obstacle avoidance. It is also used for optical inspection of parts on production lines, in security cameras and in surveillance. ams OSRAM has a portfolio of dot projectors providing high-contrast dot-pattern illumination in the near infrared (NIR), making systems immune to sunlight. The relatively simple system design of Active Stereo Vision has a positive effect on overall system costs. However, the system-related separation of the two cameras requires a correspondingly large amount of installation space.
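The depth computation from the two camera views reduces to the classic stereo relation: depth equals focal length times camera baseline divided by pixel disparity. A small sketch with assumed example values (focal length in pixels, baseline in metres):

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a matched point from its disparity between rectified images."""
    if disparity_px <= 0.0:
        raise ValueError("zero or negative disparity has no finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed setup: 800 px focal length, 5 cm baseline, 20 px disparity -> 2 m.
print(stereo_depth(20.0, 800.0, 0.05))
```

The projected dot pattern exists precisely so that this disparity can be found even on texture-less surfaces, where plain stereo matching would fail.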


With trends such as increasing automation in industrial production and logistics, the need for corresponding technologies is also growing. Depending on the technology approach and the final application, some light sources are better suited than others.
For LiDAR, two main system types are used to obtain a 3D point cloud: Flash LiDAR and Scanning LiDAR. A scanning LiDAR system consists of a focused pulsed laser beam that is directed into a certain small solid angle by either a mechanically rotating mirror or a micro-electro-mechanical system (MEMS) mirror.


Because the high-power pulsed laser beam is emitted only into a small solid angle, the reachable distance at a given optical power can be much larger than with 3D Flash systems. Edge-emitting lasers (EELs) are the product of choice for these system architectures. They deliver a particularly large amount of light from a small emission area in a compact space, which leads to outstanding power and range, and they are therefore already used in many solutions. ams OSRAM has been the leading manufacturer of LiDAR lasers for more than 15 years, with well over 10 million chips already in the field without a single chip defect. The semiconductor expert established 905 nanometers as today's common wavelength for LiDAR systems. Compared to systems with other wavelengths, 905-nanometer solutions are characterized by outstanding system efficiency, high reliability and attractive system costs.
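Each return of a scanning system pairs the mirror's momentary deflection angles with a dToF range measurement; converting from these spherical coordinates to Cartesian ones builds up the 3D point cloud. A sketch, assuming a simple azimuth/elevation angle convention:

```python
import math

def scan_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """One scanner return (range + mirror angles) as an (x, y, z) point."""
    horiz = range_m * math.cos(elevation_rad)  # projection onto the ground plane
    return (
        horiz * math.cos(azimuth_rad),      # x: forward
        horiz * math.sin(azimuth_rad),      # y: left
        range_m * math.sin(elevation_rad),  # z: up
    )

# A return straight ahead at 10 m lies on the x axis: (10.0, 0.0, 0.0).
print(scan_point(10.0, 0.0, 0.0))
```

Sweeping the mirror through its azimuth and elevation range and repeating this conversion for every pulse yields the full point cloud, one point at a time.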


In connection with LiDAR, a newer light-source technology is mentioned more and more frequently: the VCSEL (Vertical Cavity Surface Emitting Laser). VCSELs combine the properties of two illumination technologies: the high power density and simple packaging of an infrared LED with the narrow spectral width and speed of a laser. The advantages of the technology include excellent beam quality, simple design and advances in miniaturization, which explain the growth of the VCSEL market. In general, VCSELs may require more installation space than EEL emitters, but they offer advantages in certain applications. For example, their radiation characteristics make them particularly suitable for Flash LiDAR systems as well as industrial applications such as robotics and autonomous mobile robots. In a 3D Flash LiDAR, the pulsed laser beam is emitted into the whole solid angle of interest in one shot. To obtain a certain resolution of the point cloud, an n x m photosensitive detector array (an array of photodetectors or a CMOS ToF chip) is required.


No matter which system approach customers prefer, ams OSRAM can serve all common approaches with its extremely broad portfolio of infrared LEDs, VCSELs and EELs. The company is a global leader in VCSELs and EELs for LiDAR, and both product families stand out in optical performance as well as efficiency. In addition, customers can choose from a variety of edge-emitter package designs to suit their system, whether a TO-can, plastic or SMT package with a peak output power of 120 W. In the VCSEL sector, ams OSRAM offers a wide range of wavelengths (680 to 940 nanometers), power classes (7 mW to more than 60 W) and fields of view. In addition to their compact dimensions, the products are characterized by outstanding robustness and leading VCSEL technology for a wide range of applications.