A self-driving vehicle operated by Uber struck and killed a pedestrian. It is the first incident of its kind, and it will certainly be scrutinized like no autonomous vehicle interaction before it. But at first glance it is hard to understand how, short of a total breakdown of the system, this could happen when the entire car is designed around preventing exactly this situation.
Something unexpectedly entering the vehicle's path is just about the first emergency scenario autonomous driving engineers consider. It could be many things – a stopped car, a deer, a pedestrian – and the systems are all designed to detect such obstacles as early as possible, identify them, and take appropriate action: slow down, stop, swerve, whatever is called for.
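As a rough sketch of that detect-classify-act loop (every name and threshold here is illustrative; real systems are vastly more complex and run this logic many times per second):

```python
# Toy sketch of an autonomous vehicle's detect-classify-act loop.
# All obstacle types, thresholds, and action names are made up for
# illustration; real planners are far more sophisticated.

def choose_action(obstacle_type, distance_m, speed_mps):
    """Pick a response based on what was detected and how close it is."""
    # Time until the car would reach the obstacle at its current speed.
    time_to_contact = distance_m / speed_mps if speed_mps > 0 else float("inf")
    if obstacle_type in ("pedestrian", "deer"):
        # Living obstacles get the most conservative response.
        return "emergency_stop" if time_to_contact < 3.0 else "slow_down"
    if obstacle_type == "stopped_car":
        return "stop" if time_to_contact < 5.0 else "change_lane"
    return "continue"

# A pedestrian 30 m ahead of a car moving at 15 m/s: 2 s to contact.
print(choose_action("pedestrian", 30.0, 15.0))  # emergency_stop
```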
Uber's vehicles are equipped with several different imaging systems that handle both ordinary duties (monitoring nearby cars, signs, and lane markings) and extraordinary ones like those just described. No fewer than four different systems should have detected the victim in this case.
Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging system, which produces a 3D image of the car's surroundings several times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.
Heavy snow and fog can obscure a lidar's lasers, and its accuracy diminishes with range, but for anything from a few feet to a few hundred feet away, it is an invaluable imaging tool found on virtually every self-driving car.
The lidar unit, if it was working properly, should have been able to make out the person in question, assuming they were not totally obscured, while they were still more than 100 feet away, and relay their presence to the central "brain" that assembles the imagery.
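The ranging principle itself is simple: a laser pulse travels out to an object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal illustration (the round-trip time is a made-up example value):

```python
# Lidar ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range_m(round_trip_s):
    """Distance to the reflecting object, given the pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A return after ~204 nanoseconds corresponds to roughly 100 feet (~30.5 m),
# about the distance at which the lidar should still have seen the victim.
print(round(lidar_range_m(204e-9), 1))  # 30.6 (meters)
```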
Front-mounted radar. Radar, like lidar, emits a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio passes through snow and fog, but it also reduces its resolution and changes its range profile.
Depending on the radar units Uber employed – likely multiple units front and back to provide 360-degree coverage – the range could differ significantly. If they are meant to complement the lidar, they likely overlap with it considerably, but radar is built more to identify other cars and large obstacles.
A person's radar signature is not as distinctive, but it is very likely it would at least have registered, confirming what the lidar detected.
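This kind of cross-sensor corroboration can be sketched as a simple association check: does any radar return fall close enough to a lidar detection in range and bearing? (A real fusion stack associates tracks probabilistically; the tolerances and tuple format here are illustrative.)

```python
# Toy cross-check: does any radar return corroborate a lidar detection?
# Real sensor fusion is probabilistic; this just matches detections by
# range and bearing within loose, illustrative tolerances.

def corroborated(lidar_det, radar_dets, range_tol_m=2.0, bearing_tol_deg=5.0):
    """True if some radar return roughly coincides with the lidar detection."""
    r, b = lidar_det  # (range in meters, bearing in degrees)
    return any(
        abs(r - rr) <= range_tol_m and abs(b - rb) <= bearing_tol_deg
        for rr, rb in radar_dets
    )

lidar_person = (31.0, -2.0)                 # ~100 ft ahead, slightly left
radar_returns = [(30.2, -1.5), (55.0, 10.0)]
print(corroborated(lidar_person, radar_returns))  # True
```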
Short- and long-range optical cameras. Lidar and radar are excellent at locating shapes, but they are no good for reading signs, figuring out what color something is, and so on. That is a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.
Uber's cameras watch for telltale patterns: braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. On the front of the car, multiple camera types and angles would be used, in order to get a complete picture of the scene the car is driving into.
Detecting people is one of the most common computer vision problems, and the algorithms for it have gotten quite good. Segmenting an image, as the task is often called, usually also involves identifying things like signs, trees, sidewalks, and more.
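The output of such a segmentation pass is essentially a label map: every region of the image gets assigned a class. A toy version, with a made-up grid and label set:

```python
# Toy semantic-segmentation output: each cell of a coarse image grid is
# labeled with a class. Real systems label every pixel using neural
# networks; this grid and its classes are invented for illustration.

label_map = [
    ["sky",      "sky",  "tree",   "sky"],
    ["building", "road", "person", "sign"],
    ["road",     "road", "road",   "sidewalk"],
]

def contains_class(label_map, cls):
    """True if any cell of the segmented image carries the given label."""
    return any(cls in row for row in label_map)

print(contains_class(label_map, "person"))  # True
```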
That said, it can be hard at night. But that is an obvious problem, and the answer to it is the two preceding systems, which work night and day. Even in total darkness, a person dressed all in black would appear on lidar and radar, warning the car that it should perhaps slow down and be ready to see that person in the headlights. This is probably why a dedicated night-vision system is not commonly found in autonomous vehicles (I cannot be sure there is none on the Uber car, but it seems unlikely).
Safety driver. It may seem cynical to refer to a person as a system, but the safety drivers in these cars really do act as a catch-all safety mechanism. People are very good at detecting things, even though no lasers shoot out of our eyes. And while our reaction times are not the best, if it is clear the car is failing to respond, or responding badly, a qualified safety driver can step in and react properly.
It should be mentioned that there is also a central computing unit that takes input from all these sources and builds its own, more complete representation of the world around the car. A person may pass behind a parked car, as far as the sensors are concerned, and stop being visible for a second or two, but that does not mean they have ceased to exist. This goes beyond mere object recognition and into broader concepts of intelligence such as object permanence, action prediction, and so on.
It is also arguably the most advanced and most closely guarded part of any autonomous driving system.
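Object permanence of the kind described above can be sketched as a track that survives a few frames without a matching detection, rather than vanishing the instant its object is occluded (the class name and frame budget here are illustrative):

```python
# Toy object-permanence: a track persists for a few sensor frames after
# its object disappears from view, instead of vanishing immediately.
# The class name and the max_missed budget are illustrative.

class Track:
    def __init__(self, name, max_missed=3):
        self.name = name
        self.missed = 0          # consecutive frames without a detection
        self.max_missed = max_missed

    def update(self, detected):
        """Call once per sensor frame; returns whether the track is alive."""
        self.missed = 0 if detected else self.missed + 1
        return self.missed <= self.max_missed

t = Track("pedestrian")
frames = [True, False, False, True]  # briefly occluded by a parked car
print([t.update(seen) for seen in frames])  # [True, True, True, True]
```

The track stays alive through the two-frame occlusion, so the planner can keep predicting where the pedestrian will reappear.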
It is unclear under exactly what circumstances this tragedy occurred, but the car was certainly equipped with technology that could, and should, have detected the person and reacted appropriately. And if one system failed, another should have sufficed; redundant failsafes are standard practice in high-stakes situations like driving on public roads.
We will know more as Uber, local law enforcement, federal authorities, and others investigate the accident.