Anyone who spends time driving would agree that the least reliable part of the car is the driver. Even as a driver, I admit that I am the weak point and the most likely cause of an accident. The car just goes where you point it, right?
For self-driving vehicles, the driver’s senses need to be replaced with electronic sensors and video recognition that provide real-time, real-world data to an artificial intelligence program. It is these components, their reliability, repeatability and accuracy, that will determine the success or failure of self-driving technology. There are some pretty big players pushing this technology forward, including Tesla Motors, Uber, Google and a new Swedish start-up called Uniti, but it is MIT that may have provided the biggest breakthrough.
Uber is testing a self-driving car of their own. They sure need a lot of stuff on the roof!
Google has its own self-driving car
In this video, Chris Urmson, who heads up Google’s driverless car program, describes one of several efforts to remove humans from the driver’s seat. He talks about where his program is right now and shares fascinating footage that shows how the car sees the road and makes autonomous decisions about what to do next.
New sensors and artificial intelligence will make self-driving vehicles viable
For the past 10 years, the Camera Culture group at MIT’s Media Lab has been developing innovative imaging systems, from a camera that can see around corners to one that can read text in closed books, by using “time of flight,” an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.
In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That’s the type of resolution that could make self-driving cars practical.
The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving vehicles.
At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That’s good enough for the assisted-parking and collision-detection systems on today’s cars.
But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains,
As you increase the range, your resolution goes down exponentially. Let’s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life.
At distances of 2 meters, the MIT researchers’ system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.
Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.
With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it’s traveled. So light-burst length is one of the factors that determines system resolution.
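The arithmetic behind this is straightforward. Below is a minimal sketch of the general time-of-flight relationship (a generic illustration, not code from the MIT paper): distance is the speed of light times the round-trip time, halved.

```python
# Minimal sketch of the basic time-of-flight relationship (illustrative,
# not the MIT system): distance = speed of light * round-trip time / 2.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object implied by a round-trip time."""
    return C * round_trip_seconds / 2.0

# Light returning after ~13.3 nanoseconds puts the object ~2 meters away.
print(tof_distance(13.34e-9))  # ~2.0 m
# A 1-nanosecond uncertainty in timing the burst smears depth by ~15 cm,
# which is why burst length matters for resolution.
print(tof_distance(1e-9))      # ~0.15 m of depth per nanosecond
```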
The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today’s detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.
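To get a feel for that limit, it helps to translate the detection rate into depth. The figures below are generic back-of-the-envelope numbers, not the paper’s; the point is just how coarse a single 100-megahertz sampling interval is, and how much interpolation centimeter-scale resolution already implies.

```python
# Back-of-the-envelope sketch (assumed generic figures, not the paper's).
C = 299_792_458.0  # speed of light, m/s

def depth_per_sample(detection_rate_hz: float) -> float:
    """Depth spanned by one detector sampling interval (round trip halved)."""
    return C / (2.0 * detection_rate_hz)

# At 100 million measurements a second, raw samples sit ~1.5 m apart in depth.
print(depth_per_sample(100e6))         # ~1.499 m
# Centimeter resolution therefore already means locating the returning
# signal to roughly 1/150 of a sampling interval by interpolation.
print(0.01 / depth_per_sample(100e6))  # ~0.0067, i.e. ~1/150
```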
There is, however, another imaging technique that enables higher resolution, Kadambi says. That technique is interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half (the “sample beam”) is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams (the relative alignment of the troughs and crests of their electromagnetic waves) yields a very precise measure of the distance the sample beam has traveled.
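The precision of interferometry comes from using the optical wavelength itself as the yardstick. A hypothetical example (the wavelength and the 1/100-of-a-cycle phase precision below are illustrative assumptions, not values from the paper):

```python
import math

WAVELENGTH = 1550e-9  # an illustrative near-infrared laser wavelength, m

def path_difference(phase_radians: float) -> float:
    """Path-length difference implied by the phase offset between the two
    beams (valid within one wavelength; real systems unwrap the ambiguity)."""
    return (phase_radians / (2.0 * math.pi)) * WAVELENGTH

# Resolving phase to 1/100 of a cycle pins the path down to ~15 nanometers.
print(path_difference(2 * math.pi / 100))  # ~1.55e-8 m
```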
But interferometry requires careful synchronization of the two light beams.
“You could never put interferometry on a car because it’s so sensitive to vibrations,” Kadambi says. “We’re using some ideas from interferometry and some of the ideas from LIDAR, and we’re really combining the two here.”
On the beat
They’re also, he explains, using some ideas from acoustics. Anyone who’s performed in a musical ensemble is familiar with the phenomenon of “beating.” If two singers, say, are slightly out of tune — one producing a pitch at 440 hertz and the other at 437 hertz — the interplay of their voices will produce another tone, whose frequency is the difference between those of the notes they’re singing — in this case, 3 hertz.
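The beat frequency is simply the difference between the two pitches, and a few lines of NumPy (an illustrative sketch, not from the paper) make the 3-hertz figure concrete:

```python
import numpy as np

f1, f2 = 440.0, 437.0        # two slightly mistuned pitches, Hz
fs = 44_100                  # audio sampling rate
t = np.arange(fs) / fs       # one second of time samples

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the sum-to-product identity, the mix is a 438.5 Hz tone whose
# amplitude swells and fades at |f1 - f2| = 3 Hz -- the audible beat.
print(abs(f1 - f2))  # 3.0
```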
The same is true with light pulses. If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second, a rate easily detectable with a commodity video camera. And that slow “beat” will contain all the phase information necessary to gauge distance.
But rather than try to synchronize two high-frequency light signals, as interferometry systems must, Kadambi and Raskar simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already pulsed light. The result is the same, but the approach is much more practical for automotive systems.
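A scaled-down simulation gives a feel for how pulsing the already pulsed light works. The numbers below are stand-ins (kilohertz rather than gigahertz, chosen only so the sketch runs quickly), a moving average stands in for the camera’s slow response, and none of this is the authors’ code:

```python
import numpy as np

fs = 10_000                   # simulation sampling rate, Hz
t = np.arange(5 * fs) / fs    # five seconds of samples

f_tx = 1000.0                 # stand-in for the ~1 GHz illumination
f_lo = 999.0                  # stand-in for the 999,999,999 Hz re-modulation
delay = 2.0e-4                # assumed round-trip travel time of the light, s

returned = np.cos(2 * np.pi * f_tx * (t - delay))  # the delayed return
gated = returned * np.cos(2 * np.pi * f_lo * t)    # pulse the pulsed light

# A moving average plays the camera's role: it keeps the slow
# |f_tx - f_lo| = 1 Hz beat and averages away the fast sum frequency.
kernel = np.ones(200) / 200
beat = np.convolve(gated, kernel, mode="same")

# The beat's phase offset is 2*pi*f_tx*delay, so the slow signal carries
# the round-trip time measured at the fast frequency's precision.
ref = np.exp(-1j * 2 * np.pi * (f_tx - f_lo) * t)
print(-np.angle(np.sum(beat * ref)))             # ~1.257 rad, recovered
print((2 * np.pi * f_tx * delay) % (2 * np.pi))  # ~1.257 rad, expected
```

The key point survives the scaling: the slow beat inherits the phase sensitivity of the fast modulation, which is exactly why it still contains all the information needed to gauge distance.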
“The fusion of the optical coherence and electronic coherence is very unique,” Raskar says.
We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system.