As you drive down the road, you make countless subconscious micro-decisions and calculations built on past experience. You know what 40 mph feels like by watching how fast the trees pass by, how hard you can hit the brakes to comfortably slow down at a traffic signal, and that you should coast down a steep hill to avoid speeding. Even on an unfamiliar road, driving experience has built foundational expectations and awareness, so that you are not hurtling into the unknown, waiting to react to situations as they arise. In the case of autonomous vehicles, however, these decisions are made by software. Simply adding sensors like LiDAR and cameras to a vehicle allows it to perceive its surroundings, but on their own they would fail to enable a safe ride. Enter 3D maps – a critical element of autonomy that is often overlooked.
Detailed, 3-dimensional, continuously updated maps are essential to true widespread adoption of self-driving cars. This is what separates a system that must be overseen by a human focused on the road from one where you can fall asleep and wake up at your destination. While it is technically possible for a car to navigate an unfamiliar setting without a digital map, the information that 3D maps provide is critical to building the trust necessary for widespread adoption of autonomy. Maps effectively teach a self-driving vehicle the rules of the road. The car’s AI can learn the mechanics of driving like a human, but the map introduces things like bike and HOV lanes, speed limits, construction zones, train tracks, and pedestrian crosswalks. Maps also ease the burden on the car’s computers by giving them foresight and adding redundancy to their understanding of the situation they face.
Civil Maps CEO Sravan Puttagunta explains: “Radar and cameras cannot always recognize a stop sign if pedestrians are standing in the way or the sign has been knocked down. But if the map knows there is a stop sign ahead, and the sensors just need to confirm it, the load on the sensors and processor is much lower.”
Reducing the load on the car’s computing power matters, because a fully autonomous vehicle can produce as much as a gigabyte of data every second. By building an accurate, up-to-date digital representation of the world around us, a car can process this data in conjunction with the data created by its sensors to create a safer, smoother, and more reliable driving experience. Maps allow a vehicle to see into the future – further than human drivers can see – anticipating instead of reacting to changes in its environment.
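Puttagunta’s stop-sign example can be sketched in code: rather than discovering every object from scratch, the vehicle treats the map as a prior and only asks its sensors to confirm what the map predicts nearby. This is a minimal illustrative sketch – the function name, data layout, and 2-meter matching radius are our own assumptions, not any vendor’s actual system.

```python
# Illustrative sketch: confirming a map-predicted feature with sensor
# detections, instead of searching the whole sensor frame for unknowns.
# All names and thresholds are hypothetical.

def confirm_map_feature(map_feature, detections, match_radius_m=2.0):
    """Return True if any sensor detection corroborates a feature the
    map says should be nearby (e.g. a stop sign)."""
    fx, fy = map_feature["position"]
    for det in detections:
        dx, dy = det["position"]
        distance = ((dx - fx) ** 2 + (dy - fy) ** 2) ** 0.5
        if det["kind"] == map_feature["kind"] and distance <= match_radius_m:
            return True
    return False

# The map predicts a stop sign roughly 30 m ahead; the camera only
# needs to confirm it in a small region, not discover it from scratch.
expected = {"kind": "stop_sign", "position": (0.0, 30.0)}
seen = [{"kind": "stop_sign", "position": (0.4, 29.1)}]
print(confirm_map_feature(expected, seen))  # True: sign confirmed
```

Checking a small predicted region is far cheaper than classifying every object in the frame – which is the computational savings the quote describes.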
Maps are an important aspect of vehicle-to-vehicle (V2V) communication as well. Using maps as an extension of a car’s sensors requires relying on other cars for input. This presents us with the consortium conundrum that we wrote about here. In the realm of V2V communication, where it does no good to ‘out-communicate’ the competition, we believe company collaboration would be beneficial, if not a requirement. Maps shared through a single cloud-based platform are updated frequently, adding exponentially to their utility. The minute details of roads are constantly changing – construction zones, fallen trees, and damaged roads all must be mapped and updated to reflect current conditions. This can be accomplished using the cameras and sensors on each car, including some element of automated, Waze-like crowdsourcing from the vehicles. As a vehicle drives, its sensors constantly compare their inputs to the map. When a discrepancy is detected, it is corroborated by other vehicles and changed in the cloud, so every car in the network stays up-to-date. Take, for example, the scenario pictured below.
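The detect-corroborate-update loop described above can be sketched as a toy model, assuming a simple corroboration rule: a discrepancy reported by one car is held as pending until enough independent vehicles confirm it, and only then is the shared map changed. The class, the threshold of three reports, and the feature naming are hypothetical illustrations, not any company’s actual protocol.

```python
# Toy sketch of crowd-sourced map maintenance: an observed change is
# published to the shared cloud map only after enough independent
# vehicles corroborate it. All names and thresholds are assumptions.
from collections import defaultdict

CORROBORATION_THRESHOLD = 3  # independent reports required to publish

class CloudMap:
    def __init__(self):
        self.features = {}               # feature_id -> current state
        self.pending = defaultdict(set)  # (feature_id, state) -> reporting vehicles

    def report(self, vehicle_id, feature_id, observed_state):
        """A vehicle reports that a feature looks different from the map.
        Returns True if the report triggered a map update."""
        if self.features.get(feature_id) == observed_state:
            return False                 # no discrepancy with the map
        key = (feature_id, observed_state)
        self.pending[key].add(vehicle_id)   # set dedupes repeat reporters
        if len(self.pending[key]) >= CORROBORATION_THRESHOLD:
            self.features[feature_id] = observed_state  # publish to every car
            del self.pending[key]
            return True
        return False

cloud = CloudMap()
cloud.features["lane_3_mile_12"] = "open"
for car in ["car_a", "car_b", "car_c"]:
    cloud.report(car, "lane_3_mile_12", "construction")
print(cloud.features["lane_3_mile_12"])  # "construction" after 3 reports
```

Requiring multiple independent reports is one way to keep a single bad sensor reading from corrupting the map every other car relies on.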
Here, there are three layers of safety that come from V2V and mapping. As the black car drives by the wreck, it observes a discrepancy in its map and relays that message. The cars involved in the accident share their location and the fact that their speed is zero, and the car blindly approaching the wreck knows to avoid the blocked lane and switches lanes accordingly. Sensors alone, with their limited range and therefore limited reaction time, might not have detected the wreck in time to prevent a collision.
Navigation apps like Google Maps provide more than enough detail to find your way from A to B, but they can only locate your car within a margin of several meters – 3D maps must be accurate to within centimeters. They must show the precise location of all street signs, lane markings, curbs, and even deep potholes that should be avoided. Moreover, if we want autonomous vehicles to take us anywhere, we need detailed maps everywhere – and there are more than 4 million miles of roads in the U.S. How do we tackle such a monumental task? This question has drawn the attention and innovative efforts of a host of companies.
Lvl5, a startup founded by former Tesla and iRobot engineers, aims to crowdsource mapping data with its app, Payver. While not all cars are equipped with cameras, nearly all drivers carry smartphones. By mounting your phone facing out the windshield, you can earn between 2 and 5 cents per mile, depending on whether the road is common or uncharted. The process, which relies heavily on machine vision to stitch together and label each fragmented video, is a logical way to build maps early on, leveraging the smartphone user base and the sizable number of people who drive for a living for ridesharing, delivery, or freight services.
Waymo, which has a longer history of mapping tech and a large budget thanks to parent company Google, is taking the opposite approach. In its usual ‘do everything ourselves’ fashion, Waymo is building its maps by driving around in vehicles equipped with spinning LiDAR units. LiDAR provides a much more detailed image of its surroundings than a camera, but still requires substantial human input to label each object. Labeling things like traffic lights, street signs, and buildings is tedious, but it is necessary so that a car can tell the difference between a tree and a yield sign. There is also promise of automating this process with AI tech similar to Google Lens.
Here and Mobileye have combined many of their efforts around building, maintaining, and distributing maps to become the de facto early leader in the space. Here, owned by a consortium of German automakers (Audi, BMW, and Mercedes-Benz), has ambitions to build a digitized version of our world that can be interpreted by autonomous vehicles and other machines through what it calls an open location platform. Here has built out its maps with LiDAR-equipped vehicles and will maintain them with an extensive network of cars outfitted with Mobileye hardware. Mobileye, purchased by Intel earlier this year for $15.3B, offers a range of services for self-driving cars like sensor fusion and camera tech, but has recently been focusing on mapping. The combined result will be a comprehensive 3D map, aggregated in the cloud and maintained in near real-time by crowdsourcing data from a network of connected cars. The maps will be sold as a service to which automakers with autonomous systems can subscribe.
Tesla has a distinct advantage stemming from the sheer number of cars it has on the road equipped with the hardware necessary to build maps (cameras, RADAR, and ultrasonic sensors). 3D mapping presents a textbook network effect, and Tesla, with thousands of vehicles already in play, is in a great position to take advantage of that market force. The question will be one of communication with other cars as more automakers begin to develop and test autonomous systems.
While the method of building sufficiently detailed maps varies, their importance in the self-driving equation is almost universally agreed upon. Yet in contrast with the lively debates over the correct combination of RADAR, LiDAR, and camera sensor arrays, or the countless chipmakers jockeying to provide cars with computing power, mapping seems under-appreciated and under-invested. As Mobileye co-founder Amnon Shashua suggests, there are three core elements of self-driving cars – sensing the road, mapping the road, and negotiating your position on the road. 3D maps will be a key determinant of the long-term winners in autonomy.
Thanks to Will Thompson for his work on this note.
Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.