Smart Money Says Lyft a Winner in Autonomy

Ridesharing, and more specifically the fate of Uber and Lyft, sits at the top of the list of industries autonomy could disrupt. Both companies will need to make the leap to autonomy, whether organically, through acquisition, or by partnership. Simply put, these companies must adopt autonomy to survive. Recent news of CapitalG leading a $1B investment in Lyft is a key endorsement that Lyft will have a future in an autonomous world as the platform for on-demand self-driving vehicles. Uber’s fate, however, appears more in its own hands, given that it is developing its own autonomous systems and that the partnerships Uber has inked to date tend to be on the manufacturing side (Daimler, Volvo, Tata, and Toyota). While Uber is still in the running, our money is also on Lyft.

CapitalG, Alphabet’s late-stage venture fund, has not disclosed how much of the round it accounts for. Despite a CapitalG spokesman saying otherwise, this investment is about much more than making a profit for Alphabet. As Google navigates the impossibly complex landscape of companies vying for leadership in ridesharing and autonomy, it is making sure it has a strong presence in each possible outcome of the booming technology.

GV, another venture arm of Alphabet that invests in much younger companies, made a $258M investment in Uber in 2013 (predating the Waymo IP theft allegations). On top of investments in both ride-sharing giants, Alphabet’s own Waymo is widely considered a leader in self-driving systems. We see this strategic investment as strong confirmation of Google’s conviction in the future of self-driving cars and the concept of fleet services. In our Auto Outlook, we detail the rise of both autonomous vehicles and fleet ownership, predicting that fully autonomous cars will outnumber drivers and that cars owned by fleet services will outnumber personally owned cars in 2036.

The investment brings Lyft’s valuation north of $11B, almost 50% higher than its last round. While Lyft remains roughly 1/7th the size of Uber, a large influx of cash should strengthen its competitive efforts, allowing it to expand internationally and to build out its platform for self-driving cars. While Lyft has no aspirations of developing its own autonomous systems, it has positioned itself as the platform for on-demand autonomous cars in the near future. With Lyft’s open platform, automakers can plug self-driving cars into a network of drivers that make nearly 1 million rides per day (growing at 25% per year), and smooth the transition to full autonomy with a Lyft “driver” behind the wheel. While Uber, which hopes to develop self-driving technology in-house, is embroiled in legal battles, Lyft has struck numerous partnerships with both automakers and self-driving tech companies. Its list now extends to Waymo, NuTonomy, Drive.ai, Ford, GM, and Jaguar. The next chapter in the race to autonomy will likely be a Lyft IPO. Stay tuned.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Don’t Underestimate the Importance of 3D Mapping for Autonomy

As you drive down the road, you make countless subconscious micro-decisions and calculations built on past experience. You know what 40 mph feels like by observing how fast the trees are passing by, how hard you can hit the brakes to comfortably slow down at a traffic signal, and that you should coast down a steep hill to avoid speeding. Even if you are on an unfamiliar road, driving experience has built foundational expectations and awareness, so that you are not hurtling into the unknown, waiting to react to situations as they arise. In the case of autonomous vehicles, however, these decisions are made by software. Simply adding sensors like LiDAR and cameras to a vehicle allows it to perceive its surroundings, but, on their own, those sensors would fail to enable a safe ride. Enter 3D maps – a critical element of autonomy that is often overlooked.

Detailed, 3-dimensional, continuously updated maps are essential to true widespread adoption of self-driving cars. This is what separates a system that must be overseen by a human focused on the road from one where you can fall asleep and wake up at your destination. While it is technically possible for a car to navigate an unfamiliar setting without a digital map, the information that 3D maps provide is critical to building the trust necessary for widespread adoption of autonomy. Maps effectively teach a self-driving vehicle the rules of the road. The car’s AI can learn the mechanics of driving like a human, but the map introduces things like bike and HOV lanes, speed limits, construction zones, train tracks, and pedestrian crosswalks. Maps also ease the burden on the car’s computers by giving them foresight and adding redundancy to their understanding of the situation they face.

Civil Maps CEO, Sravan Puttagunta explains, “Radar and cameras cannot always recognize a stop sign if pedestrians are standing in the way or the sign has been knocked down. But if the map knows there is a stop sign ahead, and the sensors just need to confirm it, the load on the sensors and processor is much lower.”

Reducing the load on the car’s computing power must be considered, because a fully autonomous vehicle could produce as much as a gigabyte of data every second. By building an accurate and up-to-date, digital representation of the world around us, a car is able to process this data in conjunction with the data created by its sensors to create a safer, smoother, and more reliable driving experience. Maps allow a vehicle to see into the future – further than human drivers can see – anticipating instead of reacting to changes in their environment.
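The prior-plus-confirmation idea Puttagunta describes can be sketched in a few lines. This is a purely illustrative sketch, not any company's actual pipeline; the data structures and the `confirm_map_features` helper are assumptions for the example.

```python
# Illustrative sketch: cross-check map priors against live sensor detections.
# Features the sensors corroborate need no further processing; map entries the
# sensors failed to see become candidates for a map update.

def confirm_map_features(map_features, sensor_detections, match_radius_m=2.0):
    """Return (confirmed, discrepancies) for a set of map priors.

    A map feature is 'confirmed' if any detection of the same type lies
    within match_radius_m of its mapped position (coarse box check here,
    purely for illustration).
    """
    confirmed, discrepancies = [], []
    for feature in map_features:
        hits = [d for d in sensor_detections
                if d["type"] == feature["type"]
                and abs(d["pos"][0] - feature["pos"][0]) <= match_radius_m
                and abs(d["pos"][1] - feature["pos"][1]) <= match_radius_m]
        (confirmed if hits else discrepancies).append(feature)
    return confirmed, discrepancies

# The map says a stop sign is ahead; the camera sees one roughly there,
# so the perception stack only has to confirm it, not discover it.
map_features = [{"type": "stop_sign", "pos": (10.0, 4.0)}]
detections = [{"type": "stop_sign", "pos": (10.4, 3.8)}]
ok, missing = confirm_map_features(map_features, detections)
```

The point of the sketch is the asymmetry: confirming a known feature is a cheap local check, while discovering one from raw sensor data is expensive, which is exactly why the map prior lowers the load on sensors and processors.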

Maps are an important aspect of vehicle-to-vehicle (V2V) communication as well. Using maps as an extension of a car’s sensors requires relying on other cars for input. This presents the consortium conundrum that we wrote about here. In the realm of V2V communication, where it does no good to ‘out-communicate’ the competition, we believe company collaboration would be beneficial, if not a requirement. Maps shared through a single cloud-based platform are updated frequently, compounding their utility. The minute details of roads are constantly changing – construction zones, fallen trees, and damaged roads must all be mapped and updated to reflect current conditions. This can be accomplished using the cameras and sensors on each car, with some element of automated, Waze-like crowdsourcing from the vehicles themselves. As a vehicle drives, its sensors constantly compare their inputs to the map. When a discrepancy is detected, it is corroborated by other vehicles and changed in the cloud, so every car in the network is up to date. Take, for example, the scenario pictured below.

Here, there are three layers of safety that come from V2V and mapping. As the black car drives by the wreck, it observes a discrepancy in its map and relays that message. The cars involved in the accident share their location and that their speed is zero, and the car blindly approaching the wreck knows to avoid its current lane and switches lanes accordingly. Sensors alone, which have limited range and therefore reaction time, would not have been able to detect and prevent a collision.
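The corroborate-then-commit step described above can be sketched as a small cloud service that only pushes a map change once several independent vehicles report the same discrepancy. This is a minimal sketch under assumed names and thresholds (the `MapUpdateService` class and a quorum of three are inventions for illustration), not any mapping provider's real protocol.

```python
from collections import defaultdict

class MapUpdateService:
    """Toy cloud-side corroboration: commit a map change only after a
    quorum of independent vehicles reports the same discrepancy."""

    def __init__(self, quorum=3):
        self.quorum = quorum             # independent reports required
        self.pending = defaultdict(set)  # discrepancy -> reporting vehicle ids
        self.committed = set()           # changes already pushed to the fleet

    def report(self, vehicle_id, discrepancy):
        """Record one vehicle's observation; return True once committed."""
        if discrepancy in self.committed:
            return True                  # fleet already knows
        self.pending[discrepancy].add(vehicle_id)
        if len(self.pending[discrepancy]) >= self.quorum:
            self.committed.add(discrepancy)   # every networked car now sees it
            del self.pending[discrepancy]
            return True
        return False

svc = MapUpdateService(quorum=3)
svc.report("car_a", "lane2_blocked@mile41")         # first report: pending
svc.report("car_b", "lane2_blocked@mile41")         # second: still pending
done = svc.report("car_c", "lane2_blocked@mile41")  # third report commits
```

Requiring a quorum is one plausible way to keep a single faulty sensor from corrupting the shared map while still propagating real changes, like the wreck in the scenario above, within a few vehicle passes.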

Navigation apps like Google Maps provide more than enough detail to find your way from A to B, but these maps can only locate your car within a margin of several meters – 3D maps must be accurate to within centimeters. They must show the precise location of all street signs, lane markings, curbs, and even deep potholes that should be avoided. Moreover, if we want autonomous vehicles to take us anywhere, we need detailed maps everywhere – and there are more than 4 million miles of roads in the U.S. How do we tackle such a monumental task? This question has provoked the attention and innovative efforts of a host of companies.

Lvl5, a startup founded by former Tesla and iRobot engineers, aims to crowdsource mapping data with its app, Payver. While not all cars are equipped with cameras, nearly all drivers carry smartphones. By mounting your phone aiming out the windshield, you can earn between 2 and 5 cents per mile, depending on whether the road is common or uncharted. The process, which relies heavily on machine vision to stitch together and label each fragmented video, is a logical way to build maps early on, leveraging the user base of smartphones and the sizable number of people who drive for a living for a ridesharing, delivery, or freight service.

Waymo, which has a longer history of mapping tech and a large budget thanks to parent company Google, is taking the opposite approach. In its usual ‘do everything ourselves’ fashion, Waymo is building its maps by driving around in vehicles equipped with spinning LiDAR units. LiDAR provides a much more detailed image of its surroundings than a camera, but still requires substantial human input to label each object. Labeling things like traffic lights, street signs, and buildings is tedious, but necessary so that a car can tell the difference between a tree and a yield sign. This labeling also shows promise of being automated by AI tech similar to Google Lens.

Here and Mobileye have combined many of their efforts around building, maintaining, and distributing maps to become the de facto leader in the space early on. Here, owned by a consortium of German automakers (Audi, BMW, and Mercedes-Benz), has ambitions to build a digitized version of our world that can be interpreted by autonomous vehicles and other machines, with what it calls an open location platform. Here has built out its maps with LiDAR-equipped vehicles and will maintain them with an extensive network of cars outfitted with Mobileye hardware. Mobileye, purchased by Intel earlier this year for $15.3B, offers a range of services for self-driving cars, like sensor fusion and camera tech, but has recently been focusing on mapping. The combined result will be a comprehensive 3D map that is aggregated in the cloud and maintained in near real-time by crowdsourcing data from a network of connected cars. The maps will be sold as a service to which automakers with autonomous systems can subscribe.

Tesla has a distinct advantage stemming from the sheer number of cars they have on the road equipped with the hardware necessary to build maps (cameras, RADAR, and ultrasonic sensors). 3D mapping presents a textbook network effect and Tesla, with thousands of vehicles already in play, is in a great position to take advantage of that market force. The question will be one of communication with other cars as more automakers begin to develop and test autonomous systems.

While the method of building sufficiently detailed maps varies, their importance in the self-driving equation is almost universally agreed upon. In contrast with lively debates over the correct combination of RADAR, LiDAR, and camera sensor arrays, or countless chipmakers jockeying to provide cars with computing power, mapping seems under-appreciated and under-invested. As Mobileye co-founder Amnon Shashua suggests, there are three core elements of self-driving cars – sensing the road, mapping the road, and negotiating your position on the road. 3D maps will be a key determinant of the long-term winners in autonomy.

Thanks to Will Thompson for his work on this note.

Google’s Hardware Push: Ubiquitous AI

Yesterday, Google released eight devices that all screamed a single message – Google’s hardware ambitions are both serious and AI-centric. Google is doubling down on hardware, as a means to democratize the use of AI. The company has recently poured resources into hardware development, and while hardware will always be a fraction of their advertising and cloud businesses, these products will be key factors in Google’s future as a competitor with Apple, Samsung, and Amazon.

Last year, Google charged former Motorola president Rick Osterloh with unifying the new hardware division, and more recently, the company paid $1.1 billion to acquire over 2,000 smartphone engineers and some valuable IP from HTC (although their efforts won’t be reflected in this product line). Now that Google is all-in on hardware, the question becomes whether its marketing, manufacturing, and distribution power can elbow out a position among some of the world’s largest and most successful consumer product companies.

The keynote was heavy on Google’s “AI first” message, and that sentiment took on a more tangible form as each presenter explained how the hardware products will facilitate AI first computing. In 2017, Sundar Pichai has opened each of his public remarks by stating Google’s goal of becoming an AI first company. This has obvious implications for Google’s advertising, Maps, YouTube, and cloud businesses, but we are adding hardware to the list.

We are most excited to see how well Google has woven its Assistant into the fabric of the device ecosystem. This is important because integrating an array of devices (i.e., the handoff between the home and the road) is what will push us toward the next generation of our interaction with machines. Don’t mistake Amazon’s early lead in smart speaker market share for a lead in the AI first world. While Google is light-years behind Alexa’s install base, we believe Google has the best AI (see our comparison here), and its more robust product line could catch up quickly.

In an AI first world (as opposed to mobile first), Sundar explains, computers will adapt to the humans that use them, instead of the other way around. Such computers must do four things: be conversational and sensory, be ambient and multi-device, be thoughtfully contextual, and learn and adapt. How is Google’s hardware collection enabling the AI first transition?

Google’s stepped-up efforts in hardware will have little impact on the broader smartphone market share. We estimate in the U.S. today that Pixel accounts for about 1% of the market, iPhone for 50%, and Samsung for 30%. We believe this increased effort around the Pixel 2 could inch up market share from 1% to 2-3% in the U.S., likely at the cost of other Android devices. As for the smart speaker market, Google’s efforts in the next few years could yield a measurable increase in market share. Today, we estimate various forms of Alexa account for roughly 70-80% and we envision Google’s smart speaker share increasing from about 25% today to greater than 35% in the next 3-5 years.

Pixel 2 and Pixel 2 XL. As smartphones more or less reach parity on technical specs, the Pixel rises above the noise with pure utilitarianism. The phone may not be as aesthetically pleasing as the iPhone X or other competitors, but it is simply built to work around you. Its most compelling feature is the pervasiveness of Assistant, which can be summoned by squeezing the phone’s sides.

Google Home Mini and Max. Both of these devices are answers to the Echo Dot and the HomePod (diversifying on size and sound quality is table stakes now), with the aim of bringing Assistant to as many homes as possible. The Max ($400) appeals to higher-budget audiophiles, and the Mini ($50) is positioned as an impulse buy. Google sweetened the purchase of a Pixel 2 by offering a Mini for free. One of the Pixel 2’s strengths is its integration with Google Home, giving Assistant an ambient presence.

Google Clips. Clips is a wearable camera for capturing candid memories. This $250 product is a head-scratcher and fits better in Facebook’s product roadmap.

Pixel Buds. Google is not kidding about making Assistant a ubiquitous presence. Their answer to AirPods is yet another way to stay connected to Assistant. On the positive side, we liked the real-time language translation. On the negative side, the design of the buds seems clumsy, given they’re connected by a rope-like cable.

Other product announcements. Google also announced a new Pixelbook, an upgraded Daydream VR headset, and plenty of new software features.

Amazon Flooding Smart Speaker Market

With the smart speaker market still comfortably in its youth, and voice as a computing interface just beginning to take hold, no one is sure what the best use cases for the technology are, or what the device that best captures the tech (as the smartphone did for mobile apps) will look like. Amazon’s solution: make a device for just about everything and see what works.

Here’s a breakdown of the new devices announced on the 27th:

Echo – $100

  • The new Echo offers much better sound quality thanks to a dedicated subwoofer and tweeter which means it will compete more directly with Sonos and HomePod. It also has six interchangeable metal or sport fabric finishes for a new look.

Echo Plus – $150

  • The Echo Plus looks like the original Echo, now in silver, white, or black, and also includes upgraded sound. It will act as a hub for smarthome IoT devices like lightbulbs (it comes with a Philips Hue smart bulb), locks, or thermostats.

Echo Spot – $130

  • With a circular 2.5-inch display, the Spot is a cross between the Echo Dot and the Echo Show. It runs certain screen-based apps, makes video calls, and is geared towards use-cases like a smart bedside alarm clock.

Echo Connect – $35

  • This device more or less turns any Echo model into a landline phone, allowing you to make VoIP and traditional calls from your home phone number on your Echo device.

Echo Buttons – $20 for two

  • The Buttons seem to be made for the singular purpose of interactive games like trivia or Simon Says. You can bet we will soon see dedicated games in the Alexa Skills market that require the Buttons. They are also said to be “the first of many Alexa Gadgets.”

New Fire TV – $70

  • The streaming box is smaller than its previous iteration and supports 4K and HDR video at 60fps. You’ll be able to control the new Fire TV with any Alexa device.

These additions come on top of existing products like the Echo, Dot, Tap, Look, and Show. If Amazon’s device lineup seems experimental, that’s because it is. After effectively losing in the mobile space, Amazon’s real goal is to become the de facto platform for the voice-controlled smart home. By flooding the market with hardware for every conceivable use case, Alexa hopes that by the time people realize that voice computing is here to stay, she will already be in enough homes to be the go-to platform that all IoT devices run on. For third party developers of smart home hardware, this is a classic network effect – hardware manufacturers want to make devices compatible with the platform that has the most users, and users want to buy the software (on Alexa speakers) that can control the widest array of smart devices. Furthermore, each new Alexa device means another trove of user data that is used to constantly improve the underlying AI software.

Remember, this is still a minuscule portion of Amazon’s business. Even though the company has more than quintupled the number of people working on Alexa to over 5,000, the vast majority of its revenue comes from other segments like ecommerce and web services. This means Amazon can sell these devices at a narrow margin, focusing instead on penetrating as many homes as possible. This recent product-line revamp makes the flagship Echo less than half the price of Apple’s upcoming HomePod and about on par with Google’s $130 Home device. We have written about our long-term outlook on the smart speaker market (here), which remains unchanged – the winner will be the product that delivers frictionless connectivity between devices and real increased efficiency. Amazon’s sprawling product line and compelling price points are attractive early on, but the company could face difficulties down the road given its lack of an ecosystem of staple products like phones and computers. Nonetheless, Amazon’s aggressive expansion into the space is exciting for both tech-hungry consumers and those of us watching voice-first computing take form before our eyes.

UTM: What is it and why should you care?

Legislation and infrastructure are two of the biggest hurdles facing the unmanned aircraft industry, and an Unmanned Traffic Management (UTM) system is an important part of the solution. With nearly 30 years of experience in the aviation sector over his career at the Department of Transportation, the Federal Aviation Administration (FAA), and the FAA’s Unmanned Aircraft Systems (UAS) Integration Office, Jim Williams is an important influencer in the drone space.

Written by guest author Jim Williams, founder and President of JHW Unmanned Solutions.

What is UTM? Unmanned Aircraft Systems (UAS) Traffic Management (UTM) is a concept created by NASA to safely integrate manned and unmanned aircraft into low-altitude airspace. If you are interested in the details, take a look at NASA’s excellent website for UTM: NASA UTM Home Page. NASA’s goal for UTM is “Enabling Civilian Low-Altitude Airspace and Unmanned Aircraft System Operations”. NASA is applying lessons learned from previous research projects, in which it developed software to improve the efficiency and safety of terminal-area operations at major airports, to low-altitude unmanned aircraft operations. It is building a system that allows operators like Amazon, Google, and now Uber to connect to a central coordinating service to manage unmanned operations at low altitudes (probably no higher than 400 feet above the ground). Uber is included in the conversation because it plans to operate its urban mobility aircraft without a pilot on board; its business model is to have its air taxi aircraft operate autonomously, similar to the way Amazon plans to operate its delivery aircraft.

Why is this capability needed? Currently, the FAA only manages airspace below 400’ near large airports, which leaves the vast majority of the country’s airspace below 1,200’ uncontrolled. Managed airspace is much easier for unmanned aircraft operations, since the air traffic service provider can maintain safe separation between aircraft. In low-altitude uncontrolled airspace, safety depends on manned aircraft pilots seeing and avoiding other aircraft near them. Since the unmanned aircraft pilot is on the ground, she must rely on sensors in the UAS to detect and avoid other aircraft. This adds cost and complexity to the system that can be reduced or eliminated by a functioning UTM system.

The core functionality of a UTM implementation will be managed by a UAS Service Supplier (USS). No companies have yet stepped forward to take on the role of a USS, though several drone software companies have partnered with NASA to support the development of the concept (e.g., Skyward and Airware). The role of the USS is still evolving, but we know that the USS would be a commercial entity with approval and oversight by the FAA. The USS would provide services like:

  • Command and control communications between the UAS pilot and the aircraft
  • Ground-based radar to detect manned aircraft and provide their location to unmanned operators
  • High-density weather sensors to provide critical environmental conditions to operators
  • Coordination with FAA air traffic control facilities when unmanned aircraft need to operate in controlled airspace
  • Access control that limits the airspace to approved operators and helps identify unapproved aircraft to the FAA
  • Management of contingencies that alter routine operations (e.g., a severe weather event)
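To make the access-control and contingency services above concrete, here is a purely illustrative sketch of how a USS might gate a requested flight. This is not a real FAA or NASA interface; the `authorize_flight` function, the operator names, and the rules are all assumptions for the example, loosely based on the 400-foot ceiling and FAA coordination described in this note.

```python
# Illustrative USS-style pre-flight check (hypothetical, not a real API).

CEILING_FT = 400  # NASA's working ceiling for low-altitude UTM operations

def authorize_flight(operator_id, requested_alt_ft, approved_operators,
                     severe_weather=False):
    """Return (approved, reason) for a requested low-altitude operation."""
    if operator_id not in approved_operators:
        # Access control: unapproved aircraft get flagged to the FAA.
        return False, "operator not approved"
    if requested_alt_ft > CEILING_FT:
        # Above the UTM ceiling, the USS must hand off to FAA air traffic control.
        return False, "requires controlled-airspace coordination with ATC"
    if severe_weather:
        # Contingency management: a severe weather event alters routine operations.
        return False, "contingency in effect"
    return True, "cleared"

approved = {"amazon_pa", "uber_elevate"}  # hypothetical approved operators
ok, why = authorize_flight("amazon_pa", 350, approved)
```

Even this toy version shows why a shared USS is attractive: each operator calls one coordinating service instead of every company building its own airspace-management stack.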

As you can see from the diagram below, many functions are required to support the UTM concept. Each green box represents a service that could be provided by one or more companies working together, so each USS could create opportunities for individual companies to provide one or more of the functions listed above.

Notional UTM Architecture from the FAA NASA UTM Research Plan

The USS would be able to charge for its services to cover its costs, but it is unknown whether it would charge per flight or a monthly fee. The FAA’s role would be strictly regulatory oversight, but it would work closely with the USS to make sure the service was safe and fair to all operators. The NASA concept envisions the USS providing service to all operators, like Uber, Amazon, and Google, rather than each operator having to set up these shared services itself. However, the actual business and regulatory model is still evolving. Congress supported the concept in the “FAA Extension, Safety, and Security Act of 2016” by directing the FAA to participate in the NASA UTM program, create a research plan, and create a pilot program. The plan is available on the FAA website (FAA NASA UAS Traffic Management Research Plan), and the NASA demonstration in 2018 is the “Pilot Program” mentioned in the law.

Current visual-line-of-sight operators would be unaffected by the implementation of UTM. The service is directed at enabling UAS flights beyond visual line of sight. It is also intended to enable highly automated operations that would allow multiple UAS to be operated by a single person. Many companies in the industry (e.g., Amazon and Uber) believe high levels of automation are essential to their business models. NASA has conducted two technology demonstrations and plans two more to test the concepts and learn from issues that may come up. The third demonstration is planned for early 2018 and will be the most ambitious: the focus will be on testing technologies that maintain safe spacing between cooperative aircraft (equipped with transponders or ADS-B) and non-cooperative aircraft (detectable only by primary radar) over moderately populated areas. The final demonstration has not been scheduled but would be close to an actual commercial application of the technology. The plan is for the FAA to take over the program in 2019 to establish the policy needed to approve operations.

Amazon and Uber have both stated publicly that they believe UTM is a key enabler for achieving their business plans.  There are several other large corporations who are participating in the UTM program because they see the potential benefits for improving UAS integration into low altitude airspace.

However, there are still many unknowns that make the true business potential of the concept uncertain. The central question is whether a USS can recoup its investment in software and infrastructure with a cost-effective fee structure. I believe the answer is yes, and that we will see the first implementations of UTM by 2020. This opinion is based on the participants in the NASA program and the amount of effort those participants are investing in the demonstrations. Many obstacles remain, but the work is continuing with enthusiastic support from the UAS industry.
