Waymo Unleashes Autonomous Cars – Now Must Earn Public’s Trust

On November 7th, Waymo reached a new milestone: it removed the safety net and began testing fully autonomous vehicles on public roads without someone behind the wheel to take over in case of an emergency. Waymo has been testing in the Phoenix, AZ area for some time now, and other companies like Uber, Cruise, and nuTonomy have similar operations, but there has always been an employee in the driver’s seat. The November 7th test, which took place in Chandler, AZ, marks a new stage for self-driving cars. Alongside it, Waymo has recently made an exciting push to prepare the public for cars that, as the company just proved, are closer to full deployment than many people believe.

Largely considered the leader in autonomy, Waymo has driven a collective 3.5 million autonomous miles on public roads across 20 cities (the equivalent of the average American driving for 291 years). They have also completed over 20,000 different scenario tests at their facility in California, and they simulate 10 million miles of driving per day. “In short: we’re building our vehicles to be the most experienced driver on the road,” they write in a blog post. Along with a growing crowd of competitors, Waymo gets closer each day to deploying a fleet of self-driving cars available to summon at your convenience. Regardless of how advanced the tech may be, however, the reality remains that people simply aren’t ready to drive down the road next to an empty car – perhaps the biggest hurdle facing autonomous vehicles is widespread public acceptance.
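
As a sanity check on that comparison, the math works out assuming the average American drives roughly 12,000 miles per year (our assumption; the figure behind the comparison isn’t stated):

```python
# Back-of-envelope check of the "291 years" comparison. Assumes the average
# American drives ~12,000 miles per year (an assumption, not a stated figure).
autonomous_miles = 3_500_000
avg_annual_miles = 12_000

years_of_driving = autonomous_miles / avg_annual_miles
print(f"{years_of_driving:.0f} years")  # -> 292, in line with the ~291 quoted
```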

Being realistic about autonomy. Waymo knows that in order for mass adoption to take place, the public must first trust autonomous vehicles. Successfully building a vehicle that can operate autonomously is a feat of engineering, but educating the public on its benefits is a different task entirely.

Realizing this fact, Waymo has recently made an impressive effort to prepare the public for what’s coming. They have partnered with organizations like the National Safety Council and Mothers Against Drunk Driving, trained law enforcement departments on how to deal with incidents involving self-driving vehicles, and attempted to be transparent by releasing a comprehensive Safety Report and inviting reporters to a test drive at their facility in Atwater, CA. According to a AAA survey this year, only 20% of Americans would trust an autonomous vehicle to drive itself with them in it. This leaves no doubt that we have a long way to go before this technology becomes mainstream. Waymo, more than any other player in the space, is attacking the problem head on, opening up a dialogue with the public and taking an inclusive approach to educating everyone on the risks and benefits of a new type of mobility.

Partnerships. By engaging the public and partnering with organizations outside of tech and auto, Waymo hopes not only to raise awareness and educate people on self-driving cars, but to demonstrate how they are, in fact, a much safer and smarter mode of transportation. Here are some of the groups that Waymo has teamed up with:

  • The National Safety Council – Focused on areas where the greatest number of preventable injuries or deaths occur, including workplace safety, prescription medicine abuse, teen driving, and cell phone use while driving. 40,000 Americans die on the road each year.
  • Foundation for Senior Living – Believes age shouldn’t slow anyone down. 80% of seniors live in vehicle-dependent suburbs, and there are 45M people in the U.S. over the age of 65.
  • Mothers Against Drunk Driving – Intoxicated driving is the number one cause of death on roadways.
  • The Foundation for Blind Children – Focused on empowering the blind with independence. There are 1.3 million legally blind individuals in the U.S., growing to over 2 million by 2050.
  • East Valley Partnership – Concerned with improving quality of business and life in the East Valley region. Americans spend 50 minutes on average commuting to and from work each day.

Waymo’s Safety Report. “Fully self-driving vehicles will succeed in their promise and gain public acceptance only if they are safe.” This thesis, stated in the early pages of the recently released Safety Report, resonates throughout the next 43 pages as Waymo lays everything on the table. The report details safety procedures, how vehicles respond to numerous situations, how the autonomous systems function, and several other elements that must be understood before one can feel comfortable riding in a self-driving car. It quickly becomes clear that safety is at the core of Waymo’s pitch. As the first voluntary safety assessment of its kind, much of its content will likely be mandated by regulatory bodies going forward.

Law Enforcement Training. Yes, there will still be accidents on the road when cars drive themselves. While we believe there will be radically fewer of them, law enforcement must still understand how to respond to an incident involving a driverless car. Waymo has designed its systems to interact with law enforcement and first responders: audio sensors discern where sirens are coming from, and the cars respond by safely yielding or pulling over to a complete stop. Waymo has also briefed authorities in every city in which they test and conducted on-site trainings to help police and emergency personnel identify and access self-driving cars.
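
To make that behavior concrete, here is a toy sketch of the kind of decision logic described above – the function, inputs, and thresholds are all hypothetical illustrations, not Waymo’s actual system.

```python
from enum import Enum, auto

class Response(Enum):
    CONTINUE = auto()
    YIELD_RIGHT = auto()
    PULL_OVER_AND_STOP = auto()

def respond_to_siren(siren_detected: bool, bearing_deg: float, distance_m: float) -> Response:
    """Toy decision rule for an emergency-vehicle encounter (hypothetical).

    Inputs stand in for what audio sensors would provide: whether a siren is
    heard, its bearing relative to the car's heading (0 = dead ahead,
    +/-180 = directly behind), and an estimated distance in meters.
    """
    if not siren_detected:
        return Response.CONTINUE
    # Siren very close or closing from behind: clear the lane entirely.
    if distance_m < 50 or abs(bearing_deg) > 135:
        return Response.PULL_OVER_AND_STOP
    # Siren audible but not an immediate conflict: slow and yield right of way.
    return Response.YIELD_RIGHT
```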

Test Drives. On October 31st, Waymo hosted a group of journalists at their usually secret testing compound in Atwater, CA. This act is not unprecedented; however, coupled with Waymo’s other recent actions, it represents a level of transparency unmatched by any of their competitors. The group was given a test drive in a mock town they have created, complete with an array of real-life scenarios like an unexpected cyclist cutting in front of the car, or a man standing beside a broken-down Hyundai. Find a detailed write-up of the test drive here or here.

The idea was to give riders a feel for what it’s like to use Waymo’s Chrysler Pacifica minivans as on-demand vehicles. The service will function a lot like current ride-hailing platforms: a rider summons a car with a smartphone app, and the car locates the rider and navigates to their destination. Riders press a friendly, blue “Start Ride” button to embark, and passenger-facing screens show a rider-friendly version of what the car is seeing.
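
That flow maps naturally onto a simple state machine. Below is a minimal sketch of the described experience – every name in it is our own invention, not Waymo’s software.

```python
class WaymoRide:
    """Minimal, hypothetical model of the on-demand ride flow described above."""

    def __init__(self):
        self.state = "IDLE"

    def summon(self, pickup, destination):
        # Rider requests a car from the smartphone app.
        assert self.state == "IDLE"
        self.pickup, self.destination = pickup, destination
        self.state = "EN_ROUTE_TO_RIDER"

    def rider_boarded(self):
        # The car has located and picked up the rider.
        assert self.state == "EN_ROUTE_TO_RIDER"
        self.state = "AWAITING_START"

    def start_ride(self):
        # The friendly, blue button kicks off navigation to the destination.
        assert self.state == "AWAITING_START"
        self.state = "DRIVING_TO_DESTINATION"

ride = WaymoRide()
ride.summon(pickup=(33.30, -111.84), destination=(33.42, -111.94))
ride.rider_boarded()
ride.start_ride()
print(ride.state)  # DRIVING_TO_DESTINATION
```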

Trust building 101 – with transparency. As a cyclist rides by or a car passes, it appears on the monitor – the display even shows trees, parked cars, and buildings in your surroundings. The level of detail Waymo focuses on in the user experience leads one to believe they are months, not years, from deploying their much-talked-about fleet. A video from Waymo shows how remarkably smooth the process is. Between a groundbreaking, successful test and a new level of transparency focused on building trust and engaging the public, we believe Waymo has earned its pole position in the race for autonomy. Their next step will likely be the deployment of a small fleet to a limited group of participants in AZ – stay tuned.

Nvidia Weighs In on Timing for Seismic Tech Shifts

Nvidia Corporation beat expectations today, posting revenue of $2.64B ($2.36B est.), up 32% y/y, and EPS of $1.33 ($0.94 est.), up 60% y/y. In addition, Nvidia is raising its quarterly cash dividend 7 percent to $0.15 per share.
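
The size of the beat falls out of simple arithmetic:

```python
# Back out the size of the beat and the implied year-ago revenue.
revenue, revenue_est, growth = 2.64, 2.36, 0.32  # $B, y/y growth
eps, eps_est = 1.33, 0.94

revenue_beat = revenue / revenue_est - 1      # ~11.9% above consensus
eps_beat = eps / eps_est - 1                  # ~41.5% above consensus
year_ago_revenue = revenue / (1 + growth)     # ~$2.0B a year ago

print(f"revenue beat {revenue_beat:.1%}, EPS beat {eps_beat:.1%}, "
      f"year-ago revenue ${year_ago_revenue:.2f}B")
```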

What we learned about the size and timing of seismic shifts in tech. Today, Nvidia’s bread-and-butter business is data centers and gaming, but the company will evolve to become the hardware foundation beneath AI, autonomy, and cryptocurrencies.

Size. To put the significance of this shift into perspective, CEO Jensen Huang shared on the call:

“I happen to believe that everything that moves will be autonomous some day.” – Jensen Huang

As evidence of everything moving to autonomy, the company reported that DHL is using its Drive PX chips for autonomous light trucks. Separately, the company outlined why in-car infotainment is going to become an important market in the future. As drivers become passengers, their activity inside a car will change, increasing the need for living-room-quality mobile entertainment.

Timing. Separately on the call, Nvidia offered their perspective regarding the timing of these upcoming seismic tech shifts.

  • Expect robotaxis in late 2019 or early 2020.
  • Consumer level 5 fully-autonomous vehicles on the road by late 2020 or 2021.
  • Largely absent from the earnings call was talk about the VR opportunity, suggesting Nvidia sees esports gaming as a bigger opportunity in the near term. This observation does not dampen Loup Ventures’ optimism around VR’s long-term potential.

CPUs passé, GPUs the future. As a starting point, Nvidia is a GPU company. Over the years, Moore’s Law, as applied to CPUs, has been the measurement of computing capacity. Jensen mentioned the well-documented breakdown of Moore’s Law multiple times on tonight’s call, given his belief that it is coming to an end as CPU performance improvements plateau. Nvidia’s belief is that GPU improvement will replace CPU improvement as the measurement of computing capacity, giving the company an open-ended growth opportunity in the years to come.

The plot soon to thicken as Intel tries to catch up in GPUs. Intel has been a laggard in the past year, with INTC shares up 37% compared to NVDA shares up 203%, illustrating investors’ optimism around Nvidia’s 5-year opportunity. However, Intel isn’t letting Nvidia run away with the GPU market; this week it hired AMD’s GPU head, Raja Koduri, to help establish Intel as a player in the GPU space.

Eight Fun Facts About Computer Vision

Our experience of the world is intensely visual. Researchers suggest over half of our brain power is devoted to processing what we see. We talk a lot about how artificial intelligence will transform the world around us, automating physical and knowledge-work tasks. For such a system to exist, it’s clear that we must teach it to see. This is called computer vision, and it is one of the most basic and crucial elements of artificial intelligence. At a high level, endowing machines with the power of sight seems simple: just slap on a webcam and press record. However, vision is our most complex cognitive ability, and machines must not only see, but also understand what they are seeing. They must be able to derive insights from the entirely new layer of data that lies all around them and act on that information.

Despite being an important driver of innovation today, computer vision is little understood by those outside of the tech world. Here are a handful of facts that help put some context around what computer vision is and how far we’ve come in developing it.

1.)  Computer scientists first started thinking about vision roughly 50 years ago. In 1966, MIT professor Seymour Papert gave a group of students an assignment to attach a camera to a computer and describe what it saw, dividing images into “likely objects, likely background areas, and chaos.” Clearly, this was more than a summer project, as we are still working on it half a century later, but it laid the groundwork for what would become one of the fastest-growing and most exciting areas of computer science.
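
For fun, here is how a naive modern attempt at that 1966 assignment might look. The block-variance heuristic and thresholds below are entirely our own illustration of how crude “describing what the camera sees” is without machine learning.

```python
import numpy as np

def papert_summer_project(gray, block=32):
    """A crude stab at the 1966 assignment: label image blocks as likely
    background, likely object, or chaos based on local pixel variance.
    Thresholds are arbitrary -- an illustration, not a real CV method."""
    h, w = gray.shape
    labels = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            var = gray[y:y + block, x:x + block].var()
            if var < 50:
                labels[(y, x)] = "likely background"
            elif var < 1500:
                labels[(y, x)] = "likely object"
            else:
                labels[(y, x)] = "chaos"
    return labels

# Synthetic 64x64 grayscale "image": a flat sky above pure noise.
img = np.vstack([np.full((32, 64), 200.0), np.random.rand(32, 64) * 255])
print(papert_summer_project(img))  # flat blocks -> background, noise -> chaos
```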

2.)  While computer vision (CV) has not reached parity with human ability, its uses are already widespread, and some may be surprising. Scanning a barcode, the yellow first down line while watching football, camera stabilization, tagging friends on Facebook, Snapchat filters, and Google Street View are all common uses of CV.

3.)  In some narrow use cases, computer vision is more effective than human vision. Google’s CV team developed a machine that can diagnose diabetic retinopathy better than a human ophthalmologist. Diabetic retinopathy is a complication that can cause blindness in diabetic patients, but it is treatable if caught early. With a model that has been trained on hundreds of thousands of images, Google uses CV to screen retinal photos in hopes of earlier identification.

4.)  One of the first major industries being transformed by computer vision is an old one you might not expect: farming. Prospera, a startup based in Tel Aviv, uses camera tech to monitor crops and detect diseases like blight. John Deere just paid $305M for a computer-vision company called Blue River, whose technology can identify unwanted plants and douse them in a focused spray of herbicide, eliminating the need to coat entire fields in harmful chemicals. Beyond these examples, there are countless aerial and ground-based drones that monitor crops and soil, as well as robots that use vision to pick produce.

5.)  Fei-Fei Li, head of Stanford’s Vision Lab and one of the world’s leading CV researchers, compares computer vision today to that of a child. Although computers can “see” better than humans in some narrow use cases, even small children are experts at one thing – making sense of the world around them. No one tells a child how to see; they learn through real-world examples. If we consider a child’s eyes as cameras, they take a picture every 200 milliseconds (the average time it takes to make an eye movement), so by age 3 a child will have seen hundreds of millions of pictures – an extensive training set for a model. Seeing is relatively simple, but understanding context and explaining it is extremely complex. That’s why over 50% of the cortex, the surface of the brain, is devoted to processing visual information.
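
The arithmetic behind that “hundreds of millions” figure checks out, assuming roughly 12 waking hours per day (our assumption, not Li’s):

```python
# Rough check of the "hundreds of millions of pictures by age 3" estimate.
pictures_per_second = 1 / 0.2           # one "picture" every 200 ms
waking_seconds_per_day = 12 * 60 * 60   # assumes ~12 waking hours a day
days_by_age_three = 3 * 365

total = pictures_per_second * waking_seconds_per_day * days_by_age_three
print(f"{total:,.0f} pictures")         # -> 236,520,000
```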

6.)  This thinking is what led Fei-Fei Li to create ImageNet in 2007, a database of tens of millions of labeled images for use in image recognition software. That dataset underpins the ImageNet Large Scale Visual Recognition Challenge: since 2010, teams have put their algorithms to the test on ImageNet’s vast trove of data in an annual competition that pushes researchers and computer scientists to raise the bar for computer vision. Don’t worry, the database includes 62,000 images of cats.
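
Models pre-trained on ImageNet are now a commodity. As one illustration, classifying an image takes a few lines with torchvision (the file name “cat.jpg” is hypothetical; any RGB image works):

```python
import torch
from PIL import Image
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet and its matching preprocessing.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("cat.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])  # e.g. "tabby"
```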

7.)  Autonomous driving is probably the biggest opportunity in computer vision today. Creating a self-driving car is almost entirely a computer vision challenge, and a worthy one – 1.25 million people die in auto accidents each year. Aside from figuring out the technology, there are also questions of ethics, like the classic trolley problem: should a self-driving vehicle alter its path into a situation that would kill or injure its passengers in order to save a greater number of people in its current path? Lawyers and politicians might have to sort that one out.

8.)  There’s an accelerator program specifically focused on computer vision, and we’re excited to be participating as mentors. Betaworks is launching Visioncamp, an 11-week program dedicated to ‘camera-first’ applications and services starting in Q1 2018. Betaworks wants to “explore everything that becomes possible when the camera knows what it’s seeing.”

We’re just scratching the surface of what computer vision can accomplish in the future. Self-driving cars, automated manufacturing, augmented and virtual reality, healthcare, surveillance, image recognition, helpful robots, and countless other spaces will all heavily employ CV. The future will be seen.

Robot Fear Index: 30.9

Like many in the tech space, we believe robotics is changing the nature of work; however, public perception of robots is still a question mark. We developed our Robot Fear Index to measure and track the average consumer’s perception of robots. We asked over 500 US consumers about topics ranging from their use of robots at home to their comfort level with self-driving cars. Then we distilled the data down to an index value that we will publish regularly. An index value of 100 suggests widespread and extreme fear of robots; an index value of 0 suggests minimal fear of robots.
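
As a toy example of how survey responses might be distilled into such an index (the actual methodology is not detailed here), one approach is to rescale an average discomfort score onto the 0–100 range:

```python
def fear_index(scores):
    """Toy illustration of distilling survey data into a 0-100 index.

    `scores` holds one discomfort rating per respondent on a 1-10 scale
    (10 = extreme fear). Rescale the mean so 1 -> 0 and 10 -> 100.
    This is NOT the actual Loup Ventures methodology."""
    mean = sum(scores) / len(scores)
    return (mean - 1) / 9 * 100

# A mean discomfort of ~3.8/10 lands near the published value of 30.9.
print(round(fear_index([4, 3, 5, 2, 5]), 1))  # -> 31.1
```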

Robot Fear Index: 30.9. Consumer adoption of artificial intelligence and robotics is already quite broad, and yet fear of robots is also pervasive. We fear that they’ll replace our jobs or somehow overthrow us, and to be blunt, those fears are valid. That said, our 2017 survey indicates acceptance of these technologies continues to grow. Our most recent Robot Fear Index value of 30.9 (vs. 31.5 in late 2016) suggests that public perception of robots is essentially unchanged over the last year despite increased awareness of artificial intelligence, robotics, and the potential impact of these technologies. Notably, the related increase in media coverage of these issues does not seem to be causing the rise in fear that we might expect. In fact, the slight year-over-year decline in our index value suggests slightly less fear of automation technologies.

Survey Demographics. Of the 433 US consumers who responded to our 2017 Robot Fear Survey, 54% were male and 46% female. Our survey population was also equally weighted across all age demographics, as shown in the exhibits below.

Use of Digital Assistants Growing Slowly. We continue to see digital assistants as an onramp to AI and robotics for many consumers. Our 2017 survey shows 69% of consumers have used a digital assistant (Siri, Google Assistant, Alexa), and roughly one-third use a digital assistant once a day or more, in line with our results last year. When asked how many digital assistant devices they own, 21% of consumers said one, while 14% indicated more than one.

Comfort with Robots is Up Slightly. We believe comfort with AI is driving comfort with robotics. We asked consumers how comfortable they are, on a scale of 1 to 10 (1 being the most comfortable), with robots in many different settings, including house cleaning (robot vacuums), healthcare (surgical procedures), and travel (self-driving cars). We were encouraged to see that 7 of the 8 categories we track saw a modest increase in comfort levels around robotics.

Domestic Robot Adoption a Large Catalyst. We believe consumer awareness of robotics is closely correlated with the rise of domestic robots within households. Domestic robots are classified as robot vacuum cleaners, mops, and lawn mowers, and over the next 10 years we believe this category will be one of the fastest-growing robot markets in the world. Our data shows that 75% of US consumers have yet to buy a household robot. Although we do not have the historical data to show y/y comparisons, last week iRobot, a leading maker of robotic vacuums and wet-floor-care robots, reported better-than-expected Q3 results and raised their FY17 revenue guidance for a third consecutive quarter (see note here). Given iRobot’s results, we believe the domestic robot market is seeing strong adoption both domestically and internationally.

What Is Keeping Consumers From Using Robots? Many consumers have not yet adopted AI or robotic technology. When asked what has kept them from using robots, 41% of consumers (36% in 2016) said they are just not interested, while 29% (21% in 2016) believe robots are too expensive. That said, it was encouraging that only 6% of consumers avoid robots because robots make them nervous, down from 11% in 2016. We believe one of the bigger fears around AI and robotics is the risk of job loss. When asked when AI and robotics will cause significant job loss, 27% said within 5 years, 31% said within 10 years, and 24% anticipate significant job loss within 20 years. The remaining 17% of consumers did not believe robots would ever take our jobs.

Bottom Line. Following our 2017 Robot Fear Index survey, we believe consumer fear of robots is essentially unchanged, despite growing awareness of the potential risks of automation. We think our index value of 30.9 quantifies this cautious comfort with robots and we’re looking forward to updating the Robot Fear Index regularly as we track the progress of the robotics theme.

Google Earnings: Moving the Ball Forward on AI

A thought on the quarter. Google reported Sept-17 results tonight, highlighted by 24% FX-neutral revenue growth, comparable to 2016 growth and above 2015’s 20%. It’s worth taking a step back to recall investors’ concern from 2 years ago that revenue growth would drift lower from the high teens to the high single digits by the end of 2017. September results show growth rates running 2x higher than what investors had predicted back in 2015. The reason is that Google has three properties 1.5B+ people can’t live without: Search, Maps, and YouTube. Going forward, we expect the company to add increasing ease of use, utility, and monetization to these products, resulting in 15-20% revenue growth for the foreseeable future.

Sundar leads off with AI for 4th consecutive quarter. It’s clear that Sundar is trying to get his point across: AI is the future of Google. We went back and looked at his opening comments over the last year and found he has led his prepared remarks by asserting Google’s evolution from a mobile-first to an AI-first company on each of the past four earnings calls. Below are his opening remarks over those four quarters.

  • Sept-17 – “Thank you, Ruth. We had another great quarter. (omitting 1 line) Even though we are in the early days of AI, we are already rethinking how to build products around machine learning. It’s a new paradigm compared to mobile first software, and I’m thrilled how Google is leading the way.”
  • Jun-17 – “Thanks, Ruth. We had a phenomenal quarter. Google continues to lead the shift to AI-driven computing.”
  • Mar-17 – “Thanks, Ruth. It’s been a terrific start to the year. (omitting 10 lines) Now turning first to machine learning and access to information. I’m really happy with how we are transitioning to an AI-first company.”
  • Dec-16 – “Thanks, Ruth. 2016 was a great year for Google and 2017 is shaping up to be even more exciting. (omitting 11 lines) First, machine learning and access to information. As I’ve shared before, computing is moving from a mobile-first to AI-first with more universal, ambient and intelligent computing that you can interact with naturally, all made smarter by the progress we are making with machine learning.”

Buzzword Bingo. For the same four quarters, we tracked how many times presenters and analysts commented on artificial intelligence by tallying instances of AI jargon (AI, artificial intelligence, machine learning, deep learning, TensorFlow, natural language processing). We also noted mentions during prepared remarks vs. those during Q&A. This is evidence of the intensity with which Google is pursuing AI.
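
A tally like this is straightforward to automate. Below is a minimal sketch that counts jargon mentions in a transcript string; the function and sample text are our own illustration, not how the tally above was compiled.

```python
import re
from collections import Counter

JARGON = ["artificial intelligence", "machine learning", "deep learning",
          "TensorFlow", "natural language processing", "AI"]

def tally_jargon(transcript):
    """Count AI-jargon mentions in an earnings-call transcript."""
    counts = Counter()
    for term in JARGON:
        # Whole-word matches; case-insensitive except for the acronym "AI".
        flags = 0 if term == "AI" else re.IGNORECASE
        counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", transcript, flags))
    return counts

sample = "We are an AI-first company; machine learning makes our AI smarter."
print(tally_jargon(sample))  # AI: 2, machine learning: 1, everything else: 0
```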

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.