Humans Are a Bigger Existential Risk Than AI

Elon Musk continues to warn us of the potential dangers of AI, from debating the topic with Mark Zuckerberg to saying it’s more dangerous than North Korea. He’s called for regulating AI, just as we regulate other industries that can be dangerous to humans. However, Musk and the other AI debaters underestimate the biggest threat to humanity in the AI era: humans.

For the purposes of the current debate, debaters of artificial intelligence propose three potential outcomes:

  1. AI is the greatest invention in human history and could lead to prosperity for all.
  2. A malevolent AI could destroy humanity.
  3. An “unwitting” AI could destroy humanity.

There are few arguments in between worth considering. If the first possibility were not the ultimate benefit, then the development of AI wouldn’t be worth exploring given the ultimate risks (2 and 3).

There’s certainly a non-zero chance that a malevolent AI destroys humanity if one were to develop; however, malevolence requires intent, which would require at least human-level intelligence (artificial general intelligence, or AGI), and that is probably several decades away.

There’s also a non-zero chance that a benign AI destroys humanity because of some effort that conflicts with human survival. In other words, the AI destroys humanity as collateral damage relative to some other goal. We’ve seen early AI systems begin to act on their own in benign ways where humans were able to stop them. A more advanced AI with a survival instinct might be more difficult to stop.

There’s also a wild card relative to the first outcome that both sides of the AI debate overlook. On the road to scenario one, the positive outcome and probably the most likely outcome, humans will need to adapt to a new world where jobs are scarce or radically different from the work we know today. Humans will need to find new purpose outside of work, likely in the uniquely human capabilities of creativity, community, and empathy, the things that robots cannot authentically provide. This radical change will likely scare many. They may rebel with hate toward robots and the humans that embrace them. They may rally behind leaders that promise to keep the world free of AI. This could leave us with a world looking more like The Walking Dead than utopia.

Since the advent of modern medicine, humans have been the most probable existential threat to humanity. The warning bells on AI are valid given the severity of the potential negative outcomes (even if unlikely), and some form of AI regulation makes sense, but it must be paired with plans to make sure we address the human element of the technology as well. We need to prepare humans for a post-work world in which different skills are valuable. We need to consider how to distribute the benefits of AI to the broader population via a basic income. We need to transform how people think about their purpose. These are the biggest problems we face as we prepare to enter the Automation Age, perhaps even bigger than the technical challenges of creating the AI that will take us there.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Machines Taking Jobs: Why This Time Is Different

Will AI and robotics revolutionize human labor or not? 

More than half of all US jobs could be disrupted by automation in the next several decades; at least that’s our opinion. About half the people we talk to disagree. Those who disagree think AI will open up new job opportunities by enhancing human abilities. A common element of their argument is that we’ve always had technical innovation and human work has evolved with it. A few examples would be the cotton gin, the printing press, and the automobile. All of these inventions threatened the jobs of their era, but they ended up creating more jobs than they destroyed. So why is this time different?

Because, for the first time in history, we don’t need to rely on human intelligence to operate the machines of the future. The common denominator among those three examples and countless other technical innovations is that they were simply dumb tools. There was no on-board intelligence. Humans using those tools provided the intelligence layer. Humans were the brains of the cotton gins, printing presses, and automobiles. If the human operator saw or heard a problem, they fixed it and continued working. Today, the intelligence layer can be provided by computers through computer vision, natural language processing, machine learning, etc. Human intelligence is no longer required.

You might say that machines aren’t nearly as smart as humans, so they aren’t as capable as humans. In reality, they don’t need to be. The AI required to operate a machine needs only very limited domain knowledge, not human-level intelligence (a.k.a. artificial general intelligence). Think about driving a car. You aren’t using 100% of your total intelligence to drive; a large portion is occupied with other things, like disagreeing with this article, singing along with the radio, and probably texting. An autonomous driving system only needs to process image data; communicate with computers on other driving-related devices, like other vehicles, traffic signals, and maybe even the road itself; make dynamic calculations based on those data inputs; and turn those calculations into actions performed by the vehicle. Any incremental intelligence not related to those core functions is irrelevant for an autonomous driving system.
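To make the narrow-domain point concrete, below is a minimal, hypothetical sketch of that sense-compute-act loop. Every function and name is illustrative rather than any vendor’s actual software; the point is that each piece needs only domain-specific competence, not general intelligence.

```python
# A minimal, hypothetical sketch of the narrow sense-compute-act loop
# described above. Every name here is illustrative; real systems are far
# more elaborate, but each function is domain-specific by design.

from dataclasses import dataclass

@dataclass
class Action:
    steering: float  # radians
    throttle: float  # 0 to 1

def perceive(frame):
    """Computer vision: detect obstacles and lanes in an image frame (stubbed)."""
    return {"obstacles": [], "lanes": []}

def fuse(detections, v2x_messages):
    """Combine camera detections with messages from other vehicles and infrastructure."""
    return {**detections, "v2x": v2x_messages}

def plan(world):
    """Dynamic calculation: choose an action from the fused world model."""
    clear = not world["obstacles"]
    return Action(steering=0.0, throttle=0.2 if clear else 0.0)

def drive_step(frame, v2x_messages):
    """One pass of the loop: sense, fuse, decide, act."""
    return plan(fuse(perceive(frame), v2x_messages))

print(drive_step(frame=None, v2x_messages=[]))
```

Nothing in this loop calls for reasoning outside driving; that is the entire argument for why narrow AI suffices to displace the human operator.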

The magnitude of the technological change is also significantly different in this current wave of advancements in AI and robotics. This wave is more akin to the advent of the farm, when humans were still gatherers, or the advent of the factory, when we were still farmers. Farms not only organized the production of food, but also encouraged the development of community and trade. Factories organized the production of all goods, encouraged the development of cities, and enabled our modern economic system by institutionalizing the trade of labor for wages. Automation will bring equivalent fundamental changes to the philosophy of production by taking it out of the hands of humans. This could result in societal changes such as greater freedom of location and a basic income. In a way, the Automation Age may be an enhanced return to the hunter-gatherer period of humanity, when basic needs were provided by nature; in the future, they will be provided by machines. Except in the Automation Age, our purpose will be to explore what it means to be human rather than simply survive.


Faceoff: Amazon Echo Show vs Google Home Part II

As a part of our continuing efforts to understand the ways and speed at which artificial intelligence enters our everyday lives, we reexamined two home assistants based on a study we performed in February. The two most popular assistants, Google Home and the Amazon Echo, were put to the test, this time substituting the Echo with the Echo Show, which includes a 7″ touchscreen.

Methodology. For this experiment, we asked the same 800 queries of both the Echo Show and Google Home, similar to our first study. We graded the queries on two metrics: First, did the device correctly understand what we asked? Second, did the device answer the query correctly? In our study, Amazon’s Echo Show understood 95.88% of the queries we asked and answered 53.57% of all queries correctly. Google Home understood 94.63% of the queries we asked, but was able to answer 65.25% correctly. Below, you can see the improvements that each home assistant made since our last set of queries.

One advantage the Amazon Echo Show has when it comes to understanding queries is that we can confirm the data using Amazon’s companion app, which gives the user a live feed of what the Echo Show heard. Google Home does not offer a transcript of what its home assistant device picked up. Because of this, it was difficult to tell whether Google Home understood queries but couldn’t answer them, or truly had a harder time understanding them. Since we were unable to see exactly how well Google Home understood our queries, we assumed that if Google Home responded that it was unable to perform a certain function, it had understood the query correctly. For example, if we asked, “Hey Google, send a text to John” and received the response “Sorry, I can’t send texts yet,” the query was marked as understood correctly, but answered incorrectly.
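For readers who want to replicate the scoring, here is a minimal sketch of how the two metrics, including the understood-but-declined rule above, could be tallied. The sample queries and the decline marker are hypothetical stand-ins, not our actual query set or the devices’ exact wording.

```python
# Hypothetical sketch of the two-metric grading described above. A query
# counts as understood if the transcript confirms it (Echo Show) or if the
# device declined with a "can't do that yet" response (our Google Home rule);
# it counts as answered only if the response matches the expected answer.

DECLINE_MARKER = "sorry, i can't"  # illustrative, not the devices' exact wording

def grade(transcript_ok, response, expected):
    understood = transcript_ok or response.lower().startswith(DECLINE_MARKER)
    answered = understood and response.lower() == expected.lower()
    return understood, answered

# Two toy queries: one answered correctly, one understood but declined.
results = [
    grade(True, "A gigawatt is one billion watts", "A gigawatt is one billion watts"),
    grade(False, "Sorry, I can't send texts yet", "Text sent to John"),
]

understood_pct = sum(u for u, _ in results) / len(results)
answered_pct = sum(a for _, a in results) / len(results)
print(f"understood: {understood_pct:.0%}, answered correctly: {answered_pct:.0%}")
```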

Results. Both home assistants showed increased performance across the board. This time the Google Home outperformed the Echo Show in total number of correct answers by nearly 12 percentage points, up from a 5-point performance gap in our February results. While each digital assistant has its strengths and weaknesses, Google Home outperformed its rival in 3 of the 5 query categories by a surprising margin. This is significant because it shows not only rapid improvement, but outperformance of Amazon, which has both a 2-year head start and a near-70% share of the home assistant market vs. Google’s 24%, according to eMarketer.

Both Home Assistants Notably Improved in Navigation. The most dramatic increase for both assistants was in navigation. In February, over 90% of navigation questions were answered with: “I can’t help you with that.” Today, navigation is the best category for both the Google Home and the Echo Show, with the Google Home answering 92% of queries correctly, and the Echo Show answering 63% of queries correctly.

Echo Show: Screen adds to experience, but software upgrades drive improvement. The Echo Show’s camera and touchscreen allow it to make video calls, monitor your security cameras, and visually display some forms of information, and they introduce new use cases with Alexa Skills that incorporate a screen. For instance, you can say, “Alexa, show me the trailer for the new Spiderman movie,” or scroll through recommendations for local pizzerias. While this adds to the user experience, the screen itself isn’t driving all of the improvement we are seeing with Alexa. Instead, numerous software updates have expanded the ways Alexa can contribute to our daily lives. The Echo Show showed nearly 20% improvement in its ability both to answer local questions (“Where can I find good barbecue?”) and to respond to commands (“Cancel my 2:00 p.m. meeting tomorrow”). Both of these changes are driven by software improvements, not the addition of the screen.

Google Home: Quickly adding features to pass Alexa. Google Home improved its local and commerce results by 46 percentage points and 24 percentage points, respectively. This represents a broadening of its skills along with high navigation, information, and local scores. Google Home also supports up to 6 different user accounts, meaning your whole family can get personalized responses when you say, “Okay Google, what’s on my calendar today?” Google Home will recognize your voice and read your upcoming events. Separately, commerce is an area that was previously dominated by Amazon, but Google is now at parity, mainly due to its superior ability to understand more diverse natural language. While Alexa still has a larger database of add-on skills, Google Home outperformed in our set of queries.

Future home assistant competition looks intense. While Amazon and Google are the current frontrunners in the home assistant race, they face competition from several notable entrants, recent and upcoming:

  • Apple HomePod (expected December 2017)
  • Alibaba Tmall Genie (released August 8th, 2017)
  • Microsoft Invoke (expected Fall 2017)
  • Lenovo Smart Assistant (utilizing Alexa, expected Fall 2017)
  • HP Cortana Speaker
  • Samsung Vega

Persisting Problems of Home Assistants. While home assistants continue to make noticeable improvements, we still believe they are in the early innings of a platform that will become an important part of computing in the future. That being said, there are small, technologically reasonable improvements that we would like to see from these products. Our main complaint is the lack of integration with devices to make use of information or take further action. In most cases, the fastest way to get information to a user is on a screen; it’s hardly convenient to have a list of 10 restaurant recommendations read to you one at a time. Instead, you should be able to call up information verbally and have it sent to your smartphone, computer screen, or television. The Echo is able to interact with your phone via the Alexa app. Google Home can control a Chromecast. Both are able to control certain smart home devices. There is clear progress being made on this front, but it remains a key obstacle to the devices’ effectiveness. Another shortcoming that persists is unsatisfactory natural language processing, an added barrier to widespread use. Both assistants were selective in the way you had to phrase a question in order for it to be answered correctly. For example, Google Home will understand “What is a gigawatt?” but cannot process “Google gigawatt” or “Tell me what a gigawatt is.” In order for digital assistants to reach widespread adoption, users need to interact with them seamlessly.

Overall, we were impressed by the improvement that took place in a few short months and remain optimistic that the technology will continue to advance at this pace going forward.  As new players enter the space and homes become more connected, the technology in these devices will be increasingly important in our everyday lives.  Later this year we will track the further progress made by the Echo and the Home, and compare them to some of the new entrants set to arrive by the end of 2017.


AGV Deep Dive: How Amazon’s 2012 Acquisition Sparked a $10B Market

Special thanks to Austin Bohlig for his work on this note. 

Amazon’s 2012 acquisition of Kiva Systems was the spark that ignited the Autonomous Guided Vehicles (AGV) industry, which we believe will represent a $10B market by 2025. We’ve taken a deep dive into the AGV market, where we identify the different use cases for AGVs, market leaders and opportunity, as well as highlight specific areas where we see the best investment opportunity.

We believe the aggregate robotics hardware market will grow 17% y/y to $24.5B in 2017, and by 2025 we believe the market will eclipse $73B. When including supporting software and services, we believe the total robotics market will be more than $200B in the next ten years. Many categories within the 5 robotics domains (industrial, commercial, domestic, military and social entertainment) will flourish over this time frame.
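As a sanity check on what those two endpoints imply, the move from $24.5B in 2017 to $73B in 2025 works out to a compound annual growth rate of roughly 15%; a quick sketch of the arithmetic:

```python
# Implied compound annual growth rate behind the hardware projection above.
start, end, years = 24.5, 73.0, 2025 - 2017   # $B in 2017, $B in 2025, 8 years
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")            # roughly 15% per year
```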

We are particularly excited about the impact three categories will have on the world: collaborative robots (co-bots), domestic robots (e.g., robot vacuums, mops, and lawnmowers), and Autonomous Guided Vehicles. While we have recently picked up positive data points in the co-bot and domestic robot markets, the AGV market is harder to track due to the limited number of publicly traded companies in the space. However, based on the number of AGVs Amazon is deploying internally, as well as the amount of funding and M&A activity occurring in the space, we are convinced this sub-segment of the commercial robot market is inflecting.

What Is An Autonomous Guided Vehicle (AGV)?

AGVs are mobile robots used in manufacturing and other commercial industries to improve logistics efficiency by transporting goods and other materials autonomously. The major benefits of AGVs are twofold: 1) these robots do not require human interaction when deployed; 2) AGVs do not require the supporting infrastructure (tracks, floor sensors, etc.) needed to operate legacy material handling equipment. Without the need for supporting infrastructure, these robots are more flexible and have a lower total cost of ownership. Advancements in Simultaneous Localization and Mapping (SLAM) software and computer vision technologies allow these robots to understand their surrounding environment in real time, which lets them operate in variable surroundings and around people. Pricing on AGVs has come down significantly over the last 5 years, which has been a catalyst for the industry. Today, AGV pricing ranges from $35K to $50K (not including supporting software and services). Below we highlight a few examples of AGVs in the market today.

Amazon Sparked the AGV Industry

The AGV market flew under the radar throughout the early 2000s, but in 2012 the industry became one of the most talked-about sub-markets in the robotics space after Amazon acquired the AGV leader, Kiva Systems, for $775M. Amazon had no plans to sell these robots externally, using them only internally to improve logistics efficiency within its fulfillment centers, which created a significant supply/demand imbalance and a massive opportunity for other companies to enter the space. Since deploying Kiva robots, Amazon has publicly highlighted the positive impact the robots are having on productivity and efficiency. According to a 2017 Business Insider article, Amazon has deployed 15K mobile robots annually since 2014 and now has over 45K robots in operation throughout 20 fulfillment centers. These data points show the benefits of AGVs and validate that this market is real.

AGV Applications: Today Warehouses; Tomorrow Hospitals, Hotels, and Beyond

Today, most AGVs are deployed within warehouses and fulfillment centers to automate material handling and package logistics. Robots in these settings autonomously retrieve a shelf or bin and return to a packaging station, where a human employee picks specific SKUs out of the bin. Or, more commonly, a human follows the AGV around a warehouse, and the AGV stops in front of specific spots where the human places the desired product in a bin. While most AGV products must be purchased outright, a few companies can retrofit legacy equipment with autonomous technologies, transforming it into AGVs. A few companies are also taking automation to the next level by adding a robot arm to pick the desired object, taking humans completely out of the equation. While this is where the industry is heading, object recognition and grasping are two of the toughest challenges to solve in this space. Random pick-and-place is considered the “holy grail” of robotics, and it will take time for humans to be fully eliminated from the warehouse.

While we believe AGV adoption within warehouses and fulfillment centers will be a key industry driver, we believe opportunities in other verticals will add meaningful tailwinds to this market. For example, AGVs are already being deployed in hospitals to autonomously transport food, prescriptions, and other medical supplies throughout a medical facility. In addition, manufacturers across industries are adopting these technologies because of their cost advantages and flexibility over legacy solutions. We also see a large opportunity for AGVs in many commercial services settings, such as delivering products to rooms in a hotel, and eventually for companies such as Amazon to use AGVs to deliver packages autonomously.


Mcity Expert Weighs In On Autonomous Driving

“People tend to overestimate the amount of change that can happen in the near-term, and understate it in the long-term.” Huei Peng, Director of Mcity

Last week we made a trip to Ann Arbor, Michigan to hear more about Mcity, a public-private partnership that focuses on the research, development, and deployment of connected and automated vehicles. It runs the Mcity Test Facility, a 32-acre proving ground for advanced mobility vehicles and technologies, which opened in 2015 at the University of Michigan. The Mcity Test Facility has eight connected intersections, a traffic control center, cameras, radars, and a small fleet of its own fully automated driverless vehicles. The partnership comprises more than 65 industry members, including BMW, Ford, GM, Honda, Toyota, Intel, Qualcomm, and State Farm. It also leverages many government-funded projects from the U.S. Department of Transportation and the U.S. Department of Energy.

Mock city approach. While there are other paved test facilities used for connected and autonomous vehicle testing, Mcity is the first purpose-built mock city in the United States designed specifically for autonomous driving research. We see this mock city approach as an important avenue to advance autonomy. It’s important to note that Waymo’s and Uber’s projects in Phoenix and Pittsburgh test largely on public roads. Testing in a controlled environment such as the Mcity Test Facility has many benefits: the tests are safer, cheaper, faster, and repeatable.

Importance of DSRC. One focus at Mcity is dedicated short-range communications, or DSRC. DSRC is designed specifically for automotive safety applications and enables vehicle-to-vehicle or vehicle-to-infrastructure communications with an effective range of 1,000 feet. DSRC, like other wireless communications, does not need line-of-sight visibility to detect potential safety threats, such as an unseen vehicle ahead stopping suddenly in a snowstorm. DSRC can essentially serve as another sensor, providing useful vehicle and traffic information to support autonomous driving. We were surprised to hear that many traditional auto manufacturers believe DSRC makes cars safer, yet Waymo and Tesla are not taking advantage of its benefits.

When will we have self-driving cars? Heading into our visit, we wanted to get a read on when consumers will be able to purchase a fully autonomous vehicle in the United States. Peng cautioned that while autonomous vehicles are ready for some niche and limited applications, anytime-anywhere driverless vehicles may take longer than we think, commenting that “developing a car that is 90% safe is relatively easy, 99% safe is harder, and the remaining 1% will take a very long time.” While Peng stopped short of predicting a rollout year, our sense is that 2025 will be the year when some autonomous vehicles are on par with, or better than, average human drivers in most driving conditions. It is clear that a lot needs to happen between now and the time we reach full autonomy. Peng illustrated this with an analogy: self-driving cars today are at a relatively early stage of development, much like aircraft before the airline industry invested a tremendous amount of time testing the new technologies that led to highly automated planes like the fly-by-wire Boeing 777. Getting to that level of readiness is necessary for autonomous driving to reach mainstream use and will require much more evaluation and testing.

We took our findings from Mcity and applied them to the timing of Tesla autonomy, AI’s fit in auto, and the quality of miles-driven data. It’s important to note that the insights below are Loup Ventures’ own.

We’re believers in Tesla, despite the fact that we think they’ll miss their autonomy launch target. We expect Tesla to miss its 2019 autonomy launch target and see 2022 as a more realistic rollout year. Elon Musk has been clear that he expects each Tesla made today to be capable of autonomy in 2019. What’s not clear about the 2019 target is what level of autonomy will be reached. At the recent Model 3 hand-off event, Musk made a reference to sleeping in your vehicle as an acceptable activity under autonomy, suggesting the 2019 goal is to reach Level 4 or Level 5. Getting to Level 5 is a light-year leap from Level 4. Level 4 autonomy means no driver is needed, but the vehicle’s speed, range, and weather conditions (snow is a material problem) are limited. Level 2 autonomy is available today in some production cars as advanced driver assistance. It’s important to note that Tesla’s approach is evolutionary, moving from Autopilot to Advanced Autopilot and finally to self-driving. This is different from Waymo’s revolutionary approach of entering the market with a Level 4 or 5 autonomous vehicle.

AI’s fit. When we think about AI being better than humans, we think of cases where machines have defeated humans, as in chess or video games. Those examples live in worlds with a finite number of choices. Driving in the real world, on the other hand, presents an effectively infinite number of choices. Waymo, Uber, and Tesla are at an advantage when it comes to tackling those infinite choices, given that they’ve already logged significant miles to feed their respective self-driving neural networks. We believe the miles-driven gap will be hard for new self-driving players to close. Because the problem is so complex, the neural network will need to learn from hundreds of billions of miles (today the industry is below 2 billion miles). Also, consider that a car can only drive on roads it’s trained on, so a U.S.-trained autonomous car can’t drive in China.
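To put the miles gap in perspective, here’s a back-of-the-envelope sketch. The fleet size and per-car mileage are purely illustrative assumptions on our part, not reported figures:

```python
# Back-of-the-envelope: years for a fleet to log a target number of real-world
# miles. Fleet size and per-car mileage are illustrative assumptions, not data.
target_miles = 100e9             # low end of "hundreds of billions" of miles
fleet_size = 1_000_000           # hypothetical fleet of data-logging cars
miles_per_car_per_year = 12_000  # roughly a typical US driver's annual mileage
years = target_miles / (fleet_size * miles_per_car_per_year)
print(f"~{years:.0f} years at these rates")  # about 8 years under these assumptions
```

Even a million-car fleet would need most of a decade under these assumptions, which is why we believe the data lead of incumbents is so hard to close.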

Quality of data. While it’s true that AI with more miles is better than AI with fewer, it’s important to understand the distinctions between the data captured by each player. We believe Waymo has saved most, if not all, of the critical data from its 3 million miles, and we have questions about how much data is stored and shared back by Tesla’s owners. Most Tesla data is discarded, since a once-a-week Wi-Fi connection doesn’t have enough bandwidth to share all of it.

We left our visit to Mcity with a better sense of how traditional auto and tech companies are approaching autonomy. In addition, we reached a few new insights about the timing question, including that the timing question itself may be less relevant than the magnitude of the change. We think back to Peng’s flight analogy and believe we’re underestimating the significance of the change autonomous vehicles will bring to the world.
