Humans Are a Bigger Existential Risk Than AI

Elon Musk continues to warn us of the potential dangers of AI, from debating the topic with Mark Zuckerberg to saying it’s more dangerous than North Korea. He’s called for regulating AI, just as we regulate other industries that can be dangerous to humans. However, Musk and the other AI debaters underestimate the biggest threat to humanity in the AI era: humans.

For the purposes of the current debate, there are three potential outcomes that debaters of artificial intelligence propose:

  1. AI is the greatest invention in human history and could lead to prosperity for all.
  2. A malevolent AI could destroy humanity.
  3. An “unwitting” AI could destroy humanity.

There are few arguments in between worth considering. If the first possibility were not the ultimate benefit, then the development of AI wouldn’t be worth exploring given the ultimate risks (2 and 3).

There’s certainly a non-zero chance that a malevolent AI destroys humanity if one were to develop; however, malevolence requires intent, which would require at least human-level intelligence (artificial general intelligence, or AGI), and that is probably several decades away.

There’s also a non-zero chance that a benign AI destroys humanity because of some effort that conflicts with human survival. In other words, the AI destroys humanity as collateral damage relative to some other goal. We’ve seen early AI systems begin to act on their own in benign ways where humans were able to stop them. A more advanced AI with a survival instinct might be more difficult to stop.

There’s also a wild card relative to the first outcome that both sides of the AI debate overlook. On the road to scenario one, the positive outcome and probably the most likely outcome, humans will need to adapt to a new world where jobs are scarce or radically different from the work we know today. Humans will need to find new purpose outside of work, likely in the uniquely human capabilities of creativity, community, and empathy, the things that robots cannot authentically provide. This radical change will likely scare many. They may rebel with hate toward robots and the humans who embrace them. They may band together behind leaders who promise to keep the world free of AI. This could leave us with a world looking more like The Walking Dead than utopia.

Since the advent of modern medicine, humans have been the most probable existential threat to humanity. The warning bells on AI are valid given the severity of the potential negative outcomes (even if unlikely), and some form of AI regulation makes sense, but it must be paired with plans to make sure we address the human element of the technology as well. We need to prepare humans for a post-work world in which different skills are valuable. We need to consider how to distribute the benefits of AI to the broader population via a basic income. We need to transform how people think about their purpose. These are the biggest problems we face as we prepare to enter the Automation Age, perhaps even bigger than the technical challenges of creating the AI that will take us there.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Machines Taking Jobs: Why This Time Is Different

Will AI and robotics revolutionize human labor or not? 

More than half of all US jobs could be disrupted by automation in the next several decades; at least that’s our opinion. About half the people we talk to disagree. Those who disagree think AI will open up new job opportunities by enhancing human abilities. A common element of their argument is that we’ve always had technical innovation and human work has evolved with it. A few examples would be the cotton gin, the printing press, and the automobile. All of these inventions threatened the jobs of their era, but they ended up creating more jobs than they destroyed. So why is this time different?

Because, for the first time in history, we don’t need to rely on human intelligence to operate the machines of the future. The common denominator among those three examples and countless other technical innovations is that they were simply dumb tools. There was no on-board intelligence. Humans using those tools provided the intelligence layer. Humans were the brains of the cotton gins, printing presses, and automobiles. If the human operator saw or heard a problem, they fixed it and continued working. Today, the intelligence layer can be provided by computers through computer vision, natural language processing, machine learning, etc. Human intelligence is no longer required.

You might say that machines aren’t nearly as smart as humans, so they aren’t as capable as humans. But in reality, they don’t need to be. The AI required to operate a machine needs only limited domain knowledge, not human-level intelligence (a.k.a. artificial general intelligence). Think about driving a car. You aren’t using 100% of your total intelligence to drive a car. A large portion of it is occupied with other things, like disagreeing with this article, singing along with the radio, and probably texting. An autonomous driving system only needs to process image data; communicate with computers on other driving-related devices, like other vehicles, traffic signals, and maybe even the road itself; make dynamic calculations based on those data inputs; and turn those calculations into actions performed by the vehicle. Any incremental intelligence not related to those core functions is irrelevant for an autonomous driving system.
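To make the narrow-domain point concrete, below is a minimal, purely illustrative sketch of the kind of perception-and-planning loop such a system runs. The structure and function names are our own simplification, not any automaker’s actual software.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Obstacle:
    distance_m: float  # gap to the object ahead, in meters


@dataclass
class Controls:
    throttle: float  # 0.0 (none) to 1.0 (full)
    brake: float     # 0.0 (none) to 1.0 (full)


def detect_obstacles(camera_frame) -> List[Obstacle]:
    """Stand-in for the computer-vision layer (object detection on camera data)."""
    return []  # a real system would run a trained detector here


def plan(obstacles: List[Obstacle], current_speed: float, speed_limit: float) -> Controls:
    """Narrow, driving-specific decision logic: keep a safe gap and obey the limit.

    No general intelligence involved, only rules over driving-related inputs.
    """
    nearest = min((o.distance_m for o in obstacles), default=float("inf"))
    if nearest < 30.0:                 # something close ahead: brake
        return Controls(throttle=0.0, brake=0.8)
    if current_speed < speed_limit:    # clear road: accelerate gently
        return Controls(throttle=0.3, brake=0.0)
    return Controls(throttle=0.0, brake=0.0)  # at the limit: coast


def drive_step(camera_frame, current_speed: float, speed_limit: float) -> Controls:
    """One perception -> planning tick; actuation and vehicle-to-vehicle messaging are omitted."""
    return plan(detect_obstacles(camera_frame), current_speed, speed_limit)
```

The point of the sketch is that every input and rule is driving-specific; nothing outside that narrow domain is needed for the machine to do the job.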

The magnitude of the technological change is also significantly different in this current wave of advancements in AI and robotics. This wave is more akin to the advent of the farm when humans were still gatherers, or the advent of the factory when we were still farmers. Farms not only organized the production of food, but also encouraged the development of community and trade. Factories organized the production of all goods, encouraged the development of cities, and enabled our modern economic system by institutionalizing the trade of labor for wages. Automation will result in equivalent fundamental changes to the philosophy of production by taking it out of the hands of humans. This could result in societal changes of greater freedom of location and a basic income. In a way, the Automation Age may be an enhanced return to the hunter/gatherer period of humanity where basic needs were provided, originally by nature, in the future by machines. Except in the Automation Age, our purpose will be to explore what it means to be human instead of simply survive.

Tesla’s $1.8 Billion Insurance Policy

This note was originally published as an op-ed last week in Fortune. Details around Tesla’s debt raise have been updated.

If only traditional auto and other tech companies were as bold as Tesla. It’s betting itself on the Model 3 moonshot, but de-risking that bet by taking out an insurance policy: selling $1.8 billion in debt at a 5.3% yield, more than the targeted $1.5 billion raise at 5.25%.

It’s important to understand that Tesla does not need the $1.8 billion in debt to ramp Model 3 production. The company can fund the nearly $2 billion manufacturing investment out of its $3 billion cash balance. But Tesla is raising money because it’s critical it has an insurance policy if the Model 3 ramp hits any unexpected issues. Things are going well right now, and raising the money will be much easier than it would be in the future if Tesla faced unexpected issues. If Tesla doesn’t fund for the unexpected, CEO Elon Musk may lose the company.

To illustrate what’s at stake, let’s imagine that Tesla does not raise money today, and a year from now, something unforeseen comes up: a supplier goes under, a recall happens, or there is an earthquake, and the company needs an additional $1 billion to fix the problem. Investors would likely get spooked and pass on the offering at Tesla’s time of need. Then the dominoes would begin to fall. Model 3 production would not scale, sales targets would be missed, and the company would likely run out of cash. Tesla’s assets would quickly get scooped up by Apple, Google, or a traditional automaker eager to get its hands on Tesla’s engineers and robotic manufacturing systems. Musk would likely be out, and the army of employees with a shared mission would be leaderless. This measurable risk of default is one of the reasons why Tesla’s bond raise is, by definition, a junk bond.

Debt makes sense and is cheaper. This is Tesla’s first debt deal, which raises the question: why use debt? Most tech companies use equity, mainly because they don’t have the option to use debt, given their lack of the hard assets that debt investors demand. Tesla is a tech company that makes a product through industrial assets, which should be financed through debt; doing so optimizes its capital structure. The debt was priced at 5.3% and is 2.5 times cheaper than equity. Separately, the rank-and-file Tesla employees we talk to believe shares of Tesla are undervalued, and that the company will be much bigger 10 years from now. You can call it a view of the future or blind optimism, but we call it a shared mission.
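As a back-of-envelope check on that comparison (our reading, assuming “2.5 times cheaper” means the implied cost of equity is roughly 2.5 times the 5.3% cost of debt):

```python
cost_of_debt = 0.053                        # 5.3% yield on the new notes
implied_cost_of_equity = 2.5 * cost_of_debt
print(f"Implied cost of equity: {implied_cost_of_equity:.2%}")  # ~13.25%
```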

Coming back for more money. Tesla will need to raise more money in the future. It’s a tech company with industrial assets growing 50% to 100% per year. We expect the company to raise more money to build a new production facility in 2020, as well as Gigafactories 3, 4, and 5. Tesla raises money in stages because it needs to grow capital and assets at a consistent, efficient pace. If Tesla grows its capital (cash) base faster than its asset base, it’s not cost efficient. On the other hand, if Tesla grows its asset base faster than its capital base, it risks running out of cash.

Tesla can fund debt service as long as investors believe in the story. Adding the $1.8 billion debt raise at 5.3% to our model, Tesla’s 2018 loss increases to $1.16 billion from $1.06 billion in our pre-debt model. While the company doesn’t yet have the cash flow to service the debt, we expect investors to stand behind the story and keep funding the company until 2020, when we expect Tesla to start making money.
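For context, a rough interest calculation (our arithmetic, not a company disclosure) lines up with that change in our model:

```python
principal = 1.8e9   # debt raised
coupon = 0.053      # 5.3% yield
annual_interest = principal * coupon
print(f"Annual interest expense: ~${annual_interest / 1e6:.0f}M")  # ~$95M
# Roughly matches the ~$100M increase in our modeled 2018 loss
# ($1.16B vs. $1.06B in the pre-debt model).
```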

Don’t confuse the term “junk bond” with the quality of the company and magnitude of the opportunity in front of Tesla. If successful in ramping the Model 3, Tesla’s sales will rise from $7 billion in 2016 to $22 billion in 2018, at which point a middle-income family will be able to afford an electric (and eventually autonomous) vehicle. Revenue from Model 3 will fund Model Y (expected in 2019), Tesla Semi (our best guess is 2022), and, most importantly, the master plan of accelerating the world’s transition to renewable energy.

Faceoff: Amazon Echo Show vs Google Home Part II

As a part of our continuing effort to understand the ways and speed at which artificial intelligence enters our everyday lives, we reexamined two home assistants based on a study we performed in February. The two most popular assistants, Google Home and Amazon Echo, were put to the test, this time substituting the Echo with the Echo Show, which includes a 7″ touchscreen.

Methodology. For this experiment, we asked the same 800 queries of both the Echo Show and Google Home, similar to our first study. We graded the queries on two metrics: first, did the device correctly understand what we asked? Second, did the device answer the query correctly? In our study, Amazon’s Echo Show understood 95.88% of the queries we asked and answered 53.57% of all queries correctly. Google Home understood 94.63% of the queries we asked, but was able to answer 65.25% correctly. Below, you can see the improvements that each home assistant made since our last set of queries.

One advantage the Amazon Echo Show has when it comes to understanding queries is that we have the ability to confirm the data using Amazon’s companion app. This app gives the user a live feed of what the Echo Show heard. Google Home does not offer a transcript of what its home assistant device picked up. Because of this, it was difficult to tell whether Google Home understood the queries but couldn’t answer them, or whether it truly had a harder time understanding queries. Since we were unable to see exactly how well Google Home understood our queries, we assumed that if Google Home responded that it was unable to perform a certain function, then it had understood the query correctly. For example, if we asked, “Hey Google, send a text to John” and received the response “Sorry, I can’t send texts yet,” then the query would be marked as understood correctly, but answered incorrectly.
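Below is a minimal sketch of the scoring logic we applied under that convention (illustrative only; the query set and raw device responses are not reproduced here):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Phrases we treated as "understood, but unable to answer"
CANT_DO_PHRASES = ("i can't", "i'm not able to", "sorry, i can't")


@dataclass
class GradedQuery:
    understood: bool  # did the device hear the query correctly?
    answered: bool    # did it give a correct answer?


def grade(heard_correctly: bool, response: str, answer_correct: bool) -> GradedQuery:
    """Apply the rule above: a "can't do that yet" reply counts as understood
    but not answered, even when no transcript is available."""
    if any(p in response.lower() for p in CANT_DO_PHRASES):
        return GradedQuery(understood=True, answered=False)
    return GradedQuery(understood=heard_correctly, answered=answer_correct)


def summarize(results: List[GradedQuery]) -> Tuple[float, float]:
    """Return (% understood, % answered correctly) across all graded queries."""
    n = len(results)
    pct_understood = 100.0 * sum(r.understood for r in results) / n
    pct_answered = 100.0 * sum(r.answered for r in results) / n
    return pct_understood, pct_answered
```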

Results. Both home assistants showed improved performance across the board. This time, the Google Home outperformed the Echo Show in total number of correct answers by nearly 12 percentage points, up from a 5-point performance gap in our February results. While each digital assistant has its strengths and weaknesses, Google Home outperformed its rival in 3 of the 5 query categories by a surprising margin. This is significant because it shows not only rapid improvement, but outperformance of Amazon, which has both a 2-year head start and a near 70% share of the home assistant market vs. Google’s 24% share, according to eMarketer.

Both Home Assistants Notably Improved in Navigation. The most dramatic increase for both assistants was in navigation. In February, over 90% of navigation questions were answered with: “I can’t help you with that.” Today, navigation is the best category for both the Google Home and the Echo Show, with the Google Home answering 92% of queries correctly, and the Echo Show answering 63% of queries correctly.

Echo Show: Screen adds to the experience, but software upgrades drive improvement. The Echo Show’s camera and touchscreen allow it to make video calls, monitor your security cameras, visually display some forms of information, and support new use cases with Alexa Skills that incorporate a screen. For instance, you can say, “Alexa, show me the trailer for the new Spiderman movie,” or scroll through recommendations for local pizzerias. While this adds to the user experience, the addition of the screen itself isn’t driving all of the improvement that we are seeing with Alexa. Instead, numerous software updates have expanded the ways Alexa can contribute to our daily lives. The Echo Show showed a nearly 20% improvement in its ability both to answer local questions (“Where can I find good barbecue?”) and to respond to commands (“Cancel my 2:00 p.m. meeting tomorrow”). Both of these changes are driven by software improvements, not the addition of the screen.

Google Home: Quickly adding features to pass Alexa. Google Home improved its local and commerce results by 46 percentage points and 24 percentage points, respectively. This represents a broadening of its skills along with high navigation, information, and local scores. Google Home also supports up to 6 different user accounts, meaning your whole family can get personalized responses when you say, “Okay Google, what’s on my calendar today?” Google Home will recognize your voice and read your upcoming events. Separately, commerce is an area that was previously dominated by Amazon, but Google is now at parity, mainly due to its superior ability to understand more diverse natural language. While Alexa still has a larger database of add-on skills, Google Home outperformed in our set of queries.

Future home assistant competition looks intense. While Amazon and Google are the current frontrunners in the home assistant race, they are facing competition from several notable future entrants:

  • Apple HomePod (expected December 2017)
  • Alibaba Tmall Genie (released August 8th, 2017)
  • Microsoft Invoke (expected Fall 2017)
  • Lenovo Smart Assistant (utilizing Alexa, expected Fall 2017)
  • HP Cortana Speaker
  • Samsung Vega

Persisting Problems of Home Assistants. While home assistants continue to make noticeable improvements, we still believe they are in the early innings of a platform that will become an important part of computing in the future. That being said, there are small, technologically reasonable improvements that we would like to see from these products. Our main complaint is the lack of integration with other devices to make use of information or take further action. In most cases, the fastest way to get information to a user is on a screen – it’s hardly convenient to have a list of 10 restaurant recommendations read to you one at a time. Instead, you should be able to call up information verbally and have it sent to your smartphone, computer screen, or television. The Echo is able to interact with your phone via the Alexa app. Google Home can control a Chromecast. Both are able to control certain smart home devices. There is clear progress being made on this front, but it remains a key obstacle to the devices’ effectiveness. Another shortcoming that persists is unsatisfactory natural language processing, an added barrier to widespread use. Both assistants were selective about how a question had to be phrased in order to answer it correctly. For example, Google Home will understand “What is a gigawatt?” but cannot process “Google gigawatt” or “Tell me what a gigawatt is.” In order for digital assistants to reach widespread adoption, users need to be able to interact with them seamlessly.

Overall, we were impressed by the improvement that took place in a few short months and remain optimistic that the technology will continue to advance at this pace going forward.  As new players enter the space and homes become more connected, the technology in these devices will be increasingly important in our everyday lives.  Later this year we will track the further progress made by the Echo and the Home, and compare them to some of the new entrants set to arrive by the end of 2017.

AGV Deep Dive: How Amazon’s 2012 Acquisition Sparked a $10B Market

Special thanks to Austin Bohlig for his work on this note. 

Amazon’s 2012 acquisition of Kiva Systems was the spark that ignited the Autonomous Guided Vehicle (AGV) industry, which we believe will represent a $10B market by 2025. We’ve taken a deep dive into the AGV market, where we identify the different use cases for AGVs, the market leaders, and the size of the opportunity, and highlight specific areas where we see the best investment potential.

We believe the aggregate robotics hardware market will grow 17% y/y to $24.5B in 2017, and by 2025 we believe the market will eclipse $73B. When including supporting software and services, we believe the total robotics market will be more than $200B in the next ten years. Many categories within the 5 robotics domains (industrial, commercial, domestic, military, and social entertainment) will flourish over this time frame.
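For reference, the compound annual growth rate implied by those hardware estimates (our arithmetic):

```python
hw_2017 = 24.5   # robotics hardware market, $B, 2017 estimate
hw_2025 = 73.0   # robotics hardware market, $B, 2025 estimate
years = 2025 - 2017
implied_cagr = (hw_2025 / hw_2017) ** (1 / years) - 1
print(f"Implied hardware CAGR, 2017-2025: {implied_cagr:.1%}")  # ~14.6%
```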

We are particularly excited about the impact three categories will have on the world: collaborative robots (co-bots), domestic robots (a.k.a. robot vacuums, mops, and lawn mowers), and Autonomous Guided Vehicles. While we have recently picked up positive data points in the co-bot and domestic robot markets, the AGV market is a bit harder to track due to the limited number of publicly traded companies in the space. However, based on the number of AGVs Amazon is deploying internally, as well as the amount of funding and M&A activity occurring in the space, we are convinced this sub-segment of the commercial robot market is inflecting.

What Is An Autonomous Guided Vehicle (AGV)?

AGVs are mobile robots used in manufacturing and other commercial industries to improve logistics efficiency by transporting goods and other materials autonomously. The major benefits of AGVs are twofold: 1) these robots do not require human interaction when deployed; and 2) AGVs do not require the supporting infrastructure (tracks, floor sensors, etc.) that is needed to operate legacy material handling equipment. Without the need for supporting infrastructure, these robots are more flexible and have a lower total cost of ownership. Advancements in Simultaneous Localization and Mapping (SLAM) software and computer vision allow these robots to understand their surrounding environment in real time, which lets them operate in variable surroundings and around people. Pricing on AGVs has come down significantly over the last 5 years, which has been a catalyst for the industry. Today, AGV pricing ranges from $35K to $50K (not including supporting software and services). Below we highlight a few examples of AGVs in the market today.

Amazon Sparked the AGV Industry

The AGV market flew under the radar throughout the early 2000s, but in 2012 the industry became one of the most talked-about sub-markets in the robotics space after Amazon acquired the AGV leader, Kiva Systems, for $775M. Amazon had no plans to sell these robots externally, using them only internally to improve logistics efficiency within its fulfillment centers, which created a significant supply/demand imbalance and a massive opportunity for other companies to enter the space. Since deploying Kiva robots, Amazon has publicly highlighted the positive impact the robots are having on productivity and efficiency. According to a 2017 Business Insider article, Amazon has deployed 15K mobile robots annually since 2014 and now has over 45K robots in operation throughout 20 fulfillment centers. These data points show the benefits of AGVs and validate that this market is real.

AGV Applications: Today Warehouses; Tomorrow Hospitals, Hotels, and Beyond

Today, most AGVs are deployed within warehouses and fulfillment centers to automate material handling and package logistics. Robots in these settings autonomously retrieve a shelf or bin and return to a packaging station where a human employee picks specific SKUs out of the bin. Or, more commonly, a human will follow the AGV around a warehouse, and the AGV will stop in front of specific spots where the human then places the desired product in a bin. While most AGV products must be purchased outright, there are a few companies capable of retrofitting legacy equipment with autonomous technologies and transforming it into AGVs. There are also a few companies taking automation to the next level by adding a robot arm to pick the desired object, taking humans completely out of the equation. While this is where the industry is heading, object recognition and grasping are two of the toughest challenges to solve in this space. Random pick-and-place is considered the “holy grail” of robotics, and it will take time for humans to be fully eliminated from the warehouse.
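As a purely illustrative sketch of the goods-to-person cycle described above, here is a simple state machine; the states and events are our own simplification, not any vendor’s control software.

```python
from enum import Enum, auto


class AGVState(Enum):
    IDLE = auto()              # waiting for an order assignment
    FETCHING_SHELF = auto()    # driving to and lifting the target shelf
    AT_PACK_STATION = auto()   # waiting while a human picks the SKU
    RETURNING_SHELF = auto()   # driving the shelf back to storage


TRANSITIONS = {
    (AGVState.IDLE, "order_assigned"): AGVState.FETCHING_SHELF,
    (AGVState.FETCHING_SHELF, "shelf_loaded"): AGVState.AT_PACK_STATION,
    (AGVState.AT_PACK_STATION, "pick_confirmed"): AGVState.RETURNING_SHELF,
    (AGVState.RETURNING_SHELF, "shelf_stored"): AGVState.IDLE,
}


def next_state(state: AGVState, event: str) -> AGVState:
    """Advance the goods-to-person cycle; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```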

While we believe AGV adoption within warehouses and fulfillment centers will be a key industry driver, the opportunity in other verticals will add meaningful tailwinds to this market. For example, AGVs are already being deployed in hospitals to autonomously transport food, prescriptions, and other medical supplies throughout a medical facility. In addition, manufacturers across industries are adopting these technologies because of their cost advantages and flexibility over legacy solutions. We also see a large opportunity for AGVs to be deployed in many commercial services settings, such as delivering products to rooms in a hotel, and eventually for companies such as Amazon to use AGVs to deliver packages autonomously.
