The Five Senses of Computing

The trend in computing towards more natural user interfaces is unmistakable. Graphical user interfaces have long been dominant, but machines driven by more intuitive inputs, like touch and voice, are now mainstream. Today, audio, motion, and even our thoughts are the basis for the most innovative computer-user interaction models, powered by advanced sensor technology. Each computing paradigm maps to one or more of the five human senses; exploring each sense gives us an indication of the direction in which technology is heading.

Sight – Graphical User Interface

The introduction of the graphical user interface (GUI) drove a step function change in computers as productivity tools, because users could rely heavily on sight, our dominant sense. The GUI was then carried forward and built on with the advent of touchscreen devices. The next frontier for visual user interfaces lies in virtual reality and augmented reality. Innovations within these themes will further carry forward the GUI paradigm. VR and AR rely heavily on sight, but combine it more artfully with other inputs like audio, motion, and touch to create immersive interfaces.

Touch – Touchscreen Devices

PCs leveraged basic touch as a foundational input via the keyboard and the mouse. The iPhone then ushered in a computing era dominated by touch, rejecting the stylus in favor of, as Steve Jobs put it, “the best pointing device in the world” – our fingers. Haptics have pushed touchscreen technology further, making it more sensory, but phones and tablets fall well short of truly immersive computing. Bret Victor summarized the shortcomings of touchscreen devices in his 2011 piece, A Brief Rant on the Future of Interaction Design, which holds up well to this day.

More fully integrating our sense of touch will be critical for the user interfaces of the future. We think that haptic suits are a step we will take on the journey to full immersion, but the best way to trick the user into believing he or she is actually feeling something in VR is to manipulate the neurochemistry of the brain. This early field is known as neurohaptics.

Hearing – Digital Assistants & Hearables

Computers have been capable of understanding a limited human spoken vocabulary since the 1960s. By the 1990s, dictation software was available to the masses. Aside from limited audio feedback and rudimentary speech-to-text transcription, computers did not start widely leveraging sound as an interface until digital assistants began to be integrated into phones.

As digital assistants continue to improve, more and more users are integrating them into their daily routines. In our Robot Fear Index, we found that 43% of Americans had used a digital assistant in the last three months. However, our study of Amazon Echo vs. Google Home showed that Google Home answered only 39.1% of queries correctly, versus 34.4% for the Echo. Clearly, we’re early in the transition to audio as a dominant input for computing.

Hearables, like Apple’s AirPods, represent the next step forward for audio as a user interface.

Read More

Feedback Loup: College Panel

We recently hosted a panel of 8 college students from the University of Minnesota. The goal was to better understand how millennials think about social media, communications, video, VR, AR, the selfie generation, the future of work, and privacy. Here’s a summary of what we learned:

Text Is Dying

  • Quote: “Texting replaced email, and photos have replaced text messages.”
  • Message: Text is being used less frequently by each of our panelists. They view text as a formal way to communicate. Snap, Facebook and Instagram are the preferred communication platforms, with some panelists switching their Facebook settings to photos only. The panelists mentioned tech platforms promoting messaging within games as a way to maintain usage.
  • Takeaway: Text is slowly giving way to photos and video, and is increasingly viewed as a formal way to communicate.

Fake News

  • Quote: “I like Snap for news.”
  • Message: Our panelists get their news from a wide variety of sources. 7 of 8 panelists are not concerned about fake news. Snap was the most popular way to aggregate news from traditional sources (3 of 8), followed by mainstream news outlets like CNN and WSJ.
  • Takeaway: Professional news is still respected but not paid for by these college students.

The Future of Work

  • Quote: “It’s scary. If we can’t have cashiers, truckers and fast food jobs. . . how will people live?”
  • Message: College students know they are entering a workforce that will change dramatically over the next 30 years. They have concerns about who will control everything as resources become more concentrated. The University of Minnesota offers a class titled “Size of the Future” that addresses the risk of job loss to automation. The group did consider these changes when thinking about a career, with increased interest in a more technical education that feels more defensible. Ultimately, these students believe that the negative impact of lost jobs will be partially offset by the positive impact of new industries being formed.
  • Takeaway: College students understand that the workforce is changing. They envision social challenges emerging from displacement of workers with lower levels of education. But they believe a college education will ensure that their futures are safe.

Read More

Jump Ball for the OS of the Future

As we watched the run-up in SNAP shares since its IPO last week, we wondered how much of the move was based on potential revenue growth of more than 2x in 2017 and how much on investors buying into Snap’s long-term vision as a camera company. That vision suggests Snap wants to expand its position as an AR platform and compete for the jump ball of the next computing paradigm. That led to a bigger question: who is best positioned to win in AR and own the OS of the future? Here we weigh in on who’s most likely to grab that jump ball.

Counting Down to Tip Off

One of our core beliefs is that every 10-15 years a new computing paradigm emerges that changes the way humans interface with technology. Each paradigm shift creates an opportunity to own a new OS layer. In the late 80s it was the PC, ultimately powered by Windows, Mac and Linux. In the late 90s it was the Internet. We would argue that Google and Amazon provided the closest thing to an OS for the web. In the mid 2000s it was mobile, which is owned by iOS and Android. It’s obvious that the biggest value lies in owning that OS layer, as evidenced by the market caps of Apple ($730b), Google ($575b), and Microsoft ($490b).

What We Know About The AR OS Layer

We know that over the next few years, most AR functionality will happen through existing mobile OSes (iOS and Android); however, we also know that AR wearables – in order to drive a true paradigm shift – will need their own OS. Given what we saw in the PC, Internet, and mobile eras, it seems likely that there will be 2-3 winners at the AR OS layer.

A small number of winners is necessary because developers and hardware manufacturers need reach and scale to maximize profits, so they will only build for the biggest audiences. If there are more than 3 OSes, reach and scale will be difficult to achieve.

We also know that there will likely be at least one OS solution that is closed and one that is open. This is another commonality across the PC, Internet, and mobile eras. Mac, Amazon, and iOS represent closed or integrated systems: the end-to-end experience is largely controlled by one player that allows some restricted development on the platform. Windows, Google, and Android represent open systems that allow broader utilization by third parties. Closed systems tend to be first to market, and the tight integration of software and hardware offers a user-friendly experience that promotes early adoption. Open systems tend to follow, enabling third-party developers to innovate on hardware or software features while utilizing a standard, consumer-adopted OS. This means that hardware tends to become a commodity and, while there are definite challenges around miniaturization and battery life today, we expect AR wearables to go the same way.

AR Is A Culmination Of Several Core Disciplines

Another core belief we hold is that the future of computing must build on prior technologies while introducing revolutionary changes; the AR OS will be no different. The winners of the AR OS layer will combine camera hardware with an OS that uses computer vision to map the real world, augments it with a layer of information, and presents it all in a user-friendly interface. The OS will also need to incorporate artificial intelligence, including the ability to interpret and interact with user speech as well as environmental sounds. But camera and UX design are just two of the more visible pieces of the AR stack. Supporting those elements are maps with points of interest, organized informational data, social data, a developer community, content, and payments. Unsurprisingly, that definition of the AR tech stack puts established companies like Google, Apple, Microsoft, Facebook, and Amazon in the best position to be AR platform winners because they already have many of the big pieces in place.

Below is a scorecard that ranks many of the major players in AR in each of these core disciplines. We note that low scores in the table represent categories of potential M&A for the corresponding company.

Read More

Apple Working With Tesla Is A Fairy Tale

There’s been a lot of talk about Apple buying Tesla, but what if Apple simply made a $10 billion equity investment in the company instead? It sounds so good — Apple working with Tesla. In theory, it would make our lives so much better. Imagine all of the things you love about your iPhone, perfectly integrated with all the things Tesla owners rave about. The two tech giants could take over the auto industry over the next 20 years as consumers embrace electric vehicles and automation. Unfortunately, an investment from Apple, let alone an acquisition, would be hard to pull off. At the end of the day, that might be better for consumers, if not investors.

Before we discuss why it won’t happen, let’s go over why it sounds so good.

For Tesla. A $10 billion cash infusion would all but eliminate any current or future cash problems for the company. While a $10 billion equity investment would cause about 20% dilution today, it would likely have a long-term benefit on Tesla stock given the removal of the cash question. Aside from the cash, we believe Apple could and would want to provide resources from their world-class hardware, software, and AI teams to make Tesla’s entertainment system and Autopilot better. The investment would likely remove Apple as a potential direct or indirect competitor. Additionally, Tesla’s Model 3 could be showcased in Apple’s 490 retail stores in 20 countries.
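The roughly 20% dilution figure is simple post-money math. As a quick sketch (the ~$40B pre-investment market cap is our assumption for early 2017, not a figure from the piece):

```python
# Back-of-the-envelope post-money dilution for a hypothetical $10B equity investment.
# The ~$40B pre-investment market cap is an assumption (early-2017 ballpark).
pre_money = 40e9    # Tesla equity value before the investment, USD
investment = 10e9   # Apple's hypothetical cash infusion, USD

post_money = pre_money + investment
dilution = investment / post_money  # stake the new shares would represent

print(f"Dilution to existing holders: {dilution:.0%}")  # → 20%
```

New shares bought with $10B of a $50B post-money company represent a 20% stake, which is the dilution existing holders would absorb.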

For Apple. Investors would feel like they are actually doing something with their cash, which should be a positive for AAPL’s multiple. Apple would be investing in a company that has the potential to be multiple times bigger over the next decade. Rather than spending on the impossible – building its own car to try to catch Tesla – Apple would be investing in making the leader even better. The impact of AI and robotics on the automotive sector is one of the next mega tech trends, and Apple would have a pole position within that theme.

Read More

Tesla’s Bedrock In AI & Robotics Will Transform The Industry & Our Lives

I always wanted to cover Tesla, but as an internet analyst, I found the stock outside of my coverage space. Despite this, I continued to study the company and ultimately invested because I believe that Tesla is not a car company, but a consumer electronics company that thinks like an internet company. With a bedrock in AI and robotics, Tesla is one of the best-positioned companies to transform our lives over the next 20 years. We think Tesla is on par with Amazon when it comes to a reckless pursuit to shape the future, which we believe will reward investors over the long run.

The Street Is Understandably Focused On The Wrong Metric

Tesla reports December quarter results on Wednesday (Feb. 22). Given the 48% rise in TSLA shares over the past 3 months, now trading near an all-time high, it’s understandable why investors are nervous going into the print. After all, good news is priced in, as news of the earlier-than-expected Fremont production retooling has stoked Model 3 production expectations. As of our last check, buy-side investors expect 17k to 25k Model 3 shipments in 2017. That’s a big number when you consider that in 2016 Tesla delivered 76k vehicles (all models) to customers. Investors will be zeroed in on Elon Musk’s comments on the earnings call about production of the Model 3 in 2017. His comments may cause volatility in the stock short term, but they are irrelevant in the long run.

It’s Not About How Many Model 3s Tesla Sells In 2017

As venture capitalists, we have the luxury of thinking about themes over a very long horizon. With that perspective, Wednesday’s Tesla earnings report is a non-event. What’s more important is that Tesla makes the best car in the world, amplified by AI and robotics. That focus will keep competitors in check, allowing the company to reach scale and ride the next tech mega wave as our lives are quickly transformed (over the next 20 years) into an electric, automated existence.

Artificial Intelligence

Tesla’s obvious AI play is Autopilot for autonomous vehicles, with a less well-known AI push in manufacturing. We know that the company is pushing boundaries to gain data to improve its driving AI, with a goal of being first to market with an L4-compatible vehicle (one in which the automated system can control the vehicle in all but a few environments).

The first to market will have a measurable advantage because road data equates to smarter AI and safer cars. Google’s Waymo has driven over 2 million autonomous miles, but comparisons with other automotive companies are difficult given that some companies include simulation miles. Last October, Elon Musk reported Tesla had driven 222 million cumulative Autopilot miles, but those miles are not comparable to the fully autonomous number that Waymo reports. It’s unlikely that Waymo will have a commercially available vehicle in 2019, but likely that Tesla models sold in 2019 will be L4 compatible. Traditional automotive is even further behind, with BMW, Audi, Mercedes, Ford and GM likely shipping L3 autos in 2019. Note that L5 is the highest level of autonomy, for vehicles capable of handling all aspects of dynamic driving under all roadway and environmental conditions that can be managed by a human driver, followed by L4, L3 and so on. This raises the question: why would anyone interested in an autonomous car buy an L3-compatible vehicle if it was priced similarly to an L4 vehicle? We don’t know how Tesla’s Autopilot AI stacks up against the market, but based on comments from our industry contacts, Tesla sees AI as one of its two core competencies and is structuring its future around it.

Read More