The Five Senses of Computing

The trend in computing towards more natural user interfaces is unmistakable. Graphical user interfaces have long been dominant, but machines driven by more intuitive inputs, like touch and voice, are now mainstream. Today, audio, motion, and even our thoughts are the basis for the most innovative computer-user interaction models, powered by advanced sensor technology. Each computing paradigm maps to one or more of the five human senses; exploring each sense gives us an indication of the direction in which technology is heading.

Sight – Graphical User Interface

The introduction of the graphical user interface (GUI) drove a step function change in computers as productivity tools, because users could rely heavily on sight, our dominant sense. The GUI was then carried forward and built on with the advent of touchscreen devices. The next frontier for visual user interfaces lies in virtual reality (VR) and augmented reality (AR), which carry the GUI paradigm further forward. VR and AR rely heavily on sight, but combine it more artfully with other inputs like audio, motion, and touch to create immersive interfaces.

Touch – Touchscreen Devices

PCs leveraged basic touch as a foundational input via the keyboard and the mouse. The iPhone then ushered in a computing era dominated by touch, rejecting the stylus in favor of, as Steve Jobs put it, “the best pointing device in the world” – our fingers. Haptics have pushed touchscreen technology further, making it more sensory, but phones and tablets fall well short of truly immersive computing. Bret Victor summarized the shortcomings of touchscreen devices in his 2011 piece, A Brief Rant on the Future of Interaction Design, which holds up well to this day.

More fully integrating our sense of touch will be critical for the user interfaces of the future. We think that haptic suits are a step we will take on the journey to full immersion, but the best way to trick the user into believing he or she is actually feeling something in VR is to manipulate the neurochemistry of the brain. This early field is known as neurohaptics.

Hearing – Digital Assistants & Hearables

Computers have been capable of understanding a limited human spoken vocabulary since the 1960s. By the 1990s, dictation software was available to the masses. Aside from limited audio feedback and rudimentary speech-to-text transcription, computers did not start widely leveraging sound as an interface until digital assistants began to be integrated into phones.

As digital assistants continue to improve, more and more users are integrating them into their daily routines. In our Robot Fear Index, we found that 43% of Americans had used a digital assistant in the last three months. However, our study of Amazon Echo vs. Google Home showed that Google Home answered just 39.1% of queries correctly vs. the Echo at 34.4%. Clearly we’re early in the transition to audio as a dominant input for computing.

Hearables, like Apple’s AirPods, represent the next step forward for audio as a user interface.

AirPods: The First Mass Market Hearable

Apple’s AirPods are a step towards the future of computing. Starting with the move from the keyboard and mouse to the touchscreen, computing continues to move towards more intuitive user interfaces. Audio and motion capture are the basis for most innovative computer hardware today, including wearables. And we think that voice-controlled wearables, like AirPods, show a lot of promise. We call them hearables.

In order to look forward to the impact of AirPods on the future of computing, it helps to first look back at how sound has evolved as an interface. Computers have been capable of understanding a limited human vocabulary since the 1960s. By the 1990s, dictation software was available to the masses. Aside from limited audio feedback and rudimentary speech-to-text transcription, computers did not widely leverage sound as an input or an interface until natural language processing matured in the early 2000s.

As digital assistants continue to improve, more and more users are integrating them into their daily routines, and AirPods make that even more convenient. In our Robot Fear Index, we found that 43% of Americans had used a digital assistant in the last three months. However, our study of Amazon Echo vs. Google Home using 800 different everyday queries showed that Google Home answered just 39.1% of the queries correctly vs. the Echo at 34.4%. We’re early in the transition to audio as a dominant input for computing.

Hearables, like Apple’s AirPods, represent a giant leap forward for audio as a user interface. With AirPods, Siri is always available and your phone stays in your pocket – or at least comes out of your purse or pocket less frequently. AirPods can handle information requests, dictation, media control, and phone calls; meanwhile, quick glances at the Apple Watch on your wrist will suffice for most notifications. As Jason Calacanis declared just yesterday, “AirPods are the new smartphone.” We also believe audio as a UI is a key enabler of AR technology. AirPods may not be perfect, but they’ll get better, smarter, and easier to use. They are just the beginning for hearables and a new wave of computing.

We surveyed 55 AirPods users to better understand the state of the early hearables market. We were surprised that AirPods received a net promoter score (NPS) of -2: detractors, passives, and promoters each made up roughly a third of respondents, with slightly more detractors than promoters.
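
For reference, NPS is the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6), on a -100 to 100 scale. A minimal sketch in Python, using a hypothetical 18/18/19 promoter/passive/detractor split that is consistent with a -2 score across 55 respondents (the exact split in our survey may differ):

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey ratings.

    Promoters rate 9-10, detractors rate 0-6, the rest are passives.
    NPS = %promoters - %detractors, rounded to the nearest point.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical sample: 18 promoters, 18 passives, 19 detractors
sample = [10] * 18 + [8] * 18 + [4] * 19
print(nps(sample))  # -2
```

Because passives drop out of the numerator, a score near zero with few passives really does imply roughly equal thirds, as we saw.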

While an NPS of -2 could actually represent relative outperformance within the Bluetooth headphones category, AirPods clearly have some kinks to work out if hearables are going to perform more of our daily computing. Among detractors and passives, half identified the ear fit as the opportunity for improvement, not the software or functionality. So, we remain convinced that AirPods and other hearables will play a big role in shaping the future of how we interface with our devices and with each other.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

iPhone X Survey Shows Increased Intent to Upgrade & Interest in AR

Last week, we surveyed 502 people in the US regarding their interest in purchasing the iPhone X, expected to launch this fall. Among the 234 iPhone owners we surveyed, 23% intend to upgrade to iPhone X, which compares to 15% that intended to upgrade to an iPhone 7 prior to its launch. The iPhone 7 data point is based on a survey we conducted while at Piper Jaffray in July 2016.

We conducted our most recent survey roughly six months ahead of the iPhone X release. Last year, the survey was done two months ahead of the iPhone 7 release. As we get closer to launch, the rumor mill can positively or negatively impact excitement about buying. If the rumors live up to the early hype, interest will likely increase; if they don’t, interest may decrease. The higher intent to purchase the iPhone X likely also reflects the increasing popularity of the iPhone Upgrade Program. While it’s still early, interest in upgrading to the iPhone X appears to be meaningfully higher than upgrade interest was for the iPhone 7.

Interest in augmented reality (AR) features is high. Among those that plan to purchase the iPhone X, 26% indicate an interest in AR features compared to 16% among those that don’t plan to purchase the iPhone X. This could be driven by the simple correlation between early adopters of iPhones and early adopters of new technology, like AR.

Feedback Loup: College Panel

We recently hosted a panel of 8 college students from the University of Minnesota. The goal was to better understand how millennials think about social media, communications, video, VR, AR, the selfie generation, the future of work, and privacy. Here’s a summary of what we learned:

Text Is Dying

  • Quote: “Texting replaced email, and photos have replaced text messages.”
  • Message: Each of our panelists uses text less frequently, viewing it as a formal way to communicate. Snap, Facebook, and Instagram are the preferred communication platforms, with Facebook settings switched to photos only. The panelists also mentioned tech platforms promoting in-game messaging as a way to maintain usage.
  • Takeaway: Text is slowly giving way to photos and video, and is increasingly reserved for formal communication.

Fake News

  • Quote: “I like Snap for news.”
  • Message: Our panelists get their news from a wide variety of sources. 7 of 8 panelists are not concerned about fake news. Snap was the most popular way to aggregate news from traditional sources (3 of 8), followed by mainstream news outlets; e.g., CNN and WSJ.
  • Takeaway: Professional news is still respected but not paid for by these college students.

The Future of Work

  • Quote: “It’s scary. If we can’t have cashiers, truckers and fast food jobs... how will people live?”
  • Message: College students know they are entering a workforce that will change dramatically over the next 30 years. They have concerns about who’s going to control everything as resources become more concentrated. The University of Minnesota offers a class titled “Size of the Future” that addresses the risk of job loss to automation. The group did consider these changes when thinking about a career, with many drawn to a more technical education that feels more defensible. Ultimately, these students believe that the negative impact of lost jobs will be partially offset by the positive impact of new industries being formed.
  • Takeaway: College students understand that the workforce is changing. They envision social challenges emerging from displacement of workers with lower levels of education. But they believe a college education will ensure that their futures are safe.

Jump Ball for the OS of the Future

As we watched the run-up in SNAP shares since its IPO last week, we wondered how much of the move was based on potential revenue growth of more than 2x in 2017 versus investors buying into Snap’s long-term vision as a camera company. That vision suggests Snap wants to expand its position as an AR platform and compete for the jump ball of the next computing paradigm. That led to a bigger question: who is best positioned to win in AR and own the OS of the future? Here we weigh in on who’s most likely to grab that jump ball.

Counting Down to Tip Off

One of our core beliefs is that every 10-15 years a new computing paradigm emerges that changes the way humans interface with technology. Each paradigm shift creates an opportunity to own a new OS layer. In the late 80s it was the PC, ultimately powered by Windows, Mac, and Linux. In the late 90s it was the Internet; we would argue that Google and Amazon provided the closest thing to an OS for the web. In the mid 2000s it was mobile, which is owned by iOS and Android. It’s obvious that the biggest value lies in owning that OS layer, as evidenced by the market caps of Apple ($730b), Google ($575b), and Microsoft ($490b).

What We Know About The AR OS Layer

We know that over the next few years, most AR functionality will happen through existing mobile OSes (iOS and Android); however, we also know that AR wearables – in order to drive a true paradigm shift – will need their own OS. Given what we saw in the PC, Internet, and mobile eras, it seems likely that there will be 2-3 winners in the AR OS layer.

A small field of winners is likely because developers and hardware manufacturers need reach and scale to maximize profits, so they will only build for the biggest audiences. If there are more than three OSes, that reach and scale will be difficult to achieve.

We also know that there will likely be at least one OS solution that is closed and one that is open. This is another commonality across the PC, Internet, and mobile. Mac, Amazon, and iOS represent closed or integrated systems: the end-to-end experience is largely controlled by one player that allows some restricted development on the platform. Windows, Google, and Android represent open systems that allow broader utilization by third parties. Closed systems tend to be first to market, and the tight integration of software and hardware offers a user-friendly experience that promotes early adoption. Open systems tend to follow, enabling third-party developers to innovate on hardware or software features while utilizing a standard, consumer-adopted OS. This means that hardware tends to become a commodity and, while there are definite challenges around miniaturization and battery today, we expect AR wearables to go the same way.

AR Is A Culmination Of Several Core Disciplines

Another core belief we hold is that the future of computing must build on prior technologies while introducing revolutionary changes; the AR OS will be no different. The winners of the AR OS layer will combine camera hardware with an OS that uses computer vision to map the real world, augments it with a layer of information, and presents it through a user-friendly interface. The OS will also need to incorporate artificial intelligence, including the ability to interpret and interact with user speech as well as environmental sounds. But the camera and UX design are just two of the more visible pieces of the AR stack. Supporting those elements are maps with points of interest, organized informational data, social data, a developer community, content, and payments. Unsurprisingly, that definition of the AR tech stack puts established companies like Google, Apple, Microsoft, Facebook, and Amazon in the best position to be AR platform winners, because they already have many of the big pieces in place.

Below is a scorecard that ranks many of the major players in AR in each of these core disciplines. We note that low scores in the table represent categories of potential M&A for the corresponding company.
