Apple’s Dream Car: Hardware + Software

Friday’s news of Apple’s permit to test self-driving cars in California should not have come as much of a surprise given the poorly kept secret of Project Titan. But the permit raises the question of whether Apple is building a car or just building software for a car.

Apple’s overarching philosophy has been to own both the hardware and software to create superior product experiences. Typically, this means owning the technology stack from end to end for a given product: Mac + macOS, iPod + iTunes, iPhone + iOS. Sometimes, however, the company stops short of owning the entire experience. The Apple TV requires a third-party television panel, although we would argue that is the equivalent of plugging your Mac into a third-party monitor; once the Apple TV is engaged, the experience is all Apple. Unlike a television, a car is not just a dumb panel. Modern vehicles require a tremendous number of sensors and other electronics that monitor the exterior and interior. In an ideal world, Apple’s car project would involve the company building the actual automobile, combining hardware and software. In reality, the complexity of designing and manufacturing a vehicle may push the company to integrate deeply with an automotive partner or partners in an effort more similar to the Apple TV: plugging Apple’s technology into an existing product.

From a software standpoint, building an automotive product beyond CarPlay is a no-brainer. Apple has one of the stronger stacks of components needed to build a great car OS:

  • Entertainment: iTunes/Apple Music
  • Assistance/Voice: Siri
  • Navigation/Local: Apple Maps
  • Image Processing/Recognition (Autonomous Driving): iPhone Camera
  • Security: Touch ID
  • Third-party Software: App Store (potential for OTA software updates)

Despite the obvious software advantages, the hardware side of the auto market (building the car end to end) poses challenges that Apple may not be able to overcome. First and foremost, Apple would likely need to find a manufacturing partner because it is a design company, not a manufacturing company. Foxconn and other manufacturing partners build iPhones, iPads, and Macs to Apple’s specs; a company like Magna Steyr may be capable of building a car to Apple’s design specs. Aside from finding a partner, we note that the typical automotive design process takes 5-7 years. Even on an accelerated timetable, Apple is likely multiple years away from a completed hardware design for a car. Finally, Tesla and Google have a multi-year lead on Apple in developing autonomous vehicle capabilities, including the associated testing mileage. We see autonomous driving capability as a key factor in auto desirability over the next five years.

Apple is almost certainly exploring how it could build an entire car, but as we learned the hard way with an Apple television, exploration does not mean a product comes to market. Apple is the best connected-device maker in the world, and the car is the biggest connected device in the world. There is a natural fit in the self-driving car market if Apple can figure out how to get past the challenges of making the hardware.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site, including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies, is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Can Anyone Catch Alexa?

Amazon’s third-party developer strategy has Alexa looking like the Gingerbread Man of digital assistants. After taking over CES in January with multiple Alexa-enabled devices, there’s been little pause in the growth of Alexa skills and Alexa-enabled devices. As of this writing, we count more than 12,000 Alexa skills (apps) and roughly 100 manufacturers with integrated Alexa IP across multiple smart home categories. And yesterday, Amazon opened its Echo voice processing IP to third-party developers, extending Alexa’s lead in smart devices. Given our belief that natural language processing is one of a few core technologies that will enable the screen-less future of computing, we think it’s important to track the pace of the key players in the field.

Source: Amazon

Skills growth impressive, but getting the basics right remains the key for scale.

One measure of Alexa’s increasing utility is the growth in skills that can be downloaded to Echo devices. In just the last three months, nearly 5,000 skills have been added to Alexa’s repertoire, which now tops 12,000. Roughly a third of Alexa’s skills are knowledge-based, from education apps like the Old Farmer’s Almanac to trivia categories like Lesser Known Star Wars Facts. Other growth categories include health and fitness, with skills like answers to common medical questions and workout suggestions. In addition, nearly 100 smart home skills are available today, an important catalyst for scaling the Alexa-enabled device ecosystem. It’s too early to tell how much scalable utility these skills bring to Alexa usage, particularly the nearly 500 knowledge/trivia skills. There is a fun factor to many of these skills, and voice access is seamless relative to paging through dozens of apps on your phone. However, Alexa needs to get better at answering basic information-related queries, which we believe will produce sustained utility and growth in Alexa-enabled devices. In a recent test, we found the Echo answered only 41% of information queries correctly.


Device integration pacing well ahead of Google Home.

We count close to 100 manufacturers across several categories that are compatible with Alexa IP today. Smart home coverage, perhaps the most seamless hands-free utility Alexa offers, continues to grow, from lighting to locks to thermostat control. Included on this list is Google’s own Nest device, a collaboration that began early last year; however, the relationship has been anything but seamless, as a preponderance of Nest skill reviews suggest. We wonder whether, after a year of collaboration, Alexa and Google will ever nest together. Contrasting Echo’s ~100 smart home partners with Google Home, we find a much shorter list: only around two dozen device manufacturers are integrated with Google’s Assistant IP. Amazon has taken advantage of Echo’s head start on Google Home by pushing integration across many manufacturers and platforms. Google Home will likely follow, but has a long road ahead.


AI’s Busted Bracket

The Loup Ventures NCAA bracket contest isn’t as hotly contested as we thought it would be. We entered Bing’s AI bracket into our pool, and it’s just as busted as the others. In fact, Bing’s bracket will finish at the bottom of our pool, in 7th place, regardless of the outcome of tonight’s game. We would like to think that we outsmarted AI, but the reality is that predicting the outcome of the NCAA tournament is more a matter of luck than skill. Bing’s performance doesn’t mean it’s broken, just unlucky this year.

Bing Predicts 2017 NCAA Basketball Bracket

To date, Bing has chosen 39 out of 67 games correctly, including the opening round. Bing was 2 of 4 in the opening round, 24 of 32 in the 1st Round, 9 of 16 in the 2nd Round, and 4 of 8 in the Sweet Sixteen, before going 0 for 4 in the Elite Eight and ending its chances at victory. If you look at Bing’s bracket now, it will tell a different story, because Bing re-picked winners after each round. Even with this adjustment, it picked only 47 of 66 games correctly leading into tonight’s game. In the adjusted rounds, Bing called Final Four weekend correctly, with Gonzaga and UNC as winners, and UNC picked to take home the crown.
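For the curious, the round-by-round tally above can be checked with a few lines of Python. Note that these five rounds cover 64 games; the 39-of-67 denominator in the text appears to also count the remaining Final Four and championship games, where Bing's picks were already eliminated.

```python
# Bing's round-by-round bracket results as reported above:
# (correct picks, games played) per round.
rounds = {
    "Opening round": (2, 4),
    "1st Round": (24, 32),
    "2nd Round": (9, 16),
    "Sweet Sixteen": (4, 8),
    "Elite Eight": (0, 4),
}

correct = sum(c for c, _ in rounds.values())
played = sum(g for _, g in rounds.values())

print(correct, played)                    # 39 64
print(round(correct / played * 100, 1))   # 60.9 (% accuracy through the Elite Eight)
```

Summing the rounds confirms the 39 correct picks cited in the text, a hit rate of about 61% on games actually scored before the Elite Eight wiped out Bing's remaining picks.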


AirPods Are More Important Than The Apple Watch

At this point, it might not even be that crazy to say it, but we think AirPods are going to be a bigger product for Apple than the Watch. After using AirPods for the past month, the Loup Ventures team is addicted. The seamless connecting and disconnecting with our phones, and the easy access to Siri, have meaningfully improved the way we work and consume content. AirPods are a classic example of Apple not doing something first, but doing it better. And they look cool. We think there are three reasons AirPods are more important than the Apple Watch.

AI-First World
Google has been talking about designing products for an AI-first world for about a year now. In our view, an AI-first world is about more natural interfaces for our screen-less future. Speech is an important component of the next interface. Siri, Alexa, Google Assistant, and Cortana are making rapid improvements in terms of voice commands they understand and what they can help us with.

We view AirPods as a natural extension of Siri that will encourage people to rely more on the voice assistant. As voice assistants become capable of deeper two-way conversations that convey more information to users, AirPods could replace a meaningful amount of interaction with the phone itself. By contrast, using Siri on the Apple Watch is less natural because it requires you to hold the Watch up to your face. Additionally, the screen is so small that interacting with it, and the information it conveys, is not much richer than a voice-based AI interface.


The Five Senses of Computing

The trend in computing toward more natural user interfaces is unmistakable. Graphical user interfaces have long been dominant, but machines driven by more intuitive inputs, like touch and voice, are now mainstream. Today, audio, motion, and even our thoughts are the basis for the most innovative computer-user interaction models, powered by advanced sensor technology. Each computing paradigm maps to one or more of the five human senses; exploring each sense gives us an indication of the direction in which technology is heading.

Sight – Graphical User Interface

The introduction of the graphical user interface (GUI) drove a step-function change in computers as productivity tools, because users could rely heavily on sight, our dominant sense. The GUI was then carried forward and built upon with the advent of touchscreen devices. The next frontier for visual user interfaces lies in virtual reality and augmented reality. Innovations within these themes will carry the GUI paradigm further still. VR and AR rely heavily on sight, but combine it more artfully with other inputs like audio, motion, and touch to create immersive interfaces.

Touch – Touchscreen Devices

PCs leveraged basic touch as a foundational input via the keyboard and the mouse. The iPhone then ushered in a computing era dominated by touch, rejecting the stylus in favor of, as Steve Jobs put it, “the best pointing device in the world” – our fingers. Haptics have pushed touchscreen technology further, making it more sensory, but phones and tablets fall well short of truly immersive computing. Bret Victor summarized the shortcomings of touchscreen devices in his 2011 piece, A Brief Rant on the Future of Interaction Design, which holds up well to this day.

More fully integrating our sense of touch will be critical for the user interfaces of the future. We think that haptic suits are a step we will take on the journey to full immersion, but the best way to trick the user into believing he or she is actually feeling something in VR is to manipulate the neurochemistry of the brain. This early field is known as neurohaptics.

Hearing – Digital Assistants & Hearables

Computers have been capable of understanding a limited human spoken vocabulary since the 1960s. By the 1990s, dictation software was available to the masses. Aside from limited audio feedback and rudimentary speech-to-text transcription, computers did not start widely leveraging sound as an interface until digital assistants began to be integrated into phones.

As digital assistants continue to improve, more and more users are integrating them into their daily routines. In our Robot Fear Index, we found that 43% of Americans had used a digital assistant in the last three months. However, our study of Amazon Echo vs. Google Home showed that Google Home answered just 39.1% of queries correctly vs. the Echo at 34.4%. Clearly we’re early in the transition to audio as a dominant input for computing.

Hearables, like Apple’s AirPods, represent the next step forward for audio as a user interface.
