Seeing What We Say: Improving Siri And Alexa

We’ve been talking a lot about digital assistants lately. They were a big theme at CES, and a recent survey of ours showed that US consumers rank digital assistants as the fourth most frustrating tech product, behind glitchy devices, poor Internet service, and automated telephone systems. Here’s a look at how digital assistants might improve in the future.

Humans are non-verbal communicators by nature. Almost 60% of human-to-human communication happens through body language, yet our current natural language interfaces use only voice. That means robot assistants miss roughly 60% of the information we send them. How often do you say thanks to Siri or Alexa after you get a right answer? How often do you curse at them when you get a wrong one? Now, how often do you nod your head when Siri or Alexa gives you a right answer? How often do you scrunch your face up in anger when they give you a wrong one?

The most obvious answer to this problem would seem to be some sort of computer vision implementation. That would solve part of the body language problem, since the digital assistant could see the obvious gestures we make in response to its answers, but that’s not all the device would need to know. The assistant would also need to know who’s talking when there are multiple people in a room, and what the speaker’s facial expressions mean in the context of the answer. You might frown at bad news even if it was the correct answer to your question. You might smile at the hilarity of a wrong answer. That means the robot needs to build a model of which expressions a given person associates with good, bad, or some other emotion, and that model must be specific to the user. Good and bad are subjective to the individual; politics is a dangerous example.
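
To make the idea concrete, here is a minimal sketch of what a user-specific mapping from facial expressions to feedback might look like. The expression labels, the `UserFeedbackModel` class, and the per-user calibration are illustrative assumptions, not a description of how any shipping assistant actually works; the computer vision step that detects the expression is assumed and left out entirely.

```python
# Hypothetical sketch: interpreting a user's facial expression as feedback on an
# assistant's answer. The expression detector (a computer vision model) is assumed.

from collections import defaultdict

class UserFeedbackModel:
    """Learns, per user, which expressions tend to follow good vs. bad answers."""

    def __init__(self):
        # counts[expression] = [times it followed a confirmed-good answer,
        #                       times it followed a confirmed-bad answer]
        self.counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed

    def update(self, expression: str, answer_was_good: bool) -> None:
        """Record an observation once we later learn whether the answer was good."""
        self.counts[expression][0 if answer_was_good else 1] += 1

    def prob_answer_was_good(self, expression: str) -> float:
        """Estimate P(answer was good | this user's expression)."""
        good, bad = self.counts[expression]
        return good / (good + bad)

# The same smile can mean different things for different people,
# so each user would get their own model.
model = UserFeedbackModel()
model.update("frown", answer_was_good=True)   # frowning at correct-but-bad news
model.update("smile", answer_was_good=False)  # smiling at an absurd wrong answer
print(model.prob_answer_was_good("frown"))
```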

Another potential solution to help digital assistants interpret body language might be connecting them with a sensor on your body. Sensors could help address one issue with computer vision solutions: we aren’t always in the robot’s line of sight. Some of these sensors are already built into watches and advanced fitness trackers and detect biomarkers like changes in heart rate or blood pressure. A rise in blood pressure might signify anger at a wrong response. A decrease in body temperature may imply sadness.
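
In the same spirit, a wearable-based signal might look something like the sketch below. The thresholds, parameter names, and the mapping from biomarker changes to emotions are made-up assumptions for illustration only; a real system would need per-user calibration.

```python
# Hypothetical sketch: inferring a reaction from wearable biomarker deltas
# measured just after the assistant answers. Thresholds are illustrative only.

def infer_reaction(delta_heart_rate_bpm: float,
                   delta_blood_pressure_mmhg: float,
                   delta_skin_temp_c: float) -> str:
    """Very rough mapping from biomarker changes to a possible emotional reaction."""
    if delta_blood_pressure_mmhg > 10 or delta_heart_rate_bpm > 15:
        return "agitated"   # e.g., anger at a wrong response
    if delta_skin_temp_c < -0.3:
        return "subdued"    # e.g., sadness
    return "neutral"

print(infer_reaction(delta_heart_rate_bpm=18,
                     delta_blood_pressure_mmhg=4,
                     delta_skin_temp_c=0.0))  # -> "agitated"
```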

Both of these solutions raise the privacy question: are we comfortable allowing our robot assistants to see us and our physical data? Privacy tends to be a point of contention for every evolution of technology. It was an issue for Facebook as it grew to be indispensable for over a billion users. It was an issue for Google Glass as the most recognizable wearable with a camera. Our belief is that we already live in a post-privacy world. One of the key trade-offs we make for the convenience of many technologies we use today is that we give up privacy. We trade privacy for the benefit of connecting with people on social platforms. We trade privacy for better recommendations in search and shopping. Yes, there will be some noise about the intrusion on privacy that comes with incorporating body language and body data into digital assistants, but we expect that concern to go about as far as it did with Facebook.

The bottom line is this: adding the ability to read and interpret body language would result in a step-function change in our experiences with digital assistants. Incorporating body language is a crucial step in being able to create robots that can truly understand humans, allowing them to perform complex, human-like tasks. We view this as a complex and extremely interesting AI and robotics opportunity and a problem we need to solve as we pursue The Future Perfect.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.  

Apple’s Services Biz Starts 2017 with a Blowout Day

Apple’s news release on App Store sales earlier today implies the App Store generated $78M in gross sales per day in 2016. We would have estimated that New Year’s Day sales would gross about 30% more than an average day, or about $100M. The $240M in App Store sales Apple saw on 1/1/17 makes it a blowout day.
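
The back-of-envelope math behind those figures, using only the $78M per-day and $240M numbers above plus our own assumed ~30% holiday uplift:

```python
# Back-of-envelope math behind the App Store figures above.
avg_daily_gross_2016 = 78e6   # implied by Apple's 2016 App Store release
holiday_uplift = 1.30         # our assumption: New Year's Day ~30% above an average day
expected_new_years_day = avg_daily_gross_2016 * holiday_uplift
actual_new_years_day = 240e6  # reported gross sales on 1/1/17

print(f"Expected 1/1/17 gross: ${expected_new_years_day / 1e6:.0f}M")              # ~$101M
print(f"Actual vs. average day: {actual_new_years_day / avg_daily_gross_2016:.1f}x")  # ~3.1x
```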

We estimate that the App Store accounts for more than 65% of Apple’s gross Services revenue. Given the significance of the App Store to Apple’s Services business coupled with today’s announcement, we believe our previous expectation of 15% y/y Services revenue growth in 2017 is conservative. The actual number may be closer to 20% y/y growth in Services revenue.
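
One way to see why faster App Store growth pulls the whole Services line up is a simple weighted-growth decomposition. The 65% App Store share is our estimate from above; the growth rates in the example are assumptions for illustration, not forecasts.

```python
# Illustrative decomposition: Services growth as a weighted blend of App Store
# growth and growth in the rest of Services. Growth rates below are assumed.
app_store_share = 0.65  # our estimate of the App Store's share of gross Services revenue

def services_growth(app_store_growth: float, other_services_growth: float) -> float:
    return app_store_share * app_store_growth + (1 - app_store_share) * other_services_growth

# If the App Store were to grow ~25% and the rest of Services ~10%,
# blended Services growth lands close to 20%.
print(f"{services_growth(0.25, 0.10):.1%}")
```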

The 2016 App Store numbers and the New Year’s Day App Store sales underscore how quickly Apple is becoming a Services business. We previously shared our thoughts on Apple reinventing itself as a Services business here. In short: the transition to Services is important as new platforms like AR and VR emerge and transform Apple’s existing mobile device businesses.

In the Sep-16 quarter, Services accounted for 13% of revenue. We think that over the next 5 years, Services can grow to be 30% of Apple’s revenue, given new services that will be required for emerging platforms like AR and VR. Meanwhile, we expect hardware revenue to be flat to up slightly over the next five years, again, underscoring the importance of the Services business as new computing platforms emerge.
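
Those two assumptions, roughly flat hardware and Services reaching 30% of revenue, imply a specific Services growth rate. A quick sketch of the arithmetic, simplifying “flat to up slightly” to flat:

```python
# Back-of-envelope: if hardware stays roughly flat and Services grows from 13%
# of revenue to 30% over five years, what Services growth rate does that imply?
services_share_now = 0.13
services_share_in_5y = 0.30
hardware_now = 1 - services_share_now  # normalize today's total revenue to 1.0

# With hardware flat, Services must reach: share / (1 - share) * hardware
services_in_5y = services_share_in_5y / (1 - services_share_in_5y) * hardware_now
implied_cagr = (services_in_5y / services_share_now) ** (1 / 5) - 1
print(f"Implied Services CAGR: {implied_cagr:.1%}")  # ~23%
```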

Digital Assistants: The Tech We Love to Hate

Alexa – you can’t live with her, you can’t live without her. Digital assistants are some of the most widely used and convenient technologies, but also some of the most frustrating tech we use. We can confirm that Alexa is “everywhere” at CES, now being integrated into third-party hardware. And Siri is undoubtedly the most ubiquitous digital assistant, even without an official CES presence. Google Assistant, the technology driving Google Home, has also expanded its reach with several new integrations announced at CES.

We’ve seen how hard it is to use CES as a gauge for the new technologies we’ll be using in five years, or even next year, so we collected responses from 355 consumers across the US about what technologies they find most frustrating today. Unprompted (in an open-ended response), here’s what they had to say:

Slow and glitchy devices (mainly phones), spotty internet connections, and the well-loathed automated phone systems lead the way. It’s not surprising that our phones frustrate us the most, given how much we use them. However, we were surprised to see that digital assistants (Siri, Alexa, and Google Home) were the fourth most frustrating technology for consumers. More than twice as many people named digital assistants as named credit card chips or printers!

Maybe we shouldn’t be surprised. Digital assistants benefit disproportionately from being released early in the AI learning curve, because the products learn from consumer use. Google and Amazon are more comfortable releasing early tech, and even Apple chose to release Siri before she was perfect. The space is too interesting to sit out, and the early part of the learning curve is perhaps the most important time for an AI to start learning. But it is clear that the technology is not yet where it needs to be for the average tech consumer, who expects products to just work. We love what we’re seeing in natural language processing and the advancements Apple, Amazon, and Google are each making, but there’s clearly an opportunity for these systems to improve before people fire their assistants.
