Faceoff: Amazon Echo Show vs Google Home Part II

As part of our continuing effort to understand the ways in which, and the speed at which, artificial intelligence is entering our everyday lives, we revisited the two home assistants from a study we performed in February. The two most popular assistants, Google Home and the Amazon Echo, were put to the test again, this time substituting the Echo Show, which adds a 7″ touchscreen, for the Echo.

Methodology. For this experiment, we asked the same 800 queries of both the Echo Show and the Google Home, as in our first study. We graded each query on two metrics: First, did the device correctly understand what we asked? Second, did the device answer the query correctly? In our study, Amazon’s Echo Show understood 95.88% of the queries we asked and answered 53.57% of all queries correctly. Google Home understood 94.63% of the queries we asked, but was able to answer 65.25% correctly. Below, you can see the improvements that each home assistant made since our last set of queries.

One advantage the Amazon Echo Show has when it comes to measuring understanding is that we can confirm the data using Amazon’s companion app, which gives the user a live transcript of what the Echo Show heard. Google Home offers no such transcript of what its home assistant device picked up. Because of this, it was difficult to tell whether Google Home understood a query but couldn’t answer it, or truly had a harder time understanding queries. Since we were unable to see exactly how well Google Home understood our queries, we assumed that if Google Home responded that it was unable to perform a certain function, it had understood the query correctly. For example, if we asked, “Hey Google, send a text to John” and received the response “Sorry, I can’t send texts yet,” the query was marked as understood correctly, but answered incorrectly.
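To make the grading scheme concrete, here is a minimal sketch, in Python, of how a set of graded queries could be tallied into the two metrics above. The data structures and names are our own illustration, not tooling from the study; the sketch simply encodes the two booleans per query, including the rule that an “I can’t do that yet” reply counts as understood but not answered.

    from dataclasses import dataclass

    @dataclass
    class QueryResult:
        # One graded query, scored on the two metrics described above.
        query: str
        understood: bool  # Did the device correctly hear/parse the query?
        answered: bool    # Did it answer correctly? Never True if not understood.

    def grade_cant_do_reply(query):
        # Our rule for replies like "Sorry, I can't send texts yet":
        # the query counts as understood, but not answered.
        return QueryResult(query=query, understood=True, answered=False)

    def summarize(results):
        # Percent understood and percent answered across all graded queries.
        n = len(results)
        pct_understood = 100.0 * sum(r.understood for r in results) / n
        pct_answered = 100.0 * sum(r.answered for r in results) / n
        return pct_understood, pct_answered

    results = [grade_cant_do_reply("Hey Google, send a text to John")]
    print(summarize(results))  # -> (100.0, 0.0)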

Results. Both home assistants improved across the board. This time the Google Home outperformed the Echo Show in total correct answers by nearly 12 percentage points, up from a 5-point performance gap in our February results. While each digital assistant has its strengths and weaknesses, Google Home outperformed its rival in 3 of the 5 query categories by a surprising margin. This is significant because it shows not only rapid improvement, but outperformance of Amazon, which has both a 2-year head start and nearly 70% of the home assistant market vs. Google’s 24% share, according to eMarketer.

Both Home Assistants Notably Improved in Navigation. The most dramatic increase for both assistants was in navigation. In February, over 90% of navigation questions were answered with: “I can’t help you with that.” Today, navigation is the best category for both the Google Home and the Echo Show, with the Google Home answering 92% of queries correctly, and the Echo Show answering 63% of queries correctly.

Echo Show: Screen adds to experience, but software upgrades drive improvement. The Echo Show’s camera and touchscreen allow it to make video calls, monitor your security cameras, and visually display some forms of information, and they introduce new use cases through Alexa Skills that incorporate a screen. For instance, you can say, “Alexa, show me the trailer for the new Spiderman movie,” or scroll through recommendations for local pizzerias. While this adds to the user experience, the screen itself isn’t driving all of the improvement we are seeing with Alexa. Instead, numerous software updates have expanded the ways Alexa can contribute to our daily lives. The Echo Show improved by nearly 20 percentage points in its ability both to answer local questions (“Where can I find good barbecue?”) and to respond to commands (“Cancel my 2:00 p.m. meeting tomorrow”). Both of these gains are driven by software improvements, not the addition of the screen.

Google Home: Quickly adding features to pass Alexa. Google Home improved its local and commerce results by 46 and 24 percentage points, respectively, representing a broadening of its skills to go along with its high navigation, information, and local scores. Google Home also supports up to 6 different user accounts, so your whole family can get personalized responses: say, “Okay Google, what’s on my calendar today?” and Google Home will recognize your voice and read your upcoming events. Separately, commerce is an area that was previously dominated by Amazon, but Google is now at parity, mainly due to its superior ability to understand more diverse natural language. While Alexa still has a larger catalog of add-on skills, Google Home outperformed on our set of queries.

Future home assistant competition looks intense. While Amazon and Google are the current frontrunners in the home assistant race, they face competition from several notable entrants, recent and upcoming:

  • Apple HomePod (expected December 2017)
  • Alibaba Tmall Genie (released August 8th, 2017)
  • Microsoft Invoke (expected Fall 2017)
  • Lenovo Smart Assistant (utilizing Alexa, expected Fall 2017)
  • HP Cortana Speaker
  • Samsung Vega

Persisting Problems of Home Assistants. While home assistants continue to make noticeable improvements, we still believe they are in the early innings of a platform that will become an important part of computing in the future. That said, there are small, technologically reasonable improvements we would like to see from these products. Our main complaint is the lack of integration with other devices to make use of information or take further action. In most cases, the fastest way to get information to a user is on a screen; it’s hardly convenient to have a list of 10 restaurant recommendations read to you one at a time. Instead, you should be able to call up information verbally and have it sent to your smartphone, computer screen, or television. The Echo can interact with your phone via the Alexa app, Google Home can control a Chromecast, and both can control certain smart home devices. There is clear progress being made on this front, but it remains a key obstacle to the devices’ effectiveness.

Another persistent shortcoming is unsatisfactory natural language processing, an added barrier to widespread use. Both assistants were selective about how a question had to be phrased in order to be answered correctly. For example, Google Home will understand “What is a gigawatt?” but cannot process “Google gigawatt” or “Tell me what a gigawatt is.” For digital assistants to reach widespread adoption, users need to be able to interact with them seamlessly.

Overall, we were impressed by the improvement that took place in a few short months, and we remain optimistic that the technology will continue to advance at this pace. As new players enter the space and homes become more connected, the technology in these devices will become increasingly important in our everyday lives. Later this year we will track the further progress of the Echo and the Home, and compare them to some of the new entrants set to arrive by the end of 2017.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Google Is Betting On The Right Long-Term Trends

Following the company’s Q2 earnings release, Google shares are down 3% on higher traffic acquisition costs (TAC). As a percentage of revenue, TAC increased to 11.1%, up from 8.8% a year ago. We think this is a classic example of investors focusing on near-term bumps rather than long-term positives. We saw several positive themes in the quarter:

  1. Revenue growth has been stable over the last 5 quarters. Google’s revenue grew 21% y/y. Over the last five quarters, revenue has grown between 20% and 22%, even though there has been anticipation that revenue growth would slow.
  2. AI is having a positive impact on Google. Sundar Pichai began his portion of the earnings call by saying: “Google continues to lead the shift to AI driven computing.” This was the third consecutive earnings call in which Sundar touched on AI during his commentary. In Q1 of this year, he said: “I’m really happy with how we are transitioning to an AI-first company.” In Q4 of 2016, Sundar stated: “Computing is moving from mobile-first to AI-first with more universal, ambient and intelligent computing that you can interact with naturally, all made smarter by the progress we are making with machine learning.” Google mentioned “AI” or “machine learning” 18 times during the Q4’16 call, 24 times on the Q1’17 call, and 21 times on the Q2’17 call (a rough counting sketch follows this list). The focus on AI is important because AI should give Google better, more targeted search results for consumers and higher ROI for advertisers (through Google’s smart bidding platform), lay the groundwork for natural language processing (the future of Google Home and Assistant), and improve computer vision-based search.
  3. Google remains heavily invested in the AR/VR theme. Google Lens, a computer vision platform driven by machine learning, is the foundation of Google’s future in augmented reality. Google is taking a long-term approach to Google Lens as new computing form factors emerge (i.e., AR glasses) that lend themselves to input methods more natural than taking out a phone and snapping a picture. In addition, Google shared that by year end there will be 11 Daydream-ready devices on the market. Most notably, Samsung’s Galaxy S8 and S8+ are Daydream-ready.
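As a rough illustration of how mention counts like those in point 2 can be tallied from a call transcript, here is a minimal sketch; the whole-word, case-insensitive matching rule and the one-line snippet are our own assumptions, not a description of how the counts above were produced.

    import re

    def count_mentions(transcript, terms):
        # Count case-insensitive, whole-word occurrences of any of the terms.
        pattern = r"\b(?:" + "|".join(re.escape(t) for t in terms) + r")\b"
        return len(re.findall(pattern, transcript, flags=re.IGNORECASE))

    # Hypothetical one-line stand-in for a full earnings-call transcript.
    snippet = "Google continues to lead the shift to AI driven computing."
    print(count_mentions(snippet, ["AI", "machine learning"]))  # -> 1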

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

How the Future of Voice Search Affects Marketers Today

Written by guest author Lindsay Boyajian at Conductor

Since Amazon announced its acquisition of Whole Foods, the running joke across social media has been, “Jeff Bezos said to Alexa, ‘Buy me something from Whole Foods,’ and Alexa bought Whole Foods.”

This quip highlights the shortcomings that plague voice search. Today, voice recognition technology remains flawed and often falls short of delivering on the user’s intent.

Despite its weaknesses, voice search promises to be the user input of tomorrow. The major tech companies are investing heavily in the technology: Apple has Siri, Amazon has Alexa, Google has Google Assistant, and Microsoft has Cortana. Even with the technology in its infancy, Google reports that 20 percent of queries on its mobile app and Android devices are voice searches.

And thanks to artificial intelligence and machine learning, voice search is improving quickly. It improves with every user interaction, becoming more adept at understanding user intent. As the technology advances, more users will adopt voice search, fueling the growth cycle.

The work going into voice recognition technology today will power the next evolution in computing: augmented reality.

Augmented Reality & Voice Search

Augmented reality (AR) represents a new computing paradigm. It overlays digital assets on the real-world environment and promises to change how users interact with the digital world.

Soon, everything from office activities to shopping will be experienced through augmented reality. For instance, a shopper will be able to put on a lightweight pair of AR glasses to visualize in 3D what different couches would look like in her home. Some AR experiences like this are already available today through head-mounted devices like HoloLens and Meta. However, these devices are only available to developers and still have limitations; they are not ready for mass consumer adoption.

The principal user inputs for augmented reality devices (excluding hardware accessories like keypads and clickers) are gesture and voice. Because gesture controls cause user discomfort and fatigue, many experts agree that voice will be the primary input for these devices.

As the augmented reality space matures, so will the importance of voice search.

The tech company with the most advanced voice recognition technology will have an advantage in augmented reality computing.

Optimizing Organic Search for the Future of Voice Search

Although mass consumer adoption of AR hardware is still years away, brands that optimize for voice search early will lead in organic and search marketing when the technology becomes ubiquitous.

Voice search behavior differs from traditional search patterns. Consumers approach voice search using natural, more conversational language. The queries are often longer and delivered as questions.

The result for marketers is that content optimized only for keywords will falter, while content that delivers value and matches user intent will see improved organic search performance. To deliver on that intent, marketers need to develop a deeper understanding of their customers and produce content that provides relevant and timely value. This approach is known as customer-first marketing.

Customer-first marketing is not new, and more and more brands are adopting a customer-centric approach. Relevant, contextual content drives traffic, fosters customer engagement, and builds loyalty. The rise of voice search and its link to the future of augmented reality only make a customer-first marketing strategy more advantageous for brands and marketers.

This piece originally appeared on LinkedIn. For more, follow Lindsay Boyajian on Twitter and LinkedIn.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Bad Culture Doesn’t Scale

The most important lesson from Uber’s travails is that bad culture doesn’t scale. Talented teams with bad culture can build fantastic businesses, but not businesses that last. A unicorn with bad culture is a unicorn with a bomb strapped to its back; it’s only a matter of time before the culture catches up and forces disruptive change. Sometimes bad culture rears its ugly head quickly, as it did with Zenefits. Sometimes it doesn’t happen until after a multi-billion-dollar-per-year business is established, as it did with Uber.

The culture at Uber wasn’t a secret. It had always been known as an aggressive one, and that culture deserves some credit for helping Uber transform the ride-hailing industry. However, the bigger and more established a company becomes, the harder it is to sustain a bad culture: rumors spread, lawsuits happen, and good hires leave because it wasn’t what they signed up for. The media reports every painstaking detail. High-profile companies like Uber also face public backlash from customers, which hits revenue. If Uber were a publicly traded company, the stock would be down at least 30% in the past month given the CEO turmoil, and maybe 50% for the year once you add in the Google lawsuit and its other well-publicized troubles.

During our time as public equity analysts, we’ve had the opportunity to cover some great, lasting companies like Apple, Amazon, Google, and Facebook. A common thread among all four of those companies is great culture. When Steve Jobs passed away, we wrote that his greatest achievement wasn’t the iPhone, the iPod, or the Mac, but Apple itself: he left behind a culture of good people driving revolutionary innovation. That might sound simple, but not compromising on your values and consistently hiring people who share them is hard. It’s especially hard for a startup trying to build quickly under the pressure of venture investors’ expectations.

It’s hard to determine the long-term fallout of Uber’s culture problem. The company has “verbed” itself, much like Google, which gives it a significant brand advantage; one of our teammates has joked that he would “uber us a Lyft.” With broad leadership change, including the departure of its CEO, Uber has a chance to grow new roots and overcome the negative culture that now detracts far more than it ever added.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Eric Schmidt is Wrong About Automation

At the Viva Tech conference in Paris, Google Chairman Eric Schmidt stated that he believes automation will create more jobs, not eliminate them. I think he’s wrong, and I hope he’s wrong. Disagreeing with the Chairman of the most advanced AI company in the world about automation is a dangerous game, but there are three things that can challenge his statement: timing, incentives, and economic realities. Let’s discuss his position through each of those lenses.

Timing. While Schmidt said he expects AI to create more jobs than it eliminates, it’s unclear what time frame he’s considering. When people talk about AI eliminating jobs, the disagreement is almost always about how far into the future they’re looking: sometimes five years, sometimes 20, sometimes 50. At the same conference, GE CEO Jeff Immelt said the idea that robots would run factories in five years is “bullshit.” That I can agree with. The five-year picture of automation isn’t one of mass job loss, but the transition will start: low-skill blue-collar jobs will see continued automation, and we should start to see autonomous vehicles alongside further industrial automation.

Long term, automation isn’t bullshit. If, in 30, 40, or 50 years, we don’t have machines and software capable of performing most of the tasks we call labor, it will be a failure of Google and our technology ecosystem. We already have machines that can see and hear, machines that can roughly manipulate objects in the real world, and machines that can “understand” enough at a basic level to be useful at specialized tasks. Robots don’t get tired, don’t need breaks, and don’t get distracted. They will eventually be able to do things with greater precision and sophistication than humans, whether the work is physical or knowledge-based. When robots get sick (break down), they’re much easier to fix or even replace. Robots don’t need to commute to jobs, which saves on energy costs, and they don’t need paid vacation or catered lunches. For all these reasons, robots will eventually be the most competitive option for the majority of jobs. A few more decades of progress in artificial intelligence and robotics should yield far more capable machines that can perform almost all work more effectively and efficiently than humans.

Incentives. Any discussion of job creation or loss is highly political, which means it’s also highly emotional. Many people get scared or even angry when confronted with the possibility of mass automation and human “unemployment.” Eric Schmidt is a savvy politician, and Google is the world’s leader in artificial intelligence development; he doesn’t want the world associating Google with job loss, because that could hurt its business. He may also have ambitions for public office and be setting the stage for them. Either way, he’s incentivized to be an automation unemployment denier.

But the same incentives apply to all executives and CEOs, Immelt included; they’re in an impossible political situation. If Immelt, or any CEO, were to embrace robots capable of eliminating human jobs, there would be backlash among their employees; no worker would be happy to be viewed as a stopgap to automation. There would also be massive PR backlash.

The need to manage these negative incentives can have a real impact on the pace of automation adoption. Decision makers caught between the improved productivity automation offers and the pressure to maintain jobs may be forced into suboptimal solutions that keep humans employed. Over the past several decades, automation in factories hasn’t meaningfully improved productivity; one reason may be that executives approve robot installations with the goal of sustaining human jobs rather than maximizing total productivity.

Economic Realities. During his speech, Schmidt argued that automation could not only create more jobs but also raise wages. He said that if you “make people smarter” via computers, their wages should increase. In a vacuum this might be true, but in reality it seems to ignore supply and demand.

Consider an example in knowledge work automation. In the future, computers are going to be better accountants, financial advisors, actuaries, and claims adjusters than humans; all of these are highly logical jobs driven by information. Per Schmidt’s argument, automating these functions should lead to more jobs, which may be true. Knowledge work might still need a human front end to present the computer’s results with a human touch (empathy). But now you’re employing a good customer service operator, not an accountant. In fact, the human might need little more than an entry-level understanding of accounting and a positive demeanor to present the script the computer provides, which means far more people are qualified to present accounting results from a machine than are qualified to be accountants. Given this large supply of potential workers, it’s hard to see how wages for workers replacing accountants would rival those of accountants today.

Knowledge work seems most sensitive to the economic realities of making humans smarter, but the same applies to blue-collar work. If autonomous trucking becomes a reality in the next decade, it might lower freight fees, which might increase demand for freight services, which might in turn increase demand for dock workers and truck loaders. But those skills are widely available, and there would now be a pool of unemployed truck drivers to fill those spots. Again, supply and demand suggest businesses need not pay higher wages to attract workers to lower-skilled labor.

What We Can Agree On. Instead of debating whether automation will create jobs or eliminate them, we should start to set aside the modern dogma that humans must have jobs to survive and be productive. We should consider what a world would look like where humans don’t have to work: a world where no one needs to worry about basic needs because of automation, freeing people to explore what it means to be human and to provide value through empathy, community, and creativity, the things robots cannot do. Maybe we won’t agree on the outcome of automation, but one point we can hopefully all agree on is that the future is bright because of automation, not in spite of it.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.