How the Future of Voice Search Affects Marketers Today

Written by guest author Lindsay Boyajian at Conductor

Since Amazon announced its acquisition of Whole Foods, the running joke across social media has been, “Jeff Bezos said to Alexa, ‘Buy me something from Whole Foods,’ and Alexa bought Whole Foods.”

This quip highlights the shortcomings that plague voice search. Today, voice recognition technology remains deeply flawed and often falls short of delivering on the user’s intent.

Despite its weaknesses, voice search promises to be the user input of tomorrow. The major tech companies are investing heavily in the technology: Apple has Siri, Amazon has Alexa, Google has Google Assistant, and Microsoft has Cortana. Even with the technology in its infancy, Google reports that 20 percent of queries on its mobile app and Android devices are voice searches.

And thanks to artificial intelligence and machine learning, voice search is improving quickly. It improves with every user interaction, becoming more adept at understanding user intent. As the technology advances, more users will adopt voice search, fueling the growth cycle.

The work going into voice recognition technology today will power the next evolution in computing: augmented reality.

Augmented Reality & Voice Search

Augmented reality (AR) represents a new computing paradigm: it overlays digital assets on the real-world environment, promising to change how users interact with the digital world.

Soon, everything from office work to shopping will be experienced through augmented reality. For instance, a shopper will be able to put on a lightweight pair of AR glasses to visualize in 3D what different couches would look like in her home. Some AR experiences like this are already available through head-mounted devices like HoloLens and Meta. However, these devices are only available to developers and still have limitations; they are not ready for mass consumer adoption.

The principal user inputs for augmented reality devices (excluding hardware accessories like keypads and clickers) are gesture and voice. The issue with gesture controls is user discomfort and fatigue, so many experts agree that voice will be the primary input for these devices.

As the augmented reality space matures, so will the importance of voice search.

The tech company with the most advanced voice recognition technology will have an advantage in augmented reality computing.

Optimizing Organic Search for the Future of Voice Search

Although mass consumer adoption of AR hardware is still years away, brands that optimize for voice search early will lead in organic search marketing when the technology becomes ubiquitous.

Voice search behavior differs from traditional search patterns. Consumers approach voice search using natural, more conversational language. The queries are often longer and delivered as questions.
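
To make that difference concrete, here is a minimal sketch of a heuristic a marketer might use to flag conversational, voice-style queries when auditing a search-term report. The question-word list and length threshold are illustrative assumptions, not an established standard:

```python
# Toy heuristic separating conversational, voice-style queries from terse
# keyword queries. Word list and threshold are illustrative assumptions.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "which", "can"}

def looks_conversational(query: str) -> bool:
    """Voice queries tend to be longer, natural-language questions."""
    words = query.lower().rstrip("?").split()
    return bool(words) and (len(words) >= 5 or words[0] in QUESTION_WORDS)

print(looks_conversational("best pizza nyc"))                       # False
print(looks_conversational("where can I get good pizza near me?"))  # True
```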

The result for marketers is that content optimized only for keywords will falter, while content that delivers value and matches user intent will see improved organic search performance. To get there, marketers need to develop a deeper understanding of their customers and deliver content that provides relevant and timely value. This approach is known as customer-first marketing.

Customer-first marketing is not new, and brands are quickly adopting customer-centric approaches: relevant, contextual content drives traffic, fosters customer engagement, and builds loyalty. The rise of voice search and its link to the future of augmented reality makes adopting a customer-first marketing strategy even more advantageous for brands and marketers.

This piece originally appeared on LinkedIn. For more, follow Lindsay Boyajian on Twitter and LinkedIn.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Bad Culture Doesn’t Scale

The most important lesson from Uber’s travails is that bad culture doesn’t scale. Talented teams with bad culture can build fantastic businesses, but not businesses that last. A unicorn with bad culture is a unicorn with a bomb strapped to its back; it’s only a matter of time before the culture catches up and forces disruptive change. Sometimes bad culture rears its ugly head quickly, as it did with Zenefits. Sometimes it doesn’t happen until after a multi-billion-dollar-per-year business is established, as it did with Uber.

The culture at Uber wasn’t a secret. It had always been known as an aggressive one, and that culture deserves some credit for helping Uber transform the ride-hailing industry. However, the bigger and more established a company becomes, the harder it is to maintain a bad culture. Rumors spread, lawsuits happen, and good hires leave because the company isn’t what they signed up for. The media reports every painful detail. Late-stage companies like Uber also face public backlash from customers, impacting revenue. If Uber were publicly traded, the stock would likely be down at least 30% in the past month given the CEO turmoil, and perhaps 50% for the year after factoring in the Google lawsuit and other well-publicized troubles.

During our time as public equity analysts, we had the opportunity to cover some great, lasting companies like Apple, Amazon, Google, and Facebook. A common thread among all four is great culture. When Steve Jobs passed away, we wrote that his greatest achievement wasn’t the iPhone, the iPod, or the Mac, but Apple itself. He left behind a culture of good people driving revolutionary innovation. That might sound simple, but not compromising on your values and consistently hiring the right people who share those values is hard. It’s especially hard for a startup trying to build quickly while bearing the pressure of venture investors’ expectations.

It’s hard to determine the long-term fallout of Uber’s culture problem. The company has “verbed” itself, much like Google, which gives it a significant brand advantage; one of our teammates has joked that he would “uber us a Lyft.” With broad leadership change, including the departure of its CEO, Uber has a chance to grow new roots and overcome the negative culture that now detracts far more than it ever added.


Eric Schmidt is Wrong About Automation

At the Viva Tech conference in Paris, Google Chairman Eric Schmidt said he believes automation will create more jobs, not eliminate them. I think he’s wrong, and I hope he’s wrong. Disagreeing with the chairman of the most advanced AI company in the world about automation is a dangerous game, but three things challenge his statement: timing, incentives, and economic realities. Let’s discuss his position through each of those lenses.

Timing. While Schmidt said he expects AI to create more jobs than it eliminates, it’s unclear what time frame he’s considering. When people debate AI eliminating jobs, the answer almost always depends on how far into the future they’re looking. Sometimes that’s five years, sometimes it’s 20, and sometimes it’s 50. At the same conference, GE CEO Jeff Immelt said the idea that robots will run factories in five years is “bullshit.” That I can agree with. The five-year picture of automation isn’t going to result in mass job loss, but the transition will start: low-skill blue-collar jobs will see continued automation, and we should start to see autonomous vehicles and further industrial automation.

Long term, automation isn’t bullshit. If we don’t have machines and software capable of performing most of the tasks we call labor in 30, 40, or 50 years, it will be a failure of Google and our technology ecosystem. We already have machines that can see and hear. We have machines that can roughly manipulate objects in the real world. We have machines that can “understand” enough at a base level to be useful at specialized tasks. Robots don’t get tired, don’t need breaks, and don’t get distracted. They will eventually do things with greater precision and sophistication than humans, whether physical work or knowledge work. When robots get sick (break down), they’re much easier to fix or even replace. They don’t commute to jobs, which saves on energy costs, and they don’t need paid vacation or catered lunches. For all these reasons, robots will eventually be the most competitive option for the majority of jobs. A few more decades of progress in artificial intelligence and robotics should yield far more capable machines that can perform almost all work more effectively and more efficiently than humans.

Incentives. Any discussion of job creation or loss is highly political, which means it’s also highly emotional. Many people get scared or even angry when confronted with the possibility of mass automation and human “unemployment.” Eric Schmidt is a savvy politician, and Google is the world’s leader in artificial intelligence development. He doesn’t want the world associating Google with job loss because it could hurt the business. He may also have ambitions to serve in public office and be setting the stage for a run. Either way, he’s incentivized to be an automation-unemployment denier.

But the same incentive problem applies to all executives and CEOs, Immelt included. They’re in an impossible political situation. If Immelt, or any CEO, openly embraced robots capable of eliminating human jobs, there would be backlash among employees; no worker wants to be viewed as a stopgap until automation arrives. There would also be a massive PR problem.

The need to avoid these negative incentives can have a real impact on the pace of automation adoption. Decision makers caught between the productivity gains of automation and the preservation of jobs may be forced into suboptimal solutions that keep humans employed. Over the past several decades, factory automation hasn’t meaningfully improved productivity; one reason may be that executives approve robot installations with the goal of sustaining human jobs rather than maximizing total productivity.

Economic Realities. During his speech, Schmidt argued that automation could not only create more jobs but also raise wages. He said that if you “make people smarter” via computers, their wages should increase. In a vacuum this might be true, but in practice it ignores supply and demand.

Let’s take an example from knowledge work automation. In the future, computers are going to be better accountants, financial advisors, actuaries, and claims adjusters than humans; these are all highly logical jobs driven by information. Per Schmidt’s argument, automating these functions should lead to more jobs, which may be true. Knowledge work might still need a human front end to present the computer’s results with a human touch (empathy); however, that person is now a good customer service operator, not an accountant. In fact, the human might not need much more than an entry-level understanding of accounting and a positive demeanor to deliver the script the computer provides. That means far more people are qualified to present accounting results from a machine than are qualified to be accountants. Given this large supply of potential workers, it’s hard to see how wages for workers replacing accountants would rival those of accountants today.

Knowledge work seems most sensitive to the economic realities of making humans smarter, but the same logic applies to blue-collar work. If autonomous trucking becomes a reality in the next decade, it might lower freight fees, which might increase demand for freight services, which might increase demand for dock workers and truck loaders. However, those skills are widely available, and a pool of newly unemployed truck drivers would be ready to fill those spots. Again, supply and demand suggest businesses need not pay higher wages to attract workers to lower-skilled labor.

What We Can Agree On. Instead of debating whether automation will create jobs or eliminate them, we should set aside the modern dogma that humans must have jobs to survive and be productive. We should consider what a world would look like where humans don’t have to work: a world where no one has to worry about basic needs because of automation, freeing everyone to explore what it means to be human and to provide value through empathy, community, and creativity, the things robots cannot do. Maybe we won’t agree on the outcome of automation, but one point we can hopefully all agree on is that the future is bright because of automation, not in spite of it.


What Google I/O Means for Immersive Computing

This week Google hosted its annual I/O developer conference. On day one, the company focused on its innovations in artificial intelligence. On day two, it talked about new VR products. Here’s our take on what the latest out of Mountain View means for the future of AI and VR.

  • Day One: AI. At Google I/O, Google lived up to its commitment to be an AI-first company. The company announced a slew of AI innovations focused on making its platform easier to use through more natural interfaces, including voice (Google Assistant) and vision (Google Lens). For example, Google Assistant now includes support for calendars, phone communication, and proactive alerts, closing a gap we identified in our work on home assistants. Proactive alerts for voice-based assistants are a big step toward a screenless future. Google Home now flashes when it has relevant and timely information. For example: [*Google Home flashes*] “Traffic is heavier than usual. Leave in the next five minutes to be sure you make it to Anna’s soccer game on time.” (A toy version of this alert logic is sketched after this list.) In the screenless future, frictionless information push represents the future of search, and Google is still in the best position to own the category given its organization of the world’s information. Google’s progress in computer vision (Google Lens) and cloud-based supercomputing/machine learning (Google Compute Engine) positions the company for success as we transition to more natural and immersive computing. It’s no coincidence that day one ended with a tease for a standalone VR headset, untethered to a PC or a smartphone. For more, see the 10-minute condensed version of all the day one announcements here.
  • Day Two: VR. The big news on day two was the announcement of a standalone VR headset that needs no PC or smartphone for computing power. Google is partnering with Qualcomm on a reference headset and announced partnerships with HTC and Lenovo to bring standalone VR headsets to market later this year. Google also addressed a common knock on VR: because headsets fully enclose the user, VR experiences are hard to share with others. Google is making VR more social with shared rooms and voice chat as replacements for the text-based comments familiar to PC users. These advancements will help make VR mainstream faster. The transition from PC- and smartphone-driven VR to standalone VR will take three to five years (we don’t expect real traction of 1 million units until 2019), but the transition has clearly begun.
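
As referenced in the list above, here is a minimal sketch of what a proactive traffic alert rule might look like. The function, thresholds, and hard-coded calendar data are illustrative assumptions; Google has not published how Assistant decides when to push an alert:

```python
from datetime import datetime, timedelta

def should_alert(event_start: datetime, usual_minutes: int,
                 current_minutes: int, now: datetime,
                 buffer_minutes: int = 5) -> bool:
    """Alert when live traffic is heavier than usual and the user must
    leave within the buffer window to arrive on time."""
    leave_by = event_start - timedelta(minutes=current_minutes)
    heavier_than_usual = current_minutes > usual_minutes
    must_leave_soon = now >= leave_by - timedelta(minutes=buffer_minutes)
    return heavier_than_usual and must_leave_soon

# Hypothetical calendar entry: a 9:00 soccer game, a usual 20-minute drive,
# a live traffic estimate of 35 minutes, checked at 8:21.
if should_alert(event_start=datetime(2017, 5, 20, 9, 0),
                usual_minutes=20, current_minutes=35,
                now=datetime(2017, 5, 20, 8, 21)):
    print("Traffic is heavier than usual. Leave in the next five minutes "
          "to be sure you make it to Anna's soccer game on time.")
```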

Bottom line: Google’s investments in AI and VR will accelerate the transition from computing on PCs and touchscreen devices into the future marked by immersive computing.


Face Off: Siri vs. Google Assistant vs. Cortana

The importance of voice assistants in the screenless future is hard to overestimate. We see speech-driven user interfaces as a key component of the next computing paradigm, so it’s helpful to understand empirically where each platform stands today. In February, we compared Amazon Echo and Google Home to see which assistant was winning the race to become the centerpiece of the home. Google Home won that battle, but it was close. For our next digital assistant face-off, we tested the three most prevalent digital assistants available for mobile devices: Siri, Google Assistant, and Cortana. This time, Google Assistant came out on top.

Methodology

We asked each assistant the same 800 queries we asked Amazon Echo and Google Home, and graded each query on two metrics: First, did the assistant correctly understand the query? Second, did it answer the query correctly?

The queries break down into five categories:

  • Local – Where is the nearest McDonald’s?
  • Commerce – Where can I buy more printer paper?
  • Navigation – How do I get to REI from here?
  • Information – What is Apple’s stock price?
  • Command – Remind me to call Mom at 2pm today.
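
For readers who want to replicate this kind of scoring, here is a minimal sketch of the tally described above. The record format and example rows are illustrative assumptions; the real data set contained 800 graded queries per assistant:

```python
from collections import defaultdict

# Each graded query: (assistant, category, understood, answered_correctly).
# The rows below are illustrative stand-ins for the full data set.
results = [
    ("Google Assistant", "local", True, True),
    ("Google Assistant", "information", True, True),
    ("Siri", "local", True, False),
    ("Siri", "command", True, True),
    ("Cortana", "information", False, False),
    ("Cortana", "navigation", True, False),
]

def score(records):
    """Return {key: (understood %, answered %)} per assistant overall
    and per (assistant, category) pair."""
    tallies = defaultdict(lambda: [0, 0, 0])  # [queries, understood, answered]
    for assistant, category, understood, answered in records:
        for key in (assistant, (assistant, category)):
            tallies[key][0] += 1
            tallies[key][1] += int(understood)
            tallies[key][2] += int(answered)
    return {key: (100.0 * u / n, 100.0 * a / n)
            for key, (n, u, a) in tallies.items()}

for key, (understood_pct, answered_pct) in sorted(score(results).items(), key=str):
    print(f"{key}: understood {understood_pct:.1f}%, answered {answered_pct:.1f}%")
```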

Results

Google Assistant, the clear winner, understood 99.9% of the queries we asked and answered 74.8% of them correctly. Siri understood 94.4% of the queries we asked and answered 66.1% of them correctly. Finally, Cortana understood 97.3% of the queries we asked and answered 48.8% of them correctly.

By category, Google’s lead in navigation and information queries is clear, but there’s more parity between Siri and Google Assistant in local, commerce, and command-related queries. Cortana lagged in most categories, though it narrowly edged out Siri to take second in information.
