The Rise of B2C CRM & Personalization: Retailers Combat Convenience

Written by guest author Carlos Castelan, Chief Strategy Officer at Conlego. 

In The Future of Retail, Loup Ventures laid out a vision for the future of retail amidst the continuing consumer shift from offline retail shopping to online. To combat the shift toward pure convenience and provide an enhanced in-store customer experience, retailers have started implementing business-to-consumer (B2C) customer-relationship management (CRM) programs to more effectively tailor product offers and services to their customers and drive traffic to their brick-and-mortar locations.

Traditionally, CRM systems and programs were utilized by large companies to manage sales cycles into other businesses (B2B sales). However, retailers and consumer-focused companies have started to adopt the technology to better understand purchasing habits and interactions and to improve customer engagement. At some retailers, this personalization and data capture takes place through loyalty programs or branded credit cards, but others are expanding these programs to serve customers daily through a complete purchase profile.

A great example of CRM in the form of a loyalty program is Nordstrom, which expanded its loyalty program in 2016 to include all customers (i.e., non-credit card holders). Loyalty members earn points toward vouchers so long as they provide their phone number, which acts as their ID number. With a profile and Guest ID/phone number, associates can easily view a guest’s purchase history, so, for example, a customer can make a return without a receipt, or an associate can identify the customer’s size in a brand (if they have purchased that brand before). It’s not hard to imagine a future state in which the company can pull from a rich database to deploy evolving artificial intelligence (AI) and, at a meta level, better predict buying trends and, on a personalized level, understand when its best customers walk into its stores and how best to cater to them.
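To make the purchase-profile idea concrete, here is a minimal sketch of how a phone-number-keyed guest profile could support the receipt-free returns and brand-size lookups described above. Everything here (class and method names, sample data) is hypothetical and illustrative, not Nordstrom’s actual system.

```python
from collections import defaultdict

class GuestProfiles:
    """Toy B2C CRM store: purchase history keyed by the guest's phone number."""

    def __init__(self):
        self._purchases = defaultdict(list)  # phone number -> list of purchases

    def record_purchase(self, phone, brand, item, size, price):
        self._purchases[phone].append(
            {"brand": brand, "item": item, "size": size, "price": price}
        )

    def find_purchase(self, phone, item):
        """Receipt-free return: confirm the guest actually bought the item."""
        return next((p for p in self._purchases[phone] if p["item"] == item), None)

    def size_in_brand(self, phone, brand):
        """Associate lookup: the guest's most recent size in a given brand."""
        sizes = [p["size"] for p in self._purchases[phone] if p["brand"] == brand]
        return sizes[-1] if sizes else None

profiles = GuestProfiles()
profiles.record_purchase("612-555-0100", "Zella", "leggings", "M", 59.00)
profiles.record_purchase("612-555-0100", "Zella", "jacket", "L", 119.00)
print(profiles.size_in_brand("612-555-0100", "Zella"))  # most recent Zella size
```

A real deployment would sit on a database rather than an in-memory dict, but the lookup pattern – one ID, one complete purchase history – is the core of the loyalty-program CRM described above.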

These CRM programs and personalization efforts are rapidly expanding, particularly among higher-end retailers that focus on high-touch customer service. A recent example of a company implementing a CRM system is lululemon athletica. As laid out by one of its executives, Gregory Themelis, lululemon seeks to better understand consumers’ engagement with its brand across three levels: transactions, sweat, and engagement. In this sense, lululemon is seeking a more holistic understanding of the customer (vs. just understanding sales/transactions). By being “informed” by data (vs. being driven by it), lululemon will be able to better engage customers by tailoring the right level of personalization and creating seamless marketing across all channels. lululemon is taking a much more brand-oriented approach, driving customer engagement through a personalized one-on-one experience to build a community. In the CRM and personalization model, it’s easy to see how a retailer such as lululemon could add more value for its customers in the future by sharing personalized suggestions (workouts, restaurants, etc.) to drive brand affinity and subsequently drive traffic and sales.

Whether through the implementation of loyalty programs or pure CRM, personalization is a concept that retailers and consumer brands are adopting to drive traffic to their locations and enhance the customer experience. In a world that has come to value convenience, personalization and high-touch service are ways for these companies to continue to differentiate themselves and, in the future, use AI to predict customer behavior and serve customers even more effectively.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Don’t Underestimate the Importance of 3D Mapping for Autonomy

As you drive down the road, you make countless subconscious micro-decisions and calculations built on past experience. You know what 40 mph feels like by observing how fast the trees are passing by, how hard you can hit the brakes to comfortably slow down at a traffic signal, and that you should coast down a steep hill to avoid speeding. Even on an unfamiliar road, driving experience has built foundational expectations and awareness, so you are not hurtling into the unknown, waiting to react to situations as they arise. In the case of autonomous vehicles, however, these decisions are made by software. Simply adding sensors like LiDAR and cameras to a vehicle allows it to perceive its surroundings but, on its own, would fail to enable a safe ride. Enter 3D maps – a critical element of autonomy that is often overlooked.

Detailed, three-dimensional, continuously updated maps are essential to true widespread adoption of self-driving cars. They are what separates a system that must be overseen by a human focused on the road from one in which you can fall asleep and wake up at your destination. While it is technically possible for a car to navigate an unfamiliar setting without a digital map, the information that 3D maps provide is critical to building the trust necessary for widespread adoption of autonomy. Maps effectively teach a self-driving vehicle the rules of the road. The car’s AI can learn the mechanics of driving like a human, but the map introduces things like bike and HOV lanes, speed limits, construction zones, train tracks, and pedestrian crosswalks. Maps also ease the burden on the car’s computers by giving them foresight and adding redundancy to their understanding of the situation they face.

Civil Maps CEO Sravan Puttagunta explains, “Radar and cameras cannot always recognize a stop sign if pedestrians are standing in the way or the sign has been knocked down. But if the map knows there is a stop sign ahead, and the sensors just need to confirm it, the load on the sensors and processor is much lower.”

Reducing the load on the car’s computing power must be considered because a fully autonomous vehicle could produce as much as a gigabyte of data every second. By building an accurate, up-to-date digital representation of the world around us, a car can process this data in conjunction with the data created by its sensors to create a safer, smoother, and more reliable driving experience. Maps allow a vehicle to see into the future – further than human drivers can see – anticipating instead of reacting to changes in its environment.


Maps are an important aspect of vehicle-to-vehicle (V2V) communication as well. Using maps as an extension of a car’s sensors requires relying on other cars for input. This presents the consortium conundrum that we wrote about here. In the realm of V2V communication, where it does no good to ‘out-communicate’ the competition, we believe company collaboration would be beneficial, if not a requirement. Maps shared through a single cloud-based platform are updated frequently, compounding their utility. The minute details of roads are constantly changing – construction zones, fallen trees, and damaged roads must all be mapped and updated to reflect current conditions. This can be accomplished using the cameras and sensors on each car, including some element of automated, Waze-like crowdsourcing from the vehicles. As a vehicle drives, its sensors constantly compare their inputs to the map. When a discrepancy is detected, it is corroborated by other vehicles and changed in the cloud, so every car in the network stays up to date. Take, for example, the scenario pictured below.

Here, there are three layers of safety that come from V2V and mapping. As the black car drives by the wreck, it observes a discrepancy in its map and relays that message. The cars involved in the accident share their location and that their speed is zero, and the car blindly approaching the wreck knows to avoid its current lane and switches lanes accordingly. Sensors alone, which have limited range and therefore limited reaction time, would not have been able to detect the wreck and prevent a collision.
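The detect-corroborate-update loop described above can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual protocol; the corroboration threshold, feature labels, and coordinates are all assumptions.

```python
CORROBORATION_THRESHOLD = 3  # independent vehicles required to confirm a change

class CloudMap:
    """Toy shared map: vehicles report discrepancies, and a change is only
    committed for the whole fleet once enough independent cars agree."""

    def __init__(self, features):
        self.features = dict(features)  # location -> expected feature
        self._pending = {}              # (location, observation) -> reporting vehicles

    def report(self, vehicle_id, location, observed):
        """A vehicle reports what its sensors saw at a mapped location.
        Returns True if the report caused the shared map to be updated."""
        if self.features.get(location) == observed:
            return False  # matches the map; nothing to do
        key = (location, observed)
        self._pending.setdefault(key, set()).add(vehicle_id)
        if len(self._pending[key]) >= CORROBORATION_THRESHOLD:
            # Enough independent vehicles agree: commit the change fleet-wide.
            self.features[location] = observed
            del self._pending[key]
            return True
        return False

cloud = CloudMap({(44.98, -93.27): "clear_lane"})
cloud.report("car_a", (44.98, -93.27), "blocked_lane")
cloud.report("car_b", (44.98, -93.27), "blocked_lane")
updated = cloud.report("car_c", (44.98, -93.27), "blocked_lane")
print(updated, cloud.features[(44.98, -93.27)])
```

Requiring multiple corroborating reports before committing a change is one plausible way to keep a single noisy sensor from polluting the shared map, which is the redundancy benefit the scenario above illustrates.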

Navigation apps like Google Maps provide more than enough detail to find your way from A to B, but these maps can only locate your car within a margin of several meters – 3D maps must be accurate within centimeters. They must show the precise location of all street signs, lane markings, curbs, and even deep potholes that should be avoided. Moreover, if we want autonomous vehicles to be able to take us anywhere, we need detailed maps everywhere – and there are more than 4 million miles of roads in the U.S. How do we tackle such a monumental task? This question has attracted the attention and innovative efforts of a host of companies.

Lvl5, a startup founded by former Tesla and iRobot engineers, aims to crowdsource mapping data with its app, Payver. While not all cars are equipped with cameras, nearly all drivers carry smartphones. By mounting your phone aiming out the windshield, you can earn between 2 and 5 cents per mile, depending on whether the road is already well covered or uncharted. The process, which relies heavily on machine vision to stitch together and label every fragmented video, is a logical way to build maps early on, leveraging the user base of smartphones and the sizable number of people who drive for a living for a ridesharing, delivery, or freight service.

Waymo, which has a longer history of mapping tech and a large budget thanks to parent company Google, is taking the opposite approach. In its usual ‘do everything ourselves’ fashion, Waymo is building its maps by driving around in vehicles equipped with spinning LiDAR units. LiDAR provides a much more detailed image of its surroundings than a camera but still requires substantial human input to label each object. Labeling things like traffic lights, street signs, and buildings is tedious, but it is necessary so that a car can tell the difference between a tree and a yield sign. This process also shows promise of being automated by AI tech similar to Google Lens.

Here Mapping and Mobileye have combined many of their efforts around building, maintaining, and distributing maps to become the de facto leader in the space early on. Here, owned by a consortium of German automakers (Audi, BMW, and Mercedes-Benz), has ambitions to build a digitized version of our world that can be interpreted by autonomous vehicles and other machines with what it calls an open location platform. Here has built out its maps with LiDAR-equipped vehicles and will maintain them with an extensive network of cars outfitted with Mobileye hardware. Mobileye, purchased by Intel earlier this year for $15.3B, offers a range of services for self-driving cars, like sensor fusion and camera tech, but has recently been focusing on mapping. The combined result will be a comprehensive 3D map that is aggregated in the cloud and maintained in near real-time by crowdsourcing data from a network of connected cars. The maps will be sold as a service to which automakers with autonomous systems can subscribe.

Tesla has a distinct advantage stemming from the sheer number of cars it has on the road equipped with the hardware necessary to build maps (cameras, RADAR, and ultrasonic sensors). 3D mapping presents a textbook network effect, and Tesla, with thousands of vehicles already in play, is in a great position to take advantage of that market force. The question will be one of communication with other cars as more automakers begin to develop and test autonomous systems.

While the method of building sufficiently detailed maps varies, their importance in the self-driving equation is almost universally agreed upon. In contrast with lively debates over the correct combination of RADAR, LiDAR, and camera sensor arrays, or the countless chipmakers jockeying to provide cars with computing power, mapping seems under-appreciated and under-invested. As Mobileye co-founder Amnon Shashua suggests, there are three core elements of self-driving cars – sensing the road, mapping the road, and negotiating your position on the road. 3D maps will be a key determinant of the long-term winners in autonomy.

Thanks to Will Thompson for his work on this note.


UTM Deep Dive: A Multi-Billion Dollar Market You Can’t Ignore

Legislation and infrastructure are two of the biggest hurdles facing the unmanned aircraft industry. An Unmanned Traffic Management (UTM) system is an important solution. We recently had Jim Williams, who has deep expertise on the subject, guest author a piece about the importance of UTM (here). This post is a deep dive on the UTM opportunity that will identify the core UTM technologies, quantify the market opportunity, and discuss how UTM will facilitate the emergence of autonomous technologies. 

Quick Overview of UTM

Unmanned Aircraft Systems (UAS) Traffic Management (UTM) is a concept created by NASA to safely integrate manned and unmanned aircraft into low-altitude airspace. In more basic terms, UTM is a system that allows drone operators to connect to a central coordinating service that manages unmanned operations at low altitudes (under 400 feet). This type of service is important for the future of unmanned aircraft because the FAA does not manage airspace below 400’, except near large airports, leaving the majority of the country’s low-altitude airspace uncontrolled. UTM is a global initiative to offer an interoperable solution that will ultimately allow for routine beyond-visual-line-of-sight (BVLOS) flights and highly automated operations. The exhibit below highlights how information might flow between a UTM system and other airspace constituents:

Source: Loup Ventures

Key Technologies Enabling UTM

Many technologies are required to support the UTM concept, and each function will provide massive market opportunities for large companies as well as emerging startups. The four key technologies that will enable UTM are UAS Service Suppliers (USS), drone tracking and remote identification, vehicle-to-vehicle (V2V) communication, and detect-and-avoid (DAA) sensors.

UAS Service Suppliers (USS) – The core functionality of a UTM system will be managed by a UAS Service Supplier (USS). The role of the USS is still evolving, but we know that USS will be commercial entities operating with approval and oversight from a government agency such as the FAA. The USS would be the central hub where all other stakeholders (drone operators, hobbyists, air traffic control, law enforcement, and the public) come for situational awareness regarding unmanned aircraft. USS will also provide crucial information for commercial drone operators, including airspace authorization, UAS identification, real-time aircraft tracking, conflict advisories, and geo-fencing.

The ideal USS will provide an independent, highly automated, and scalable system that manages and monitors drone flights, factors in inputs from external sources such as terrain, weather, and air traffic control, and makes this data available to all commercial drone operators and service providers. In addition, the USS will send notifications to external stakeholders like public safety and state agencies.

Drone Tracking & Remote Identification – For a USS to provide real-time situational awareness, it will need to be capable of tracking and identifying drones in flight, which is attainable through familiar commercial wireless broadband solutions. These communication channels include LTE, radio frequency (RF), Automatic Dependent Surveillance – Broadcast (ADS-B), and Wi-Fi. While the industry is still exploring which of these technologies is the best solution, we anticipate UTM will lean on a combination of radio frequency transmission and cellular networks. ADS-B is currently used to track manned aircraft, and the technology will be mandated in all manned aircraft by 2020. ADS-B modules have historically been too large to fit on commercial-grade drones, but recently a few companies have brought drone-tailored ADS-B to market. Given that ADS-B is already standard aviation equipment, we see it as a practical solution for tracking. However, the manned ADS-B system was not built to incorporate millions of drones on the same network. Due to spectrum bandwidth saturation, we do not believe ADS-B will be mandatory in every drone, but only for more advanced autonomous applications. Telecom providers such as Verizon and AT&T have built their LTE networks to handle this type of density, and while LTE is not standard aviation-approved equipment, we see it as a viable complementary solution. In areas where LTE coverage is poor, some combination of ADS-B, RF, and Wi-Fi can fill the gap.
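As a rough sketch of the fallback logic suggested here – LTE where coverage is dense, ADS-B reserved for advanced autonomous operations, RF and Wi-Fi filling the gaps – consider the following. The coverage threshold, flight categories, and decision order are our illustrative assumptions, not an industry standard.

```python
def select_tracking_channel(lte_coverage: float, advanced_autonomous: bool,
                            rf_available: bool, wifi_available: bool) -> str:
    """Pick a tracking/ID channel for one flight.

    lte_coverage: estimated LTE coverage along the route, 0.0-1.0 (assumed metric).
    advanced_autonomous: BVLOS-class operations get standard aviation equipment.
    """
    if advanced_autonomous:
        return "ADS-B"   # reserve spectrum-limited ADS-B for advanced operations
    if lte_coverage >= 0.9:
        return "LTE"     # cellular networks already handle this device density
    if rf_available:
        return "RF"      # generic radio-frequency link where LTE is weak
    if wifi_available:
        return "Wi-Fi"
    return "ADS-B"       # last resort where nothing else reaches

print(select_tracking_channel(0.95, False, True, True))   # dense urban flight
print(select_tracking_channel(0.30, False, True, False))  # rural, poor LTE
```

In practice a USS would likely blend several of these channels simultaneously rather than pick one, but the priority ordering captures the trade-off the paragraph above describes: spectrum saturation pushes routine flights onto LTE, while safety-critical autonomous flights justify ADS-B.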

Currently, there are no established requirements or voluntary standards for electronically broadcasting information to identify an unmanned aircraft while it’s in the air. To help protect the public and the National Airspace System from “rogue” drones, government agencies such as the FAA have set up a new Aviation Rulemaking Committee (ARC) to help create standards for remotely identifying and tracking unmanned aircraft during operations. The ARC will identify, categorize, and recommend available and emerging technologies for the remote identification and tracking of UAS. Based on conversations with industry contacts, we believe the technologies previously discussed are all applicable solutions for remote ID.

V2V Communication – Not only will unmanned aircraft need to communicate with the USS, but drones will also need to communicate with other drones, better known as vehicle-to-vehicle (V2V) communication. While the same technologies that enable drone tracking, such as LTE and ADS-B, will likely be used to communicate with other drones, we believe additional solutions will be available for shorter-reach communication, like dedicated short-range communication (DSRC). DSRC is a radio-frequency technology that will likely be used by self-driving cars to communicate with other vehicles on the road. We believe it makes sense for UAS Service Suppliers to communicate with all drones as well as other autonomous technologies such as self-driving cars, marine vehicles, and other mobile robots. While enabling information transmission between vehicles is important, we view the crucial aspect of V2V as cyber-security – making sure drones, self-driving cars, and other autonomous systems can communicate with each other on a secure network.

Detect and Avoid (DAA) – UAVs’ ability to talk with each other via LTE, ADS-B, and other communication technologies will allow autonomous drones to avoid obstacles known to the USS – but what does a drone do if it loses all connectivity? Many drones now come equipped with sensors such as LiDAR, radar, and 3D imaging. These sensors, coupled with advanced machine learning capabilities, allow drones to sense and avoid other UAS and manned aircraft without needing to be connected to a network. Though the sense-and-avoid technologies brought to market thus far are impressive, there is still a large opportunity for startups to bring better DAA solutions to market.

Regulation (Not Tech) Remains Biggest Hurdle For UTM

While innovation is still needed on the technology side to support the UTM concept, we believe the largest hurdle to full drone deployment in most countries remains regulation. In the US, commercial drone operations are limited to flights within visual line of sight, not over people who are not directly involved in the drone operation (i.e., over crowds), and only during the day. While the FAA allows drone operators to file for waivers to perform more advanced drone applications, the process is not efficient. Allowing beyond-visual-line-of-sight (BVLOS) flights is the most important potential regulatory change. The US and other countries are working on bringing more favorable drone regulation to market; however, due to the time it takes to enact policy, we don’t expect any meaningful regulatory changes in the US until 2018 at the earliest.

UTM Key to Enabling BVLOS Applications

Implementation of UTM will positively impact the aircraft community in many ways; most importantly, UTM will enable new and larger market opportunities. Specifically, it will allow for the aforementioned routine BVLOS flights. Flying BVLOS, for example, is required for companies like Amazon to deliver packages via drones. That said, once a UTM system and favorable regulations are in place, not all drones will immediately be able to fly BVLOS. Drones will likely need to be Type Certified with the FAA or other regulating bodies to perform BVLOS applications. While the current Type Certification (TC) process is costly (~$2M) and takes 2+ years to finish, we believe a new process is currently being reviewed internally by legislative leaders that will significantly lower the cost and speed up the process. While this could be an important change, we do not believe the new process will be in place for at least two years, which gives companies currently progressing through the traditional Type Certification process a near-term competitive advantage.

Industry Coming Together to Create a Global UTM

Due to the number of drones that will be operating in any given locale, the industry is going to need several UAS Service Suppliers within a single country, as well as hundreds across the globe, to support the situational awareness needed for manned and unmanned aircraft to operate together. While each nation will regulate drone use as it chooses, the industry needs to collaborate to set standards and protocols for drones to communicate with USS in different countries. The Convention on International Civil Aviation, also known as the Chicago Convention, established the International Civil Aviation Organization (ICAO), a specialized agency of the UN charged with coordinating and regulating international air travel. All countries that comply with the Chicago Convention can routinely fly into and out of other complying countries. While the convention currently centers on manned aircraft, we anticipate certified drones will be treated the same way. The Global UTM Association (GUTMA), a non-profit consortium of worldwide UTM stakeholders, is an example of an important industry collaboration to help bring safe, secure, and efficient integration of drones to national airspace systems across the globe.

Drone Registry Is A Key First Step To UTM

While it will take time until we have a globally connected UTM system, we believe the first step will be creating a global registry for unmanned aircraft. At ICAO’s 2017 Drone Enable UAS Industry Symposium, ICAO suggested a compelling solution. Given that ICAO currently operates the Aircraft Registration System, which is used to register all manned aircraft across the globe, it recommended that it also host an international drone registry system. This would allow all drone registry systems to connect with each other and create a plug-and-play solution for countries that don’t have a registration system in place. In the exhibit below, we demonstrate how we see an international drone registry connecting into the UTM.

Drone registration has been a significant topic of debate within the U.S. drone community since the U.S. Court of Appeals barred the FAA from requiring hobbyists to register their unmanned aircraft. Until that decision in May 2017, the FAA required all drone users to register their drones online, which cost $5 and took only minutes to complete. The registry was designed to help law enforcement identify rogue drones, but, looking longer term, a registry establishes an early solution for remote identification. While we believe the U.S. will implement new regulations requiring all drones to be registered, delayed registration requirements would be a headwind to UTM development.

Source: Loup Ventures

USS An Investable UTM Opportunity

Although we believe there are going to be many significant opportunities related to UTM, we view UAS Service Suppliers as the most investable and sustainable theme. No companies have yet stepped forward to take on the role of a USS, but several drone software companies have partnered with NASA and the FAA to support the development of the concept. The companies helping create the UTM solution and contributing to drone regulation will have a large competitive advantage once UTM goes live. While many early-stage startups have the technology to make USS and UTM a reality, we anticipate companies like Amazon and Google will also have proprietary USS systems for their own vehicle networks. It is likely there will be multiple USS providers, and thus multiple winners in the space longer term.

How Big Can USS Be?

The pieces that will make up the UTM represent a multi-billion-dollar market opportunity, and we anticipate the UAS Service Supplier market will be one of the largest components of the system. We believe a USS would charge for its services as a monthly or annual subscription-type fee, with providers charging businesses per drone connection. While early, a reasonable fee may be anywhere from $100 to $300 per drone connection on an annual basis. We expect over 414k commercial drones will be sold globally in 2020, at which time we expect most UTM systems to go live. Over the following 10 years, we believe commercial units sold will increase at a 14.6% CAGR, and by 2030 we expect the industry to ship over 1.6M units annually. Based on that 14.6% CAGR assumption, and accounting for units leaving the installed base, we believe over 10.8M commercial drone units will be in the national airspace by 2030. A USS will also coordinate with manned aircraft, and based on FAA data as well as proprietary forecasts, we believe there will be ~280K manned aircraft regularly flying.

Based on a $200 per-connection fee that USS will charge drone operators, we believe the commercial market opportunity is ~$261M in 2020. Over the next 10 years, as the installed base grows, we believe the USS market will generate $1.7B in revenue by 2030, representing a 20% CAGR over that time frame. That said, this estimate only factors in commercial drone units. USS providers could charge as much as 2x more for manned aircraft to connect to the USS; when incorporating manned aircraft, the market opportunity expands in excess of $2B annually. We believe the USS will also provide many other services, but given the infancy of this market, it is hard to quantify that additional revenue.
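The arithmetic behind these estimates can be reproduced in a few lines. Note that the 2020 installed base is implied by the $261M figure rather than stated directly, and the $200 fee is simply the midpoint of the $100–$300 range.

```python
# Back-of-the-envelope reproduction of the USS revenue math above.
fee_per_drone = 200            # $ per connection per year (midpoint of $100-$300)
revenue_2020 = 261e6           # stated 2020 commercial market opportunity
revenue_2030 = 1.7e9           # stated 2030 commercial revenue

# Installed base implied by the 2020 revenue figure
implied_base_2020 = revenue_2020 / fee_per_drone

# Revenue growth rate implied by the 2020 and 2030 figures
implied_cagr = (revenue_2030 / revenue_2020) ** (1 / 10) - 1

# Unit-shipment check: 414k units sold in 2020 growing at a 14.6% CAGR
units_2030 = 414e3 * 1.146 ** 10

print(f"Implied 2020 installed base: {implied_base_2020 / 1e6:.2f}M drones")
print(f"Implied revenue CAGR 2020-2030: {implied_cagr:.1%}")
print(f"Implied 2030 annual shipments: {units_2030 / 1e6:.2f}M units")
```

The shipment check lands just above 1.6M units in 2030, consistent with the forecast above, and the revenue figures imply a compound growth rate of roughly 20% per year.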

UTM Will Manage Much More Than Drones

Although UTM’s primary focus will be managing drone traffic, we believe these platforms will eventually manage other autonomous technologies, such as flying cars and self-driving vehicles. Uber is working closely with NASA and the FAA because it plans to operate its urban mobility aircraft without a pilot on board. Its business model is to have its taxi aircraft operate autonomously, similar to the way Amazon plans to operate its delivery aircraft. In our Auto Outlook, we estimated that 95% of all new vehicles sold will be fully autonomous by 2040. For the auto industry to move to full autonomy, a similar UTM-like hub will need to be in place that monitors known environmental data, V2V communication, and detect and avoid, just like the UAV solution. It seems logical that a centralized UTM could communicate with unmanned aircraft as well as self-driving vehicles and other autonomous systems to maximize traffic efficiency. Factoring in the autonomous vehicle market leads us to believe the UTM and USS opportunity has significant upside optionality.

Where UTM is Today

In the two years since NASA launched its UTM project, the organization has conducted two technology demonstrations and plans two more to test the concepts described here. The third demonstration is planned for early 2018 and will focus on testing technologies that maintain safe spacing between cooperative aircraft (equipped with transponders or ADS-B) and non-cooperative aircraft (detectable only by primary radar) over moderately populated areas. The final demonstration has not been scheduled but should be close to an actual commercial application of the technology. The plan is for the FAA to take over the program in 2019 to establish the policy needed to approve operations.

Many other countries are also testing UTM concepts, and Europe seems to be the furthest along. Europe refers to its UTM project as U-space: a set of new services and specific procedures designed to support safe, efficient, and secure access to airspace for large numbers of drones. While initial U-space services are expected to be operational by 2019, initial test partners suggest that half of the U-space services could be deployed today. For a deeper dive into U-space, see here.

Bottom Line

While the industry has advanced UTM in meaningful ways, there is still a lot that industry and government constituents need to do in order to make this concept a reality. Improvements are needed on the technology front, but the biggest challenge remains regulation. We believe the autonomous vehicle industry and governments across the globe see the benefits of UTM, and we anticipate favorable regulation will eventually be put in place. Based on NASA’s progress and the effort those participants are investing in their technical demonstrations, we believe the first implementations of UTM will go live around 2020. The time is now to invest in core technologies that will enable UTM, because it will play a major part in our automated future.


Google’s Hardware Push: Ubiquitous AI

Yesterday, Google released eight devices that all screamed a single message: Google’s hardware ambitions are serious and AI-centric. Google is doubling down on hardware as a means to democratize the use of AI. The company has recently poured resources into hardware development, and while hardware will always be a fraction of its advertising and cloud businesses, these products will be key factors in Google’s future as a competitor to Apple, Samsung, and Amazon.

Last year, Google charged former Motorola president Rick Osterloh with unifying the new hardware division, and more recently, the company paid $1.1 billion to acquire over 2,000 smartphone engineers and some valuable IP from HTC (although their efforts won't be reflected in this product line). Now that Google is all-in on hardware, the question becomes whether its marketing, manufacturing, and distribution power can earn it a position among some of the world's largest and most successful consumer product companies.

The keynote was heavy on Google's "AI first" message, but this time that sentiment took on a more tangible form as each presenter explained how the hardware products will facilitate AI-first computing. In 2017, Sundar Pichai has opened each of his public remarks by stating Google's goal of becoming an AI-first company. This has obvious implications for Google's advertising, Maps, YouTube, and cloud businesses, but we are adding hardware to this list.

We are most excited to see how well Google has woven its Assistant into the fabric of the device ecosystem. This is important because integrating an array of devices (i.e., the handoff between the home and the road) is what will push us toward the next generation of our interaction with machines. Don't mistake Amazon's early lead in smart speaker market share for a lead in the AI-first world. While Google is light-years behind Alexa's install base, we believe Google has the best AI (see our comparison here), and its more robust product line could catch up quickly.

In an AI-first world (as opposed to mobile-first), Sundar explains, computers will adapt to the humans that use them, instead of the other way around. AI-first devices must do four things: be conversational and sensory, be ambient and multi-device, be thoughtfully contextual, and learn and adapt. How is Google's hardware collection enabling the AI-first transition?

Google’s stepped-up efforts in hardware will have little impact on the broader smartphone market share. We estimate in the U.S. today that Pixel accounts for about 1% of the market, iPhone for 50%, and Samsung for 30%. We believe this increased effort around the Pixel 2 could inch up market share from 1% to 2-3% in the U.S., likely at the cost of other Android devices. As for the smart speaker market, Google’s efforts in the next few years could yield a measurable increase in market share. Today, we estimate various forms of Alexa account for roughly 70-80% and we envision Google’s smart speaker share increasing from about 25% today to greater than 35% in the next 3-5 years.

Pixel 2 and Pixel 2 XL. As smartphones more or less reach parity on technical specs, the Pixel rises above the noise with pure utilitarianism. The phone may not be as aesthetically pleasing as the iPhone X or other competitors, but it is simply built to work around you. Its most compelling feature is the pervasiveness of Assistant, which can be summoned by squeezing the phone's sides.

Google Home Mini and Max. Both of these devices are answers to the Echo Dot and the HomePod (diversifying on size and sound quality is table stakes now), with the aim of bringing the Assistant to as many homes as possible. The Max ($400) appeals to higher-budget audiophiles, and the Mini ($50) is positioned as an impulse buy. Google sweetened the purchase of a Pixel 2 by offering a Mini for free. One of the Pixel 2's strengths is its integration with Google Home, giving an ambient presence to Assistant.

Google Clips. Clips is a wearable camera for capturing candid memories. This $250 product is a head-scratcher and fits better in Facebook's product roadmap.

Pixel Buds. Google is not kidding about making Assistant a ubiquitous presence. Its answer to AirPods is yet another way to stay connected to Assistant. On the positive side, we liked the real-time language translation. On the negative side, the design of the buds seems clumsy given they're connected by a rope-like cable.

Other product announcements. Google also announced a new Pixelbook, an upgraded Daydream VR headset, and plenty of new software features.
Amazon Flooding Smart Speaker Market

With the smart speaker market still comfortably in its youth, and voice as a computing interface just beginning to take hold, no one is sure what the best use cases for the technology are, or what the device that best captures the tech (as the smartphone did for mobile apps) will look like. Amazon’s solution: make a device for just about everything and see what works.

Here’s a breakdown of the new devices announced on the 27th:

Echo – $100

  • The new Echo offers much better sound quality thanks to a dedicated subwoofer and tweeter, which means it will compete more directly with Sonos and HomePod. It also has six interchangeable metal or fabric finishes for a new look.

Echo Plus – $150

  • The Echo Plus keeps the look of the original Echo, this time in silver, white, or black, and also includes upgraded sound. It will act as a hub for smart-home IoT devices like lightbulbs (it comes with a Philips Hue smart bulb), locks, or thermostats.

Echo Spot – $130

  • With a circular 2.5-inch display, the Spot is a cross between the Echo Dot and the Echo Show. It runs certain screen-based apps, makes video calls, and is geared towards use-cases like a smart bedside alarm clock.

Echo Connect – $35

  • This device more or less turns any Echo model into a landline phone, allowing you to make VoIP and traditional calls from your home phone number on your Echo device.

Echo Buttons – $20 for two

  • The Buttons seem to be made for the singular purpose of interactive games like trivia or Simon Says. You can bet we will shortly see designated games in the Alexa Skills market that require use of the Buttons. Amazon also calls the device “the first of many Alexa Gadgets.”

New Fire TV – $70

  • The streaming box is smaller than its previous iteration and supports 4K and HDR video at 60fps. You’ll be able to control the new Fire TV with any Alexa device.

These additions come on top of existing products like the Echo, Dot, Tap, Look, and Show. If Amazon’s device lineup seems experimental, that’s because it is. After effectively losing in the mobile space, Amazon’s real goal is to become the de facto platform for the voice-controlled smart home. By flooding the market with hardware for every conceivable use case, Amazon hopes that by the time people realize voice computing is here to stay, Alexa will already be in enough homes to be the go-to platform that all IoT devices run on. For third-party developers of smart home hardware, this is a classic network effect: manufacturers want to make devices compatible with the platform that has the most users, and users want to buy the speakers whose software can control the widest array of smart devices. Furthermore, each new Alexa device means another trove of user data that is used to constantly improve the underlying AI software.

Remember, this is still a minuscule portion of Amazon’s business. Even though Amazon has more than quintupled the number of people working on Alexa to over 5,000, the vast majority of its revenue comes from other segments like ecommerce and web services. This means Amazon is able to sell these devices at a narrow margin, focusing instead on penetrating as many homes as possible. This recent product-line revamp makes the flagship Echo less than half the price of Apple’s upcoming HomePod and about on par with Google’s $130 Home device. We have written about our long-term outlook on the smart speaker market (here), which remains unchanged: the winner will be the product that delivers frictionless connectivity between devices and real increased efficiency. Amazon’s sprawling product line and compelling price points are attractive early on, but the company could face difficulties down the road given its lack of an ecosystem of staple products like phones and computers. Nonetheless, Amazon’s aggressive expansion into the space is exciting for both tech-hungry consumers and those of us watching voice-first computing take form before our eyes.