Amazon Go Extends Amazon’s Dominance to Brick and Mortar

After our first visit to Amazon Go, Amazon’s automated retail store in Seattle, we’re not surprised to hear the company has plans to open up to six more cashierless convenience stores later this year.

Our experience was flawless, leaving us increasingly confident that Amazon is best positioned to own the operating system of automated retail. Eventually, we expect Amazon to make this technology available to other retailers, as it has with Fulfillment by Amazon (FBA) and Amazon Web Services (AWS), extending its dominance into brick and mortar.

The $50B automated retail opportunity. In 2016 there were 3.5 million cashiers in the U.S., according to the Department of Labor, with an average salary of $13,574, according to Data USA. That makes for a nearly $50 billion opportunity in cashierless retail that Amazon is well positioned to attack. Of those 3.5 million cashiers, 323,000, or 9% of the cashier workforce, are convenience store or gas station employees. The automated retail space is getting increasingly crowded, but the Go store suggests that Amazon has the pole position.
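The arithmetic behind that figure is simple; here’s a back-of-the-envelope check in Python, using the Department of Labor and Data USA numbers cited above:

    # Back-of-the-envelope sizing of the U.S. cashierless retail opportunity
    cashiers = 3_500_000      # U.S. cashiers in 2016 (Department of Labor)
    avg_salary = 13_574       # average cashier salary (Data USA)
    print(f"${cashiers * avg_salary / 1e9:.1f}B")  # -> $47.5B, i.e., nearly $50 billion

    c_store = 323_000         # convenience store / gas station cashiers
    print(f"{c_store / cashiers:.0%}")             # -> 9% of the cashier workforce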

Why we think Amazon will license the Go technology. Just as it did with FBA and, to a lesser extent, AWS, Amazon is initially building Amazon Go’s backend infrastructure for its own use. And just like FBA and AWS, that infrastructure gets more valuable as it scales. The Amazon Go backend gives the company a Trojan horse into brick-and-mortar retail, clearly an area of interest given the Whole Foods acquisition. Perhaps the more critical question: why would a retailer work with Amazon? Our answer is the same as it is for all of Amazon’s best offerings: convenience. Retailers would get a turnkey solution for automated retail. While larger stores like Walmart and Target may not want to use the technology for competitive reasons, branded retail stores (like a Nike store) may be a fit if Amazon can create a product that saves the retailer labor and processing costs.

The Amazon Go experience. Amazon Go builds on the company’s core competence of convenience by automating the store with no cashiers or checkout lanes. Scan your phone on entry, grab your items, exit. In one test we bought a can of La Croix in 23 seconds. It felt like two parts magic and one part theft.

A few observations from our visit:

  • No cashiers, but lots of employees: mostly chefs assembling the prepared food, one ID checker in the beer and wine section, a greeter/security guard, and a few stockers replenishing shelves, bags, and plasticware.
  • Signage with instructions everywhere: download the app, scan your phone here, just walk out; clearly, there is a great deal of consumer education at work.
  • Quickly builds trust: By my second or third trip, I was confident that the store was accurately capturing and updating my virtual cart as I grabbed and replaced items throughout a visit.
  • Felt more like a tourist destination than a convenience store: most shoppers were taking pictures or video inside the store.
  • All about speed: Signage, taglines, the Just Walk Out Technology, the app, even the receipt all focused on the trip time (my record: 23 seconds for 1 item).
  • No chat: I never spoke to anyone or interacted with a person during several visits to the store.
  • Don’t linger: I found a seat at a nearby Starbucks (notable) where I could jot down my observations after visiting the Go store. There were 10 people in line at Starbucks, waiting to order, and another 5 people waiting where 8 baristas behind a waist-high coffee bar called out customer names and handed them personalized cups of coffee. It was a stark contrast to the can of La Croix I had just grabbed off the shelf at Amazon Go and paid for via the magic of cameras and the internet.

We envision the future of retail in three categories: 1. online retail (e.g., Amazon.com), 2. automated retail (e.g., Amazon Go), and 3. empathic retail (personalized services based on mutual understanding or empathy; more here). Amazon has already won the online space, and Amazon Go could prove to be the operating system of automated retail. We’re bullish on the empathic retail space partly because it’s outside of Amazon’s core competency (convenience), leaving room for others to succeed.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Key Questions on the Evolving Future of Transportation

Advancements in self-driving car technology will eventually result in full-scale autonomous transportation. Considering the level of investment from deep-pocketed tech and auto companies and the caliber of human capital that accompanies it, the space has become “too big to fail.” This note explores three key questions we’re working through as we consider the autonomous future:

  • What will an automaker of the future look like?
  • What will the future of transportation look like for consumers?
  • Who is going to win in the future of transportation?

What will an automaker of the future look like?

For an automaker to succeed in the transition to autonomy, we see three required core competencies: manufacturing capability, autonomous systems, and services.

Manufacturing Technology

Despite all the work going into autonomous systems, and all the hype around them, expertise in manufacturing cars can’t be overlooked. We’ve seen the challenges Tesla has had scaling its production. Technology companies are at a significant deficit here and will likely rely on partnerships with traditional automakers to bring a product to market.

Tesla is trying to solve the manufacturing problem on its own. Elon Musk said, “The biggest epiphany I’ve had this year is that what really matters is the machine that builds the machine, the factory, and that this is at least two orders of magnitude harder than the vehicle itself.”

To tackle this problem, Tesla has made acquisitions in the manufacturing space and has chosen to develop software and sensors in-house. We’ve written a lot about Tesla’s efforts (and shortfalls) in manufacturing the Model 3 at scale. We think they’ll get there.

Autonomous Systems

Software is the brains behind autonomous vehicles. This is both the most complex element and where the true value lies in autonomy. The winner in this space will have a good chance at owning the operating system of the car.

A few notable investments in this space: GM’s acquisition of Cruise, Ford’s investment in Argo AI, and Delphi’s acquisition of nuTonomy. Autonomous software investments are typically the largest in the space. We expect this trend to continue as traditional automakers, which already possess manufacturing expertise, attempt to acquire or partner with the technology that will keep them relevant as the industry transitions.

Sensors are the eyes and ears of autonomous vehicles. We break the sensor category into LiDAR, radar, and cameras. Most autonomous solutions today require all three, but Tesla thinks it can reach full autonomy without LiDAR.

Many auto manufacturers and tech companies have made hardware acquisitions as well, pairing sensor companies with the autonomous software investments noted above.

Services

A significant part of current automakers’ revenue comes from servicing and maintaining the vehicles they have sold. As electrification and autonomy play out, and ride-hailing fleets reduce car ownership, these service revenues will need to be replaced by software services. Down the road, connected cars will resemble a platform much like a mobile device. Owning the operating system and/or providing software services through that OS could more than make up for lost maintenance revenue.

One of these services could be in-car entertainment. With the need for driver attention, and eventually steering wheels themselves, going away, the interior of a car will look much different. Seating arrangements and space will not resemble the current layout, but more importantly, we’ll be free to spend our time differently while in transit.

Tech companies will all be vying for the opportunity to provide in-car entertainment to consumers. Similar to smartphones today, there will be those that own the operating system (Apple, Google) and those that build on top of it to deliver content (Netflix, Snapchat). Beyond these opportunities, companies will also leverage the connected car platform to deliver targeted advertisements to riders. Imagine being prompted with a coupon for Starbucks while on your way to work. Companies will be able to target individuals with location-based advertisements far more easily than through smartphones.
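To make the coupon scenario concrete, here’s a minimal sketch in Python of the kind of geofence check a connected-car platform could run against a rider’s location; the store coordinates, radius, and coupon code are all hypothetical:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in kilometers."""
        r = 6371.0  # mean Earth radius, km
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    STORE = (47.6205, -122.3493)  # hypothetical coffee shop location

    def maybe_offer_coupon(rider_lat, rider_lon, radius_km=0.5):
        """Offer a coupon when the rider passes within radius_km of the store."""
        if haversine_km(rider_lat, rider_lon, *STORE) <= radius_km:
            return "COFFEE10"  # hypothetical coupon code
        return None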

What will the future of transportation look like for consumers?

Three themes will shape what the future of auto looks like for consumers. We’ve written in depth about these topics here: Auto Outlook and Detroit Auto Show.

Electric vehicles will be prevalent. Electric vehicles account for ~1% of all vehicles today, but we expect that share to reach 35% by 2030. As battery technology improves, range anxiety decreases for consumers. We’ve also learned that EVs can be fast.
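To put that projection in perspective, the implied compound annual growth in EV share is steep (our arithmetic, assuming a 12-year runway from roughly 1% today to 35% in 2030):

    # Implied compound annual growth if EV share rises from ~1% to 35% in 12 years
    years = 12
    cagr = (0.35 / 0.01) ** (1 / years) - 1
    print(f"{cagr:.1%}")  # -> ~34.5% growth in share per year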

Cars will drive themselves. Today, 99.9% of all vehicles have little to no automation. By 2040, we expect 90% of vehicles sold to have Level 4 or 5 autonomy (Level 4: fully self-driving within defined conditions; Level 5: fully self-driving everywhere). Our transportation experience won’t change dramatically until autonomy becomes more prevalent.

Car ownership will decrease, giving way to more ride-hailing. Today, the average household owns 2.0 cars. We think that over the next 15 years this number could fall to 1.25 cars per household and, longer term, decrease even further. While some individuals may not like the idea of giving up ownership of a vehicle, there are plenty of benefits. For starters, people would no longer have to pay for car insurance, worry about maintenance, store a vehicle, or, for those of us in less favorable climates, scrape windows in the winter or worry about parking during a snow emergency.

As ride-hailing networks become more reliable with autonomous vehicles, more people will be willing to reduce or give up household ownership of vehicles. Traditional auto and tech companies alike are making large bets on this shift, as outlined above.

Who is going to win in the future of transportation?

If the connected car is a platform like the smartphone, who will be the Apples and Googles of transportation? Waymo, Uber, and Tesla are early candidates to own the operating system of the car, each taking its own approach. Waymo has focused on building autonomous systems first and will seek to launch or partner with a ride-hailing network second. Uber built a ride-hailing network first and is now racing to catch up in autonomy. Both will seek to partner with existing car manufacturers to produce vehicles. Tesla chose to manufacture vehicles first and is homing in on autonomy second, believing that a ride-hailing network is the final piece of the puzzle.

There are plenty of other entrants that could compete in this space, including OEMs, which have invested heavily in autonomy and certainly have the manufacturing scale component already solved, as well as a host of tech companies that could provide autonomous systems or software services to manufacturers. The bottom line is that the value chain of the transportation industry is being disrupted, and the massive opportunity to capture value in an industry transition will create a number of new winners.

At this point, it’s clear that one winner will be the consumer. With access to more ubiquitous, clean, and affordable transportation without the burden of car ownership, mobility will be more accessible than ever.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

AR and VR: Living, Breathing Storytelling

Written by guest author Jesse Damiani, Founder and CEO at Galatea Design

Storytelling yesterday and in the future. For the past several millennia, our stories have lived in two dimensions. We translated our creative impulses into 2D formats—whether around a fire, on a painting, page, or poster, or in a motion picture or video game. But with VR and AR, that’s all changing, and fast; it’s no exaggeration to say that we can’t even begin to grasp what the storytelling content of 2028 will look like. The irony, of course, is that this shift to spatial media just means we return to our spatial understanding of the world—something we engage with every moment of every day—except now we’re no longer constrained by the physical laws of nature. The stories of the future are not just pieces of content; they are spaces for immersive experiences.

The “Narrative Potential” of space. Ask any architect, interior designer, or DIY home renovator: every space tells its own story. Take the example of a library. When you walk into it, what’s communicated to you? Lots of carpeting muffles sound, and often, high ceilings dissuade us from speaking too loudly. Shelves of books, ample desks, and fluorescent lighting imply a place intended for scholarship. These embedded details drive us to make automatic assumptions about how to behave and what to expect—the “story” of that space in time. These perceptual opportunities constitute “Narrative Potentiality,” the chance for creators to fill the space with information that will kickstart our brains’ native storytelling impulses. If I, as a VR experience designer, seat you in front of a table where a vase is positioned precariously close to the edge, I’m tapping narrative potential by making you think about it falling and shattering around you.

The space is the story. In other words, in VR/AR, the space is the story. It won’t be long before most of the digital materials we currently conceive of as 2D exist as 3D spaces. What might your favorite website or social media page look like as a “real” space?

Living, breathing stories. Buckle up: it gets wilder. Our understanding of stories is rooted in linear storytelling—the model we’ve had since we invented storytelling sitting around the fire with each other. In this model, a teller projects the story, and audiences receive it. It has a beginning, middle, and end. Audience participation (listening) doesn’t impact the outcome in any significant way. Where we’re headed is toward participatory stories that we share with each other in real time—whether we’re talking about AR or VR.

It’s a shift from linearity into semi-linearity and non-linearity, from pre-written stories experienced from a remove to stories optimized for impromptu co-creation (using narrative potential). Think about it: when you show up at a wedding, you have a general sense of what’s going to happen—but the fun of it is the experience of it spontaneously unfolding around you, and your ability to participate in and impact it. The memory you leave with is your story of that event. Everybody else has a story too, both similar to yours and altogether unique.

In VR, reality is the medium. A friend of mine put it best: “In VR, reality is the medium.” Science tells us that our brains are incredibly plastic; they have space to carry multiple, simultaneous realities in them. If you’re in a story experience with your mom and she’s a purple alien, you carry two versions of her in your head: human and purple alien. Of course, she’ll be playing with her new identity as a purple alien, so your understanding of her will have to expand to include this new information. The point is: it’s all up for grabs. Want to be able to control a third arm using your pinky finger or two winks? Want to bend the laws of physics? That’s the narrative potential that VR and AR open up for us. The best part is we’ll be sharing in them together.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Eight Fun Facts About Computer Vision

Our experience of the world is intensely visual. Researchers suggest over half of our brain power is devoted to processing what we see. We talk a lot about how artificial intelligence will transform the world around us, automating physical and knowledge work tasks. For such a system to exist, it’s clear that we must teach it to see. This is called computer vision, and it is one of the most basic and crucial elements of artificial intelligence. At a high level, endowing machines with the power of sight seems simple: just slap on a webcam and press record. However, vision is our most complex cognitive ability, and machines must not only be able to see, but also understand what they are seeing. They must be able to derive insights from the entirely new layer of data that lies all around them and act on that information.

Despite being an important driver of innovation today, computer vision is little understood by those outside of the tech world. Here are a handful of facts that help put some context around what computer vision is and how far we’ve come in developing it.

1.)  Computer scientists first started thinking about vision about 50 years ago. In 1966, MIT professor Seymour Papert gave a group of students an assignment to attach a camera to a computer and describe what it saw, dividing images into “likely objects, likely background areas, and chaos.” Clearly, this was more than a summer project, as we are still working on it half a century later, but it laid the groundwork for what would become one of the fastest growing and most exciting areas of computer science.

2.)  While computer vision (CV) has not reached parity with human ability, its uses are already widespread, and some may be surprising. Scanning a barcode, the yellow first down line while watching football, camera stabilization, tagging friends on Facebook, Snapchat filters, and Google Street View are all common uses of CV.

3.)  In some narrow use cases, computer vision is more effective than human vision. Google’s CV team developed a machine that can diagnose diabetic retinopathy better than a human ophthalmologist. Diabetic retinopathy is a complication that can cause blindness in diabetic patients, but it is treatable if caught early. With a model that has been trained on hundreds of thousands of images, Google uses CV to screen retinal photos in hopes of earlier identification.

4.)  One of the first major industries being transformed by computer vision is an old one you might not expect: farming. Prospera, a startup based in Tel Aviv, uses camera technology to monitor crops and detect diseases like blight. John Deere just paid $305M for a computer vision company called Blue River, whose technology identifies unwanted plants and douses them in a focused spray of herbicide, eliminating the need to coat entire fields in harmful chemicals. Beyond these examples, there are countless aerial and ground-based drones that monitor crops and soil, as well as robots that use vision to pick produce.

5.)  Fei-Fei Li, head of Stanford’s Vision Lab and one of the world’s leading CV researchers, compares training computer vision to the way children learn. Although computers can “see” better than humans in some narrow use cases, even small children are experts at one thing – making sense of the world around them. No one tells a child how to see; they learn through real-world examples. Considering a child’s eyes as cameras, they take a picture every 200 milliseconds (the average time between eye movements). So by age 3, a child will have seen hundreds of millions of pictures, which is an extensive training set for a model. Seeing is relatively simple, but understanding context and explaining it is extremely complex. That’s why over 50% of the cortex, the surface of the brain, is devoted to processing visual information.
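That claim holds up to quick arithmetic (a sketch; the ~12 waking hours per day is our assumption):

    # How many "pictures" does a child see by age 3?
    frames_per_second = 1 / 0.2          # one eye movement every 200 ms
    waking_seconds_per_day = 12 * 3600   # assume ~12 waking hours per day
    total = frames_per_second * waking_seconds_per_day * 3 * 365
    print(f"{total:,.0f}")               # -> 236,520,000: hundreds of millions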

6.)  This thinking is what led Fei-Fei Li to create ImageNet in 2007, a database of tens of millions of labeled images for use in image recognition research. That dataset is used in the ImageNet Large Scale Visual Recognition Challenge each year. Since 2010, teams have put their algorithms to the test on ImageNet’s vast trove of data in an annual competition that pushes researchers and computer scientists to raise the bar for computer vision. Don’t worry: the database includes 62,000 images of cats.
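To show how accessible ImageNet-trained models have become, here’s a minimal Python sketch using the open-source torchvision library (our example, not part of the competition itself); the input image path is hypothetical:

    # Classify an image with a ResNet-50 pretrained on ImageNet (torchvision)
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("cat.jpg")            # hypothetical input image
    batch = preprocess(img).unsqueeze(0)   # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    top = int(probs.argmax())
    print(top, float(probs[top]))          # index into the 1,000 ImageNet classes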

7.)  Autonomous driving is probably the biggest opportunity in computer vision today. Creating a self-driving car is almost entirely a computer vision challenge, and a worthy one — 1.25 million people die in road accidents every year. Aside from figuring out the technology, there are also questions of ethics, like the classic trolley problem: should a self-driving vehicle swerve into a path that would kill or injure its own passengers in order to spare a greater number of people in its current direction? Lawyers and politicians might have to sort that one out.

8.)  There’s an accelerator program specifically focused on computer vision, and we’re excited to be participating as mentors. Betaworks is launching Visioncamp, an 11-week program dedicated to ‘camera-first’ applications and services starting in Q1 2018. Betaworks wants to “explore everything that becomes possible when the camera knows what it’s seeing.”

We’re just scratching the surface of what computer vision can accomplish in the future. Self-driving cars, automated manufacturing, augmented and virtual reality, healthcare, surveillance, image recognition, helpful robots, and countless other spaces will all heavily employ CV. The future will be seen.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

The Rise of B2C CRM & Personalization: Retailers Combat Convenience

Written by guest author Carlos Castelan, Chief Strategy Officer at Conlego. 

In The Future of Retail, Loup Ventures laid out a vision for the future of retail amidst the continuing consumer shift from offline shopping to online. To combat the shift toward pure convenience, and to provide an enhanced in-store customer experience, retailers have started implementing business-to-consumer (B2C) customer-relationship management (CRM) programs to more effectively tailor product offers and services to their customers and drive traffic to their brick-and-mortar locations.

Traditionally, CRM systems and programs were utilized by large companies to manage sales cycles into other businesses (B2B sales). However, retailers and consumer-focused companies have started to adopt the technology to better understand purchasing habits and interactions and to improve customer engagement. At some retailers, this personalization and data capture takes place through loyalty programs or branded credit cards, but others are expanding these programs to serve customers daily through a complete purchase profile.

A great example of CRM in the form of a loyalty program is Nordstrom’s, which the company expanded in 2016 to include all customers (i.e., non-credit-card holders). Loyalty members earn points toward vouchers so long as they provide their phone number, which acts as their ID number. With a profile and Guest ID/phone number, associates can easily view a guest’s purchase history so, for example, a customer can make a return without a receipt, or associates can identify the customer’s size in a brand (if they have purchased that brand before). It’s not hard to imagine a future state where there’s a rich database the company can pull from to deploy evolving artificial intelligence (AI) to, at a meta level, better predict buying trends and, on a personalized level, understand when its best customers walk into its stores and how best to cater to them.
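As a toy illustration (ours, entirely hypothetical, with no relation to Nordstrom’s actual systems), a phone-number-keyed profile makes receipt-free returns and size lookups a single dictionary access:

    from dataclasses import dataclass, field

    @dataclass
    class Purchase:
        sku: str
        brand: str
        size: str

    @dataclass
    class CustomerProfile:
        phone: str                                   # doubles as the Guest ID
        purchases: list = field(default_factory=list)

        def owns(self, sku):
            """Receipt-free return check: did this customer buy the item?"""
            return any(p.sku == sku for p in self.purchases)

        def size_in(self, brand):
            """Most recent size the customer bought in a given brand."""
            for p in reversed(self.purchases):
                if p.brand == brand:
                    return p.size
            return None

    profiles = {"555-0100": CustomerProfile(phone="555-0100")}  # keyed by phone number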

These CRM programs and personalization are rapidly expanding, particularly among higher-end retailers that focus on high-touch customer service. A recent example of a company implementing a CRM system is lululemon athletica. As laid out by one of its executives, Gregory Themelis, lululemon seeks to better understand consumers’ engagement with its brand across three levels: transactions, sweat, and engagement. In this sense, lululemon is seeking a more holistic understanding of the customer (vs. just understanding sales/transactions). Through data and being “informed” by it (vs. being driven by data), lululemon will be able to better engage customers by tailoring the right level of personalization and creating seamless marketing across all channels. lululemon is taking a much more brand-oriented approach, driving customer engagement through a personalized one-on-one experience to build a community. In the CRM and personalization model, it’s easy to understand how a retailer such as lululemon could add more value for its customers in the future by sharing personalized suggestions (workouts, restaurants, etc.) to drive brand affinity and subsequently drive traffic and sales.

Whether through the implementation of loyalty programs or pure CRM, personalization is a concept that retailers and consumer brands are adopting to drive traffic to their locations and enhance the customer experience. In a world that has come to value convenience, personalization and high-touch service are a way for these companies to continue to differentiate themselves and, in the future, use AI to predict customer behavior and serve customers even more effectively.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.