iRobot’s Dominance Shows That Making A Robot Is Hard

Special thanks to Austin Bohlig for his work on this note. 

We believe the domestic robot market, which includes robotic vacuums, mops, and lawnmowers, is one of the most promising sub-categories within the robotics space. Following iRobot’s Q2 results, we are incrementally more bullish on domestic robots. The results exceeded expectations across the board, and the company raised full-year guidance, which now implies ~25% y/y growth in the consumer robot business (excluding the acquisition of a distributor, Robopolis). Following these positive results, as well as recent conversations we’ve had with other leading robotic vacuum companies, we see four key takeaways:

Takeaway #1 – Making A Robot is Hard!

Some argue that increased competition from legacy vacuum cleaner makers will threaten the opportunity for robotics startups, but iRobot has continued to flourish and we believe it comes down to the fact that making a robot is not easy. In other words, it’s easier for a robotics company to build a vacuum than it is for a vacuum company to build a robot.

Although a Roomba vacuum cleaner may look simple on the outside, the advanced software, computer vision systems, and engineering acumen that go into developing a high-performing robot are difficult to replicate. We believe iRobot’s consistent outperformance validates our thesis. While iRobot has stated it is seeing increased competition in the low-end vacuum market, we believe iRobot remains the clear market leader in the high-end category. We do believe there are other companies bringing impressive domestic robots to market, including Neato Robotics, Ecovacs, and Samsung, but like iRobot, these are companies with a core competency in robotics, which gives them a distinct advantage over legacy vacuum players. The same applies to robots with other domestic functions, including lawn mowing, snow removal, etc. Again, it’s easier for a robotics company to design for a specific function than it is for a legacy player to build a robot, which is why we see so much opportunity in the robotics space.

Takeaway #2 – The Domestic Robot Market May Be Larger Than We Thought.

In early June, we published a 6-part series on the future of robotics: Intro, Industrial, Commercial, Domestic, Military, Social and Entertainment, and Software. In our domestic robot piece, we forecast the total domestic robot market to grow 17.1% in 2017 to $1.7B, including 16.5% y/y growth in robotic vacuum cleaners. However, iRobot’s total robot revenues increased 24.2% y/y, and the company raised full-year 2017 guidance, which now implies 25.0% y/y growth in its robot business. Given iRobot’s positive outlook and other conversations we’ve had with leaders in the space, we believe our estimates are likely conservative.

Takeaway #3 – It’s Not Just About Vacuum Cleaners.

iRobot’s revenues from wet floor mops increased 80.0% y/y, driven by strong demand both domestically and internationally. Robotic vacuums are no longer the only form of automation entering the home; we believe consumers are becoming more comfortable with other kinds of domestic robots, such as mops and lawnmowers. And domestic robots are just the beginning of the much larger connected home theme. Because these robots are now equipped with advanced computer vision technology, they can map an entire household. iRobot’s CEO recently highlighted that the company is considering selling this data to larger companies like Amazon, Facebook, and Google to enable new consumer applications.

Takeaway #4 – Domestic Robots Are A Global Trend.

While iRobot saw its strongest growth for robots domestically (revs up 46% y/y), the company is also upbeat about the growth it is seeing internationally. For example, the company expects high-teens growth in EMEA and 20%+ growth in Japan in 2017. Because markets vary by geography, we believe different domestic robot categories will flourish in different parts of the world. For example, Asian households have more hardwood floors and less carpet, so demand for wet floor robots should outperform in that region. In addition, although iRobot doesn’t yet have a robotic lawnmower commercially available, that market is seeing strong adoption in European countries because of relatively smaller lawn sizes. Regardless, we believe that, in aggregate, consumers are increasingly comfortable with robots in the home.

Bottom Line

iRobot’s results are encouraging for the entire robotics community, validating consumer demand for robotics and automation in the home. We believe the domestic robot market will be one of the strongest robotics sub-categories over the next 10 years. Following iRobot’s revised expectations for 2017, we believe that our near-term forecasts are likely conservative and that the best is yet to come.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Google Is Betting On The Right Long-Term Trends

Following the company’s Q2 earnings release, Google shares are down 3% on higher traffic acquisition costs (TAC). As a percentage of revenue, TAC increased to 11.1%, up from 8.8% a year ago. We think this is a classic example of investors focusing on near-term bumps rather than long-term positives. We saw several positive themes in the quarter:

  1. Revenue growth has been stable over the last five quarters. Google’s revenue grew 21% y/y. Over the last five quarters, revenue has grown between 20-22%, despite anticipation that revenue growth would slow.
  2. AI is having a positive impact on Google. Sundar Pichai began his portion of the earnings call by saying: “Google continues to lead the shift to AI driven computing.” This was the third consecutive earnings call in which Pichai touched on AI during his commentary. In Q1 of this year, he said: “I’m really happy with how we are transitioning to an AI-first company.” In Q4 of 2016, he stated: “Computing is moving from mobile-first to AI-first with more universal, ambient and intelligent computing that you can interact with naturally, all made smarter by the progress we are making with machine learning.” Google mentioned “AI” or “machine learning” 18 times during the Q4’16 call, 24 times on the Q1’17 call, and 21 times on the Q2’17 call. The focus on AI is important because AI will enable better, more targeted search results for consumers and higher ROI for advertisers (through Google’s smart bidding platform), lay the groundwork for natural language processing (the future of Google Home and Assistant), and improve computer vision-based search.
  3. Google remains heavily invested in the AR/VR theme. Google Lens, a computer vision platform driven by machine learning, is the foundation of Google’s future in augmented reality. Google is taking the long-term approach to Google Lens as new computing form factors emerge (e.g., AR glasses) that lend themselves to input methods more natural than taking out a phone and snapping a picture. In addition, Google shared that by year end, there will be 11 Daydream-ready devices on the market. Most notably, Samsung’s Galaxy S8 and S8+ are Daydream-ready.

The Gold Rush of ARKit

When Apple launched the iPhone SDK in March 2008, they correctly anticipated a gold rush for iOS developers selling apps on the new App Store. Another gold rush is about to begin with the debut of iOS 11 and ARKit.

Given how the App Store story has played out over the last decade it’s hard to believe it started like this:

Look at all those empty seats. On stage, from left to right, were Scott Forstall, Steve Jobs, and Phil Schiller. I don’t think anyone in the room, except perhaps John Doerr of Kleiner Perkins, who announced the launch of the $100M iFund during the event, understood the magnitude of what had just happened.

Since then, the App Store has been the single biggest driver behind the power of the iPhone to change the world. In order to better understand the iOS platform that has emerged over the last nine years, and the platform on which ARKit will sit, we took a look at the growth of the App Store, the number of apps available, and the money paid out to developers. The growth isn’t all that surprising, but the accelerating pace of growth of the App Store ecosystem is staggering.

As of June 2017, iOS users had downloaded over 180B apps, which represents nearly 150M app downloads per day, a rate 82% faster than the 80M app downloads per day of June 2016. This implies that each of the over 1B active iOS devices downloads about one app per week.

As of January 2017, there were over 2.2M apps available on the App Store, which represents about 1,000 new apps available per day, a rate that has also shown an accelerating trend.

And as of June 2017, Apple had paid out over $70B in app revenue to developers, which represents $68M paid out to developers per day, a rate 26% faster than the $54M per day of June 2016.

ARKit and the possibilities it represents now sit on the shoulders of a massive and fast-growing iOS platform. There are now well over 1B active iOS devices around the world, although not all will run iOS 11 and ARKit apps. Given ARKit’s broad compatibility (backward compatible to the 2015 iPhone 6s), we estimate that Apple’s device ecosystem for iOS 11 and ARKit will exceed 200M devices at the launch of iOS 11. And if just 5% of paid-app revenue shifts to apps that leverage ARKit, augmented reality apps on iOS would generate $1.7B in gross revenue per year (of which Apple takes 30% and developers net 70%). But many of the applications for augmented reality are entirely new and will justify – or even necessitate – entirely new apps, suggesting that our estimate is likely conservative. See a few early examples of ARKit apps here.
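To make the arithmetic behind that $1.7B figure explicit, here is a back-of-the-envelope sketch. The daily payout and 70/30 revenue split come from the figures above; the 5% ARKit share is the illustrative assumption, not a measured number.

```python
# Back-of-the-envelope sketch of the ARKit revenue estimate above.
developer_payout_per_day = 68e6   # $ paid out to developers per day (June 2017)
developer_share = 0.70            # developers keep 70% after Apple's 30% cut

# Annualize the payout and gross it up to total App Store revenue.
gross_app_revenue_per_year = developer_payout_per_day * 365 / developer_share

ar_share = 0.05                   # assume 5% of paid-app revenue shifts to ARKit apps
ar_revenue_per_year = gross_app_revenue_per_year * ar_share

print(f"Gross App Store revenue/yr: ${gross_app_revenue_per_year / 1e9:.1f}B")
print(f"Implied ARKit app revenue/yr: ${ar_revenue_per_year / 1e9:.1f}B")
```

The annualized gross comes out to roughly $35B, and 5% of that lands in the neighborhood of the $1.7B estimate.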

The AI Portfolio: Non-Tech Companies Making AI Investments

Special thanks to David Kroger, Steve Van Sloun, and Austin Bohlig for their work on the Non-Tech AI Portfolio.

In ten years, every company will have to be an artificial intelligence company or it won’t be competitive. While traditional tech companies have been very forward about their advancements and investments in AI, many “non-tech” companies are making investments in AI as well. As a fun exercise, we put together a portfolio of publicly-traded non-tech companies that are poised to benefit from their efforts in AI.

To build our portfolio, we scraped the last earnings call of every company in the S&P 500 to see which had specifically referenced “artificial intelligence” and/or “machine learning”. We also looked at companies that had been in the news talking about specific AI-related initiatives. Before we dive into highlighting individual companies, we want to clarify a few terms about AI that are sometimes confused:

Artificial Intelligence is a general term that refers to a machine exhibiting intelligent behavior, which may include reasoning, learning, and audio/visual processing. 

Machine Learning is a subset of artificial intelligence centered around giving a computer the ability to learn dynamically without human influence. 

Deep Learning is a subset of machine learning, and is one of several techniques that enable more effective outcomes on complex tasks. For example, the visual perception models that power self-driving cars are built on deep learning systems. 

Neural Networks are a component of deep learning in which a computer system is designed to mimic the way a human brain works. A neural network passes input data through one or more hidden layers of nodes whose weights are adjusted during training to change the output of the system.
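To make that last definition concrete, here is a minimal forward pass through a single hidden layer. The weights here are made-up numbers for illustration; in practice they are learned from data.

```python
import math

def forward(inputs, hidden_weights, output_weights):
    """Pass input data through one hidden layer of weighted nodes."""
    # Each hidden node: weighted sum of the inputs through a nonlinearity.
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output node: weighted sum of the hidden activations.
    return sum(w * h for w, h in zip(output_weights, hidden))

# Two inputs, two hidden nodes, one output -- arbitrary illustrative weights.
y = forward(inputs=[0.5, -1.0],
            hidden_weights=[[0.1, 0.4], [-0.3, 0.2]],
            output_weights=[0.7, -0.5])
```

Training a network amounts to nudging those weight values until the output matches known examples, which is the "learning" in machine learning.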

One additional disclaimer before we start: We are former stock analysts and still pay close attention to the public markets, but we never covered any of these companies or sectors. Consider our opinions here highly uneducated. Our analysis centered purely on each company’s efforts in AI, not the fundamentals of the underlying business. Do your own diligence prior to any investment decision.
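Before the list, here is a minimal sketch of the transcript screen described above: count mentions of the AI-related terms and keep companies that reference them at least once. The `transcripts` dict is made-up stand-in data; a real version would load the actual transcript text for each S&P 500 company.

```python
import re

TERMS = ["artificial intelligence", "machine learning"]

def ai_mentions(transcript: str) -> int:
    """Count occurrences of AI-related terms in one transcript."""
    text = transcript.lower()
    return sum(len(re.findall(re.escape(term), text)) for term in TERMS)

# Hypothetical tickers and transcript snippets, for illustration only.
transcripts = {
    "ACME": "We are investing in machine learning and artificial intelligence.",
    "BETA": "Our focus remains on cost discipline this quarter.",
}

screened = {ticker: n
            for ticker, n in ((t, ai_mentions(x)) for t, x in transcripts.items())
            if n > 0}
print(screened)  # tickers that referenced AI, with mention counts
```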

Healthcare

  • IDEXX Laboratories (IDXX) – IDEXX manufactures and develops products for the animal healthcare sector. On the company’s last earnings call, management mentioned that its latest diagnostic products are utilizing machine learning so the instruments always have the ability to learn and train on new data. One product that leverages AI is their SediVue Dx analyzer.
  • GlaxoSmithKline (GSK) – GSK is one of the bigger pharmaceutical companies leveraging AI and machine learning to reduce the time it takes to research and bring new drugs to market. GSK recently signed a $43M deal with Exscientia, a Scotland-based startup that helps automate drug design. GSK stated that Exscientia’s platform will be applied to as many as ten different diseases to determine whether the technology can help lead the way to developing new drugs.

Retail

  • Macy’s (M) – If any industry needs the help of AI, it’s retail. Macy’s has been strategically increasing headcount with AI expertise and has partnered with IBM Watson to create On Call, an AI-powered assistant that can answer questions about products and departments in stores. While still early, we believe that experimental AI technologies that transform the brick-and-mortar shopping experience will be crucial to winning in the post-Amazon traditional retail world.
  • Under Armour (UA) – Under Armour is leveraging artificial intelligence to better understand its customers. UA is using IBM Watson’s machine learning platform to develop more personalized fitness and health apps designed to measure and manage well-being. By gathering health data from these apps, Under Armour will be able to deliver personalized marketing based on an individual consumer’s lifestyle. The company is also using AI to help design new products, including shoes.

Logistics

  • FedEx (FDX) – FedEx is making strong investments in AI, robotics, and self-driving vehicles. FedEx has created an AI-enabled Alexa app that allows consumers to activate orders through voice commands rather than filling out traditional forms. In addition, the company has teamed up with Peloton, a private company focused on semi-autonomous driving platforms, as well as leading automakers Daimler and Volvo, to research semi- and fully autonomous driving technologies. We believe the future of delivery will be focused on autonomous systems, and FedEx appears to be preparing for that future.

Professional Services 

  • Accenture (ACN) – Over the past two years, Accenture has made significant investment in developing AI for both its internal operations as well as client offerings. Recognizing the positive impact that a strong AI platform can provide a business, Accenture has been creating tools to use in its consulting practice across various industries including healthcare, public safety, and financial services. The company has established Accenture Labs, which includes an Artificial Intelligence Research and Development group driving AI solutions for the broader business.
  • Interpublic Group (IPG) – IPG is a leading advertising services company that has been aggressive in creating AI-driven businesses. The company’s Mediabrands subsidiary introduced an AI-focused arm called Society that leverages a proprietary system called HEART. HEART will enable advertisers to find “emotional resonance in social conversations” through NLP and machine learning. The company also launched the Marketing Tech Venture Studio, an accelerator that gives IPG early access to cutting edge AI marketing tech startups.

Financial Services

  • Northern Trust Corporation (NTRS) – NTRS, an asset management firm, highlighted investments in robotics, artificial intelligence, and blockchain on its last earnings call that will allow the company to operate more efficiently and create superior solutions for its clients. Northern Trust is specifically trying to leverage AI and big data to decrease costs, which would allow the company to charge lower management fees in order to stay competitive.
  • Nasdaq (NDAQ) – Nasdaq has developed products that leverage machine learning, including their Trading Insights data analytics platform and SMARTS market surveillance platform. In a world now dominated by algorithmic trading, AI is an imperative to stay relevant as an exchange. The company expects these AI-focused products to contribute more meaningfully to results as they progress over the next few years. Nasdaq has also invested in machine learning companies, including Digital Reasoning.

Transportation

  • Avis (CAR) – Avis recently announced a deal with Waymo, Alphabet’s self-driving car project, to maintain, store, and deploy its fleet of 600 self-driving minivans in the Phoenix area. Avis was chosen because of its national presence and track record of efficiently maintaining and cleaning a fleet of vehicles. As Waymo expands its service, we expect it will expand its relationship with Avis.
  • Boeing (BA) – At the Paris Air Show last month, Boeing revealed plans to begin testing fully autonomous commercial jets. While many planes use auto-pilot for portions of flight today, reducing or removing pilots from the aircraft would be a significant change. In addition to autonomous jets, Boeing is also making other AI investments through its venture arm, HorizonX.

Energy

  • Halliburton (HAL) – Halliburton is working to make its data more usable, laying a foundation for impactful artificial intelligence applications in the future. Halliburton recently outlined a data analytics advantage it gained simply by organizing and reading existing data, which allowed it to identify that one of its pumps was ill-equipped to handle a specific climate. We believe that making “dark data” accessible for learning is a core precursor to effectively leveraging AI.
  • Pioneer Natural Resources Company (PXD) – PXD, a petroleum and natural gas exploration company, has noted that predictive analytics have helped the company begin to use data in an effort to improve business outcomes. The company has plans to better leverage AI to determine where to drill and could eventually have artificially intelligent robots conduct drilling autonomously.

Consumer & Agriculture

  • Domino’s (DPZ) – Domino’s has been investing in multiple technologies in an attempt to improve its business. In autonomous delivery, the company has been testing delivery robots in Europe. In AI, Domino’s has invested in a virtual assistant that’s integrated in its mobile application, which aims to expedite and simplify the ordering process. Domino’s has also utilized AI to better route deliveries in real-time by tracking its drivers through GPS, which also happened to reduce driving incidents by 50%.
  • Monsanto (MON) – Monsanto is using AI to improve crop protection techniques, recently partnering with Atomwise, a company that uses AI to accelerate the discovery and development process of medicines. Monsanto noted that the average crop protection product takes 11 years and $250 million to commercialize. AI should help to reduce both of those figures.

Industrials

  • Caterpillar (CAT) – Caterpillar began integrating artificial intelligence into its business a few years ago by creating an analytics and innovation division. One key use case for AI in Caterpillar’s business is in preventative maintenance of its equipment, which can reduce down time and cost of operation. Additionally, Caterpillar’s venture arm has invested in Airware, a drone-tech startup that helps companies plan flights and analyze images gathered for insights, as well as other frontier tech startups.
  • Deere (DE) – In March, Deere announced a partnership with Kespry, a drone-tech startup that helps mining, construction, and other businesses gain insights from aerial imagery. Deere construction and forestry equipment dealers will offer their customers Kespry Aerial Intelligence systems for use on job sites around the world. We expect AI and robotics to have a significant impact on agriculture over the next decade plus and would expect continued exploration from Deere in the space.

Non-Tech AI Portfolio vs. S&P 500

Going forward, we will continue to track how our Non-Tech AI portfolio performs against the S&P 500, with each company equally weighted in the portfolio. Backtesting over the past year, our Non-Tech AI portfolio slightly outperformed the S&P 500: as detailed in the chart below, from July 1st, 2016 to present, the Non-Tech AI portfolio was up 17.1% versus 15.5% for the S&P 500. While that backtested performance is not predictive, we believe the non-tech winners in AI will start to demonstrate clear competitive advantages over the next several years, which will ultimately be reflected in their stock prices.
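The equal-weighting mechanics are simple: an equal-weighted portfolio’s return is the average of its constituents’ returns (assuming weights are set once at the start of the period). A sketch with hypothetical per-stock returns, chosen so the average matches the 17.1% figure above:

```python
def equal_weight_return(stock_returns):
    """Return of an equal-weighted portfolio: the average of member returns."""
    return sum(stock_returns) / len(stock_returns)

# Hypothetical one-year returns for five stocks (illustrative, not actual data).
portfolio = equal_weight_return([0.25, 0.10, 0.30, 0.05, 0.155])
benchmark = 0.155  # S&P 500 return over the same window, per the note

print(f"Portfolio {portfolio:+.1%} vs S&P 500 {benchmark:+.1%}")
```

The real portfolio would use the actual one-year returns of the companies listed in this note, equally weighted the same way.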

Loup Ventures employees may currently or in the future hold positions in any or all of the stocks listed here. 

VR Sickness Should Be Mostly Solved In 5 Years

The sickness adoption hurdle. The timing of most tech adoption curves can be anticipated by a combination of cost and utility. The lower the cost, the higher the utility, the faster the adoption. We define mass adoption as 500m or more monthly users. We estimate monthly VR users to be about 25m today. It’s still very early. In addition to the standard tech adoption factors of cost and utility, VR has a third factor: sickness. In order for VR to go mainstream, simulation sickness is a problem that needs to be solved. We expect that over the next 5 years, technology will solve most VR sickness.  It’s important to understand why we get sick in VR and what can be done to reduce VR nausea. We visited a VR arcade to further investigate the issue.

Why do we get sick in VR? Users primarily experience sickness in virtual reality simulations due to an imbalance between their vestibular and visual inputs. This sensory imbalance is related to motion sickness. In a common example of motion sickness, a passenger on a boat may become sick when their visual inputs suggest they are not moving, but their vestibular inputs give their body the perception that they are moving. In that example, motion is not seen but is felt by the vestibular system. Virtual reality can cause the opposite type of motion sickness, where motion is seen by the visual system but not felt by the vestibular system. The most common theory posits that this imbalance of sensory inputs causes the brain to incorrectly conclude that the user is hallucinating due to poison, prompting it to induce vomiting.

In addition to sensory and visual conflict, virtual reality sickness can also be caused by the lag between head movement and the display updating. If this latency can be brought down to 5 to 10 milliseconds (from 18 to 22 milliseconds on average today), sickness from display lag will be reduced or eliminated.

Potential fixes. As mentioned, one solution to address VR sickness is to decrease the latency, or the time between a user’s head movement and the updated display content reaching the user’s eyes, to between 5 and 10 milliseconds. Improving latency in VR is challenging but achievable, notably through the introduction of eye tracking and foveated rendering. Eye tracking gives a VR headset the ability to identify the specific area of the screen a user is looking at. Foveated rendering is the process of rendering that area at higher resolution, while areas outside of focus are rendered at lower resolution; as such, foveated rendering depends on a quality eye tracking system. Foveated rendering takes stress off the GPU, reducing the rendering workload and potentially decreasing latency.
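A rough sketch of why foveated rendering cuts GPU load: only a small region around the gaze point is shaded at full resolution. The field-of-view, foveal-region, and peripheral-resolution numbers below are illustrative assumptions, not the specs of any actual headset.

```python
import math

display_fov_deg = 100.0   # assumed field of view of the display (per axis)
fovea_deg = 20.0          # assumed full-resolution circle around the gaze point

# Fraction of the display area inside the full-resolution circle,
# treating the display as a square of solid angle for simplicity.
full_res_fraction = (math.pi * (fovea_deg / 2) ** 2) / (display_fov_deg ** 2)

# If peripheral pixels are shaded at 1/4 the cost, relative total shading cost:
peripheral_cost = 0.25
relative_cost = full_res_fraction + (1 - full_res_fraction) * peripheral_cost

print(f"Full-res area: {full_res_fraction:.0%}; relative GPU cost: {relative_cost:.0%}")
```

Under these assumptions, only a few percent of the screen needs full-resolution shading, so the GPU does roughly a quarter of the work of shading everything at full resolution, which is the headroom that can be spent on lower latency.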

When it comes to sensory imbalance, there is still a lot of work to be done. There have been some VR applications that try to mimic movement in VR to limit the sensory imbalance, such as walking on a gaming treadmill. A less appealing, non-technological solution is for users to develop a tolerance for simulations and VR, slowly acclimating their sensory system to a simulated environment. Using VR more often for short periods of time can normalize one’s senses to the discrepancy between sensory inputs. We believe that over the next 5 years, technology will solve the vast majority of VR sickness.

Measuring VR sickness: our visit to a VR gaming location. Madeleine Winges, an intern at Loup Ventures, visited Smaaash, a VR gaming location at the Mall of America near Minneapolis. Madeleine’s not a gamer, which made her the right person to do the testing. Here’s her report:

A great experience, even though I felt slightly nauseous. Overall, I had a great experience at Smaaash. It was my first time in a high definition VR headset, and I would recommend it to anyone, especially those curious about the world of virtual reality. I did get slightly nauseous (an average of 3.6 on my 0-10 nausea scale) over the hour I tested the different VR experiences, and the nauseous feeling lasted for 45 minutes after I left. One complaint about this particular location-based VR experience: players need to wait for an attendant to set up each game, compared to the simplicity of a classic, walk-up arcade. While I am not likely to return due to my lack of interest in gaming, I can see why groups of people would enjoy this unique experience.