iPhone X In-Store Availability Now 10%; Online Lead Times Unchanged at 3-4 Weeks

Conclusion. Based on our checks, iPhone X supply remains tight, and we expect it to stay in short supply for the next 4-8 weeks. We continue to expect the iPhone X to reach supply-demand equilibrium sometime in January. That's a slight positive for the outlook for Apple's Mar-18 quarter.

10% In-Store Availability. iPhone X supply is tight in U.S. Apple Stores, but availability is improving. We’re monitoring iPhone X availability at U.S. Apple Stores daily, as well as global lead times. As of Friday morning (Nov 10th), 10% of iPhone X SKUs are available at Apple Stores in the U.S. for same day pick-up. Methodology: we checked 139 of the 271 Apple Stores in the U.S. across the four major carriers.
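For illustration, the 10% figure is a straightforward tabulation across every store/carrier/SKU combination we check. Below is a minimal sketch in Python; the store names, SKUs, and availability flags are hypothetical placeholders, not our actual dataset:

# Hypothetical sketch: tabulate same-day pickup availability across checks.
# Each entry is one store/carrier/SKU combination; the data is illustrative.
checks = [
    ("Apple Fifth Avenue", "Verizon", "iPhone X 64GB Silver", True),
    ("Apple Fifth Avenue", "Verizon", "iPhone X 256GB Space Gray", False),
    ("Apple Michigan Avenue", "AT&T", "iPhone X 64GB Silver", False),
    # ... one row per store/carrier/SKU combination checked
]

available = sum(1 for _, _, _, in_stock in checks if in_stock)
print(f"{100 * available / len(checks):.0f}% of checked SKUs available for same-day pickup")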

Below is the Verizon data from our in-store checks; data for the other three carriers is not shown.

Online Lead Times Unchanged. As of Friday morning (Nov 10th), we observed that global iPhone X lead times were unchanged from Sunday, Nov 5th. Prior to Sunday, we had measured an improvement to a 3-4 week lead time from 5-6 weeks on Oct 31st.

Sample of 1: Pre-order Delivery Time Moved Up by 3 Weeks. Steve Van Sloun from Loup ordered an iPhone X 64GB Space Gray through Verizon on October 27th at 5AM PT and was quoted a 5-week (Dec 2nd) shipping window. Apple has since moved up the delivery date to Nov 13th, 3 weeks earlier than expected.

Nvidia Weighs In on Timing for Seismic Tech Shifts

Nvidia beat expectations today, posting revenue of $2.64B ($2.36B est.), up 32% y/y, and EPS of $1.33 ($0.94 est.), up 60% y/y. In addition, Nvidia is raising its quarterly cash dividend 7% to $0.15 per share.

What we learned about the size and timing of seismic shifts in tech. Today, Nvidia's bread-and-butter business is data centers and gaming, but the company will evolve to become the hardware foundation beneath AI, autonomy, and cryptocurrencies.

Size. To put the significance of this shift into perspective, CEO Jensen Huang shared on the call:

“I happen to believe that everything that moves will be autonomous some day.” – Jensen Huang

As evidence of everything moving to autonomy, the company reported that DHL is using its Drive PX chips for autonomous light trucks. Separately, the company outlined why in-car infotainment is going to become an important market in the future. As drivers become passengers, their actions inside a car will change, increasing the need for living-room-quality mobile entertainment.

Timing. Separately on the call, Nvidia offered its perspective on the timing of these upcoming seismic tech shifts:

  • Expect robotaxis in late 2019 or early 2020.
  • Consumer Level 5 fully autonomous vehicles on the road by late 2020 or 2021.
  • Largely absent from the earnings call was talk about the VR opportunity, suggesting Nvidia sees ESports gaming as a bigger opportunity in the near-term. This observation does not dampen Loup Ventures’ optimism around VR’s long-term potential.

CPUs passé, GPUs the future. As a starting point, Nvidia is a GPU company. For years, Moore's Law as applied to CPUs has been the yardstick of computing capacity. Jensen mentioned the well-documented breakdown of Moore's Law multiple times on tonight's call, given his belief that it is coming to an end as CPU performance improvements plateau. Nvidia believes GPU improvement will replace CPU improvement as the measure of computing capacity, giving the company an open-ended growth opportunity in the years to come.

The plot will soon thicken as Intel tries to catch up in GPUs. Intel has been a laggard over the past year, with INTC shares up 37% compared to NVDA shares up 203%, illustrating investors' optimism around Nvidia's 5-year opportunity. However, Intel isn't letting Nvidia run away with the GPU market; this week it hired AMD's GPU head, Raja Koduri, to help establish Intel as a player in the GPU space.

Eight Fun Facts About Computer Vision

Our experience of the world is intensely visual. Researchers suggest over half of our brain power is devoted to processing what we see. We talk a lot about how artificial intelligence will transform the world around us, automating physical and knowledge work tasks. For such a system to exist, it's clear that we must teach it to see. This is called computer vision, and it is one of the most basic and crucial elements of artificial intelligence. At a high level, endowing machines with the power of sight seems simple: just slap on a webcam and press record. However, vision is our most complex cognitive ability, and machines must not only be able to see, but understand what they are seeing. They must be able to derive insights from the entirely new layer of data that lies all around them and act on that information.

Despite being an important driver of innovation today, computer vision is little understood by those outside of the tech world. Here are a handful of facts that help put some context around what computer vision is and how far we’ve come in developing it.

1.)  Computer scientists first started thinking about vision roughly 50 years ago. In 1966, MIT professor Seymour Papert gave a group of students an assignment to attach a camera to a computer and describe what it saw, dividing images into "likely objects, likely background areas, and chaos." Clearly, this was more than a summer project, as we are still working on it half a century later, but it laid the groundwork for what would become one of the fastest growing and most exciting areas of computer science.

2.)  While computer vision (CV) has not reached parity with human ability, its uses are already widespread, and some may be surprising. Scanning a barcode, the yellow first down line while watching football, camera stabilization, tagging friends on Facebook, Snapchat filters, and Google Street View are all common uses of CV.

3.)  In some narrow use cases, computer vision is more effective than human vision. Google’s CV team developed a machine that can diagnose diabetic retinopathy better than a human ophthalmologist. Diabetic retinopathy is a complication that can cause blindness in diabetic patients, but it is treatable if caught early. With a model that has been trained on hundreds of thousands of images, Google uses CV to screen retinal photos in hopes of earlier identification.
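As an illustration of what's involved, here is a minimal sketch in Python (PyTorch) of a binary image classifier of the sort used for this kind of screening. To be clear, this is not Google's model: the architecture, image size, and training loop are illustrative placeholders, and the "retinal photos" below are random tensors.

import torch
import torch.nn as nn

# Minimal sketch: a small convolutional network that labels an image as
# positive/negative. Architecture and sizes are illustrative, not Google's model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1),  # assumes 224x224 input images
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch of 8 images standing in for retinal photos.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice, the heavy lifting is the labeled dataset of hundreds of thousands of expert-graded images, not the training loop itself.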

4.)  One of the first major industries being transformed by computer vision is an old one you might not expect: farming. Prospera, a startup based in Tel Aviv, uses camera tech to monitor crops and detect diseases like blight. John Deere just paid $305M for a computer-vision company called Blue River, whose technology can identify unwanted plants and douse them in a focused spray of herbicide, eliminating the need to coat entire fields in harmful chemicals. Beyond these examples, there are countless aerial and ground-based drones that monitor crops and soil, as well as robots that use vision to pick produce.

5.)  Fei-Fei Li, head of Stanford's Vision Lab and one of the world's leading CV researchers, compares computer vision today to children. Although computers can "see" better than humans in some narrow use cases, even small children are experts at one thing – making sense of the world around them. No one tells a child how to see; they learn through real-world examples. If we think of a child's eyes as cameras, they take a picture every 200 milliseconds (the average interval between eye movements). So by age 3, a child will have seen hundreds of millions of pictures – an extensive training set for a model. Seeing is relatively simple, but understanding context and explaining it is extremely complex. That's why over 50% of the cortex, the surface of the brain, is devoted to processing visual information.
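For the curious, the arithmetic behind "hundreds of millions" is easy to check. Here's a minimal sketch; the figure of 12 waking hours per day is our assumption:

# Rough arithmetic behind "hundreds of millions of pictures by age 3."
# Assumes ~12 waking hours per day; one eye movement every 200 ms.
frames_per_second = 1 / 0.2          # one "picture" every 200 milliseconds
waking_seconds_per_day = 12 * 3600   # assumed 12 waking hours per day
days = 3 * 365                       # first three years of life

images = frames_per_second * waking_seconds_per_day * days
print(f"~{images / 1e6:.0f} million images")  # ~237 million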

6.)  This thinking is what led Fei-Fei Li to create ImageNet in 2007, a database of tens of millions of images that are labeled for use in image recognition software. That dataset is used in the ImageNet Large Scale Visual Recognition Challenge each year. Since 2010, teams have put their algorithms to the test on ImageNet's vast trove of data in an annual competition that pushes researchers and computer scientists to raise the bar for computer vision. Don't worry, the database includes 62,000 images of cats.

7.)  Autonomous driving is probably the biggest opportunity in computer vision today. Creating a self-driving car is almost entirely a computer vision challenge, and a worthy one – 1.25 million people die each year in auto-related accidents. Aside from figuring out the technology, there are also questions of ethics, like the classic trolley problem: should a self-driving vehicle alter its path, killing or injuring its passengers, to save a greater number of people in its current path? Lawyers and politicians might have to sort that one out.

8.)  There's an accelerator program specifically focused on computer vision, and we're excited to be participating as mentors. Betaworks is launching Visioncamp, an 11-week program dedicated to 'camera-first' applications and services, starting in Q1 2018. Betaworks wants to "explore everything that becomes possible when the camera knows what it's seeing."

We’re just scratching the surface of what computer vision can accomplish in the future. Self-driving cars, automated manufacturing, augmented and virtual reality, healthcare, surveillance, image recognition, helpful robots, and countless other spaces will all heavily employ CV. The future will be seen.

Robot Fear Index: 30.9

Like many in the tech space, we believe robotics is changing the nature of work; however, public perception of robots is still a question mark. We developed our Robot Fear Index to measure and track the average consumer’s perception of robots. We asked over 500 US consumers about topics ranging from their use of robots at home to their comfort level with self-driving cars. Then we distilled the data down to an index value that we will publish regularly. An index value of 100 suggests widespread and extreme fear of robots; an index value of 0 suggests minimal fear of robots.
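To illustrate how survey answers become a single number, here is a minimal sketch of index construction in Python; the questions, scales, and responses below are hypothetical placeholders, not our actual scoring model:

# Hypothetical sketch: collapse survey answers into a 0-100 fear index.
# Each 1-10 answer (10 = most fearful) is normalized to 0-1, averaged across
# questions and respondents, then scaled to 100.
respondents = [
    {"self_driving_fear": 7, "home_robot_fear": 2, "job_loss_fear": 5},
    {"self_driving_fear": 4, "home_robot_fear": 3, "job_loss_fear": 8},
    # ... one dict per respondent
]

def fear_score(answers):
    # Map each 1-10 answer onto 0-1 and average across questions.
    return sum((a - 1) / 9 for a in answers.values()) / len(answers)

index = 100 * sum(fear_score(r) for r in respondents) / len(respondents)
print(f"Robot Fear Index: {index:.1f}")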

Robot Fear Index: 30.9. Consumer adoption of artificial intelligence and robotics is already quite broad, and yet fear of robots is also pervasive. We fear that they'll take our jobs or somehow overthrow us; and to be blunt, those fears are valid. That said, our 2017 survey indicates acceptance of these technologies continues to grow. Our most recent Robot Fear Index value of 30.9 (vs. 31.5 in late 2016) suggests that public perception of robots is essentially unchanged over the last year despite increased awareness of artificial intelligence, robotics, and the potential impact of these technologies. Notably, the related increase in media coverage of these issues does not seem to be causing the rise in fear that we might expect. In fact, the slight year-over-year decline in our index value suggests slightly less fear of automation technologies.

Survey Demographics. Of the 433 US consumers who responded to our 2017 Robot Fear Survey, 54% were male and 46% were female. Our survey population was also equally weighted across all age demographics, as shown in the exhibits below.

Use of Digital Assistants Growing Slowly. We continue to see digital assistants as an onramp to AI and robotics for many consumers. Our 2017 survey shows 69% of consumers have used a digital assistant (Siri, Google Assistant, Alexa), and roughly one-third use a digital assistant once a day or more, in line with our results last year. When asked how many digital assistants they own, 21% of consumers said one, while 14% indicated more than one.

Comfort with Robots Is Up Slightly. We believe comfort with AI is driving comfort with robotics. We asked consumers to rate, on a scale of 1-10 (1 being the most comfortable), how comfortable they are with using robots in a range of settings, including house cleaning (robot vacuums), healthcare (surgical procedures), and travel (self-driving cars). We were encouraged to see that 7 of the 8 categories we track saw a modest increase in comfort levels around robotics.

Domestic Robot Adoption a Large Catalyst. We believe consumer awareness of robotics is closely correlated with the rise of domestic robots within households. Domestic robots are classified as robot vacuum cleaners, mops, and lawn mowers, and over the next 10 years we believe this category will be one of the fastest-growing robot markets in the world. Our data shows that 75% of US consumers have yet to buy a household robot. Although we do not have the historical data for y/y comparisons, last week iRobot, a leading robotic vacuum and floor-mopping company, reported better-than-expected Q3 results and raised its FY17 revenue guidance for a third consecutive quarter (see note here). Given iRobot's results, we believe the domestic robot market is seeing strong adoption both domestically and internationally.

What Is Keeping Consumers From Using Robots? Many consumers have not yet adopted AI or robotic technology. When asked what has kept them from using robots, 41% of consumers (36% in 2016) said they are just not interested, while 29% (21% in 2016) believe robots are too expensive. That said, it was encouraging that only 6% of consumers avoid robots because robots make them nervous, down from 11% in 2016. We believe one of the bigger fears around AI and robotics is the risk of job loss. When asked when AI and robotics will cause significant job loss, 27% said within 5 years, 31% said within 10 years, and 24% anticipate significant job loss within 20 years. The remaining 17% of consumers do not believe robots will ever take our jobs.

Bottom Line. Following our 2017 Robot Fear Index survey, we believe consumer fear of robots is essentially unchanged, despite growing awareness of the potential risks of automation. We think our index value of 30.9 quantifies this cautious comfort with robots, and we look forward to updating the Robot Fear Index regularly as we track the progress of the robotics theme.

iPhone X Sold Out; Online Lead Times Improve

Sold Out. As expected, it appears iPhone X is sold out at U.S. Apple Stores. This weekend we monitored iPhone X availability at U.S. Apple Stores, as well as global lead times. On Saturday afternoon we found a handful of Silver 64GB iPhone Xs available at Apple Stores in the U.S. for same day pick-up. By Sunday evening (Nov 5th), all 139 of the 271 U.S. Apple Stores we checked were sold out. We expect iPhone X to be in tight supply for the next 4-8 weeks.

Online Lead Times Improved. As of Sunday evening, we observed that global iPhone X lead times have improved to 3-4 weeks from 5-6 weeks on Oct 31st. This 2-week improvement is slightly better than we had expected, given we anticipated only a 1-week improvement by November 5th. We continue to expect iPhone X to reach supply-demand equilibrium sometime in January.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.
