Your Data Is Worth Less Than You Think

A few weeks ago, we began analyzing what our Facebook data is really worth in order to debate the merits of a decentralized social platform where users get paid for their data. Our hypothesis was that our data isn’t as valuable as we think. Then Cambridge Analytica happened. So we asked 500 Facebook users how they think about their data and found that our data is, in fact, worth less than we think; at the same time, it’s priceless.

What’s Your Facebook Data Worth? In 2017, Facebook generated ~$19.5 billion in US ad revenue across an average of 237 million monthly active US users. That works out to $82.21 in ad revenue per active user, which is roughly what your data is “worth.”

(Note: Facebook would arguably make something on untargeted display ads even without your data, so the true number is slightly lower. We’ll ignore this point for simplicity’s sake.) However, it costs Facebook money to make money from our data. Half of the revenue the company generates goes to operating costs, and it then pays taxes and other expenses. Net of all that, every US user generated about $29.60 in profit for the company. Let’s arbitrarily assume Facebook pays out 70% of that net profit. That would be $20.72 in value to each US user.
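The back-of-the-envelope math above can be sketched in a few lines. All figures (including the 70% payout ratio, which the text labels arbitrary) come from the paragraphs above; small discrepancies are rounding:

```python
# Back-of-the-envelope value of a US Facebook user, using the figures above.
us_ad_revenue = 19.5e9        # 2017 US ad revenue
us_mau = 237e6                # average US monthly active users

revenue_per_user = us_ad_revenue / us_mau   # ~$82 per user per year

net_profit_per_user = 29.60   # per-user profit after costs and taxes (from the text)
payout_ratio = 0.70           # arbitrary assumption from the text
payout_per_user = net_profit_per_user * payout_ratio

print(round(revenue_per_user, 2))   # ~82.28 (the article rounds to $82.21)
print(round(payout_per_user, 2))    # 20.72
```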

In our survey, we asked consumers what they thought their Facebook data was worth per year. Only 27% answered between $0 and $25, while 41% said something between $0 and $100, the range that encompasses the top-line number of $82.21. Almost 44% thought their Facebook data was worth more than $500 per year. Most of us have a perspective on what our data is worth that differs significantly from reality, but that highlights the fact that it really isn’t about money; it’s about the sanctity of our data and the violation we feel when it’s used against our implied wishes.

Will We Use Data Management Tools? We also asked users, on a scale of 1-10, how likely they are to change how they use Facebook because of the data scandal, with 1 being “no change to how I use Facebook” and 10 being “delete my Facebook account.” The average score was a 4.9. While we don’t have a comparison to put that into perspective, the data point feels like a directional indicator that Facebook has serious work to do to regain user trust. It seems increasingly likely that Facebook may see some near-term challenges in user engagement.

We also asked users how much time they would spend using a tool from Facebook that gave them better control over their data, which the company recently announced. 39% said they wouldn’t spend any time managing their data preferences, while 33% said they’d spend less than 10 minutes and only 10% said they’d spend over 20 minutes. Anecdotally, given the number of controls in the privacy center now, I spent 5 minutes changing settings and didn’t even get through all the apps I had installed, let alone change any advertising controls. So despite our displeasure with how our data has been used, the majority of us aren’t interested in spending much more than a few minutes to fix it, if any time at all.

What’s the Solution? Giving us the ability to monetize our data isn’t a panacea, because we wouldn’t be happy with what we got for it. Neither is giving us more control over it, because most of us won’t do much with it. What users really want is to not have to think about managing their data in the first place. Users want to feel comfortable sharing anything and everything, as encouraged by Facebook, and they don’t want to have their privacy violated. This may seem like an illogical expectation given the desire to share everything, but the human psychology debate around data sharing is irrelevant. People want services that keep their best interests in mind and protect their data for them.

For a long time, Facebook’s most important asset was the network it built. Two billion plus monthly active users is difficult to replicate. Today, that’s no longer true. Facebook’s most important asset now is the trust of its users. The loss of user trust is the only true threat to the network. For Facebook to maintain user trust in the future, the company will likely have to consider changing how it does business. It might have to accept making even less off of our data than it does today.

Our data may not be worth as much as we think, but Facebook needs to protect it like it’s priceless.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Adrenaline Shots for Apple AI

  • Apple has been criticized for not doing enough in AI. Two recent announcements show the company is closing the gap.
  • In the past two weeks, the company has announced the hiring of Google’s AI head, and an AI partnership with IBM.
  • Google’s AI head (John Giannandrea) brings credibility to Apple AI, critical in recruiting, and will likely work on AI-powered interfaces and Apple’s self-driving car program.
  • The IBM partnership gives iOS developers access to IBM Watson’s enterprise machine learning, which they can use to build smarter AI apps.

Core ML 101. At WWDC 2017 Apple unveiled Core ML, a platform that allows developers to integrate machine learning into an app. The AI model runs locally on iOS and does not need the cloud. At the time of the announcement, Apple outlined 15 domains for which they have created ML models, such as face detection, text summarization, and image captioning.

IBM Watson and Apple announcement. Two weeks ago, Apple and IBM announced they will integrate IBM Watson with Apple’s Core ML. Previously, developers could convert AI models built on other third-party platforms, like TensorFlow (Google) or Azure ML (Microsoft), into Core ML and then insert that model into an iOS app. Now developers will be able to use Watson to build the machine learning model, convert it to Core ML, and then feed the data back to Watson’s cloud. This is important because it allows iOS developers to leverage Watson’s capabilities and ultimately improve the AI in iOS apps.

Watson works locally on iOS and improves apps. What’s unique about Core ML is that it runs locally on mobile devices, meaning it doesn’t have to send data back to a server. This differs from other mobile AI approaches. Running locally is an advantage when the speed of AI is important, as in image recognition for AR or natural language processing. What’s new is that Watson will be able to “teach” Core ML to run the AI model built with Watson. Basically, Watson does the hard work of building a usable AI model and then hands it to Core ML, which can then run the model locally on its own. The app can then send data on the model’s performance back to Watson, at any time, to be analyzed for available improvements.
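The train-in-cloud, run-on-device, report-back loop described above can be sketched schematically. Every function name below is an illustrative stand-in, not the actual Watson or Core ML API; the real workflow uses IBM’s Watson tooling and Apple’s conversion utilities:

```python
# Schematic of the cloud-train / on-device-run / report-back loop.
# All functions are hypothetical stand-ins, NOT real Watson or Core ML calls.

def train_model_in_cloud(training_labels):
    """Stand-in for training a model on a cloud ML service (e.g., Watson)."""
    # A trivially simple "model": always predict the mean of the training labels.
    mean = sum(training_labels) / len(training_labels)
    return {"predict": lambda x: mean}

def convert_to_on_device_format(model):
    """Stand-in for converting a cloud-trained model to an on-device format,
    analogous to exporting a model to Core ML."""
    return model  # in reality, this re-serializes weights for the device runtime

def run_locally(model, inputs):
    """On-device inference: no network round-trip needed per prediction."""
    return [model["predict"](x) for x in inputs]

def report_metrics_to_cloud(predictions, actuals):
    """Send performance data back to the cloud so the model can be improved."""
    errors = [abs(p - a) for p, a in zip(predictions, actuals)]
    return sum(errors) / len(errors)  # mean absolute error

cloud_model = train_model_in_cloud([1.0, 2.0, 3.0])
device_model = convert_to_on_device_format(cloud_model)
preds = run_locally(device_model, [10, 20])       # runs without the cloud
mae = report_metrics_to_cloud(preds, [2.0, 2.0])  # feedback for retraining
```

The design point is the split of responsibilities: heavy training happens in the cloud, inference happens on the device, and only performance telemetry flows back.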

Recent history of Apple and IBM. In July 2014, Apple and IBM partnered to create enterprise applications on iOS devices, leveraging IBM’s big data and analytics and Apple’s hardware-software integration. IBM started selling iPhones and iPads to clients that came with software and applications for enterprise designed with Apple’s help.

Summary of big tech’s machine learning services. 


Apple’s AI Coup

  • Apple has hired John Giannandrea who formerly served as Google’s head of AI and Search.
  • Given the industry’s shortage of AI talent, Giannandrea brings expertise along with credibility, critical in recruiting.
  • Giannandrea will likely be working on AI-powered interfaces that will replace the touchscreen and iOS, like augmented reality wearables. Separately, AI related to Apple’s self-driving car program (PAIL) will likely fall under Giannandrea.

What this means for Apple, recruiting more AI talent. It’s a win. Talent follows talent, and John Giannandrea will no doubt help build Apple’s AI brand and enhance future recruiting efforts. His shared vision on privacy is good news for a company that claims to be the vanguard of user security. In the meantime, Google will maintain its strength in AI, given it is still an “AI-first” company with tremendous AI and deep learning horsepower in its Google Brain and DeepMind teams. Jeff Dean, the founder of Google Brain, has taken over as head of Google’s AI department in a “reshuffling” that makes AI a more central part of the business. Will Google employees follow in Giannandrea’s footsteps? There will probably be a few, but the competition is fierce, and this will not be the last major AI trade.

Why did Giannandrea come to Apple? Most likely: projects, pay, and privacy. As one of the most senior experts in arguably the most in-demand field in the world, the conversation around compensation was probably short. Giannandrea may be given freedom to work on projects he is more passionate about and have the chance to build something new. In an email obtained by the New York Times, Cook praised Giannandrea, saying, “John shares our commitment to privacy and our thoughtful approach as we make computers even smarter and more personal. Our technology must be infused with the values we all hold dear.” That affinity for privacy may have steered him to Apple at a time when concerns have never been higher.

What will he do? It’s easy to think about how Google uses AI (search, image recognition, voice, etc.), but Apple’s use cases are more abstract. If you consider the user interfaces that will replace the touchscreen and iOS, like augmented reality wearables, it becomes clearer why AI is critical. Just as multi-touch was a core technology enabling the iPhone, AI will be a core technology enabling the operating systems of the future. For example, wearables like AR glasses or even AirPods will rely heavily on AI-driven functionality like image recognition, ambient listening, and smart notifications. In other words, these devices need to know what you want and when you want it. With our phones, we directly request the information we want when we want it; in the future of computing, AI will anticipate that information. We expect Giannandrea to address these opportunities as well as bolster Apple’s overall AI prowess, overseeing AI initiatives like Siri, Core ML, and the deliberately under-the-radar autonomy project.


Tesla Production: A Step in the Right Direction

  • While Tesla missed their original Model 3 production goal (exiting Mar-18 at 2,000 per week vs guidance of 2,500), the miss was not as severe as investors were expecting.
  • Importantly, Tesla doubled the Model 3 production run rate in Mar-18 over Dec-17 and is now 20% of the way to its goal of 10,000 Model 3s per week, which we expect in mid-2019. We expect Model 3 production to double again in the Jun-18 quarter to 4,000 vehicles per week.
  • Tesla addressed the investor cash concern, predicting they will not need to raise money in 2018.
  • While on a bumpy road, we believe Tesla remains exceptionally positioned for the future around EV, autonomy, and sustainable energy.
  • Tesla reiterated their goal of exiting Jun-18 at a run rate of 5,000 Model 3s per week. We are modeling for 4,000 per week.
  • To factor in Mar-18’s miss, we are lowering our 2018 Model 3 production estimate to 161k from 168k.

Perspective on expectations. Tesla continues to miss Model 3 production numbers. Mar-18 marks the third quarter in a row in which the company has failed to meet this important target. Investors are up in arms over these misses and have lost confidence in Tesla’s production guidance. While we share some of the same frustration, this hyperfocus on missing high, self-imposed production targets causes investors to miss the bigger story: the company is nicely ramping production of a car that is exceptionally difficult to produce and could potentially usher in global adoption of EVs.

Changes to our numbers. The table below outlines the changes to our production estimates.

Note: given the ~20% gap between Model 3 production and delivery numbers, we are now adjusting our Model 3 estimates to be driven by deliveries. This results in a ~20% reduction in Model 3 deliveries; however, there is little change in our Model 3 production numbers. No changes to S and X modeling methodology, given our model had already been driven by deliveries. Link to updated model here.

Expecting Profitability in Sep-20. We continue to model for Tesla to reach profitability in ten quarters. We will publish an updated cash flow model in the next week, but conceptually, we expect the company to be cash flow positive before Sep-20.



Seeing the Road Ahead: The Importance of Cameras to Self-Driving Vehicles

This is the third in a series of notes based on our deep dive into computer perception for autonomous vehicles. Autonomy is a question of when, not if. In this series, we’ll outline our thoughts on the key components that will enable fully autonomous driving. See our previous notes (Computer Perception Outlook 2030, What You Need To Know About LiDAR).

If a human can drive a car based on vision alone, why can’t a computer?

This is the core philosophy that companies such as Tesla believe and practice in their self-driving vehicle initiatives. While we believe Tesla can develop autonomous cars that “resemble human driving” primarily driven by cameras, the goal is to create a system that far exceeds human capability. For that reason, we believe more data is better, and cars will need advanced computer perception technologies such as RADAR and LiDAR to achieve a level of driving far superior to that of humans. However, since cameras are the only sensor technology that can capture texture, color, and contrast information, they will play a key role in reaching Level 4 and 5 autonomy and, in turn, represent a large market opportunity.

Mono vs Stereo Cameras. Today, OEMs are testing both mono and stereo cameras. Due to their low price point and lower computational requirements, mono-cameras are currently the primary computer vision solution for advanced driver assistance systems (ADAS). Mono-cameras can do many things reasonably well, such as identifying lanes, pedestrians, traffic signs, and other vehicles in the path of the car, all with good accuracy. The monocular system is less reliable in calculating the 3D view of the world. While stereo cameras can perceive the world in 3D and provide an element of depth perception thanks to their dual lenses, their use in autonomous vehicles could face challenges because it’s computationally difficult to find correspondences between the two images. This is where LiDAR and RADAR have an edge over cameras, and they will be used for depth perception applications and for creating 3D models of the car’s surroundings.

Camera Applications. We anticipate Level 2/3 and Level 4/5 autonomous passenger cars will be equipped with 6 – 8 and 10 – 12 cameras, respectively; most will be mono-cameras. These cameras will play a prominent role in providing nearly 360-degree perception and performing applications such as lane departure detection, traffic signal recognition, and park assistance.

$19B Market Opportunity. Given that cameras are the primary computer perception solution for advanced driver assistance systems (ADAS), the camera market is currently the largest computer perception segment, representing a $2.3B opportunity in 2017. While growing adoption of ADAS-enabled cars will continue to act as a near-term catalyst, adoption of fully autonomous vehicles (Level 4/5) equipped with 10+ cameras per car will be the tailwind taking this market to $19B by 2030 (18% CAGR 2017–2030). As displayed in the chart below, the bulk of sales will be in the form of mono-camera systems. Note our forecast is driven by our 2040 Auto Outlook and only includes passenger vehicles. Fully autonomous heavy-duty trucks will also leverage similar computer vision technology; factoring in these sales, the total camera market could be 1.5x–2x larger.
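As a sanity check on that growth rate, the implied CAGR from $2.3B in 2017 to $19B in 2030 works out as follows (using only the figures stated above):

```python
# CAGR check: $2.3B in 2017 growing to $19B by 2030 (13 years).
start, end, years = 2.3, 19.0, 2030 - 2017
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~17.6%, which the article rounds to 18%
```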
