Facebook Pushes Further Into AR

In an interview with Recode following Facebook’s F8 conference, Mark Zuckerberg laid out his rationale for Facebook’s big bet on augmented reality:

“Think about how many of the things you use [that] don’t actually need to be physical. You want to play a board game? You snap your fingers, and here’s the board game. You want to watch TV? You don’t need a physical hardware TV, you buy a one-dollar app ‘TV’ and put it on the wall.” – Mark Zuckerberg

To push towards this future – and in an attempt to own the underlying technology – Facebook launched its “Camera Effects Platform,” an open platform for developers to build AR features and lenses for the Facebook in-app camera. Zuckerberg also confirmed to Recode that Facebook is building “AR hardware” and shared his thoughts on the future of AR and VR; among them:

  • There will be demand for separate VR and AR products in the future.
  • The technology doesn’t yet exist to create the AR glasses that industry leaders are envisioning.
  • Building VR products today will help build the AR products of the future.
  • AR will be a bigger business than VR.

Our take: AR will enhance the smartphone, then replace it. It’s consensus that AR will be bigger than VR over at least the next 10 years, and we agree; within that time frame, AR will first enhance and then replace the smartphone. But if you look out further, perhaps 30+ years, the immersiveness of VR has the potential to be so good that it rivals base reality. That will require advances in both artificial intelligence and neuroscience, not just digital enhancement. If VR can create alternate worlds as rich as the real one, we think the opportunity would surpass anything humans have created to date.

Facebook gets it, and they are investing accordingly. In fact, the biggest players in the space will collectively spend over $51B on R&D in 2017, of which we estimate $4B will be AR-related spend.

From Google’s work on Glass (2013) and Tango (2014) to Microsoft’s investment in HoloLens to Apple’s uncharacteristically vocal pursuit of AR as a core technology, the biggest players are determined not to miss the next dominant computing platform and the AR technology underneath it. In our assessment, Facebook lags other incumbents, including Google, Apple, and Microsoft, on core AR technology. But Facebook has a foothold in social, and today AR is expanding through social – the most forward-thinking AR application is Snapchat. Everyone else is following fast, and F8 is a clear indicator that Facebook is doubling down on AR in the race to own the OS of the future.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Don’t Miss the Importance of Diminished Reality

Written by guest author Lindsay Boyajian, CMO at Augment 

Pairing augmented reality with diminished reality provides a superior visual experience and could help grow the AR market.

Augmented reality, virtual reality, and mixed reality are three realities that exist on the reality-virtuality continuum, and they are probably the three terms you have heard again and again. However, there is a fourth reality you probably haven’t heard of: diminished reality.

Diminished reality can be thought of as the opposite of augmented reality. Augmented reality (AR) enhances our reality by overlaying digital elements, like 3D models, on the physical world. By contrast, diminished reality (DR) subtracts from the physical world, removing unwanted objects from our view.
[Image credit: Karen E. Hamilton (CC BY-NC-SA 3.0)]

How does diminished reality enhance augmented reality?

Although DR doesn’t lie on the virtuality continuum, it can be used in combination with AR for a greater visual impact.

Let’s take the example of interior design. AR lends itself well to interior design because it allows us to try different pieces of furniture in our homes. Thanks to AR, we can see exactly how a new chair would fit and complement our existing space.

However, the space we are trying to redesign is often already crowded with old furniture. Placing the new chair in AR on top of the old chair doesn’t add much value; you can’t properly appreciate the new piece. If you first use DR to hide the old chair from view, then use AR to place the new chair in the seemingly empty space, the visual experience is far more useful for the end user.

This combination of augmented and diminished reality is referred to as mediated reality, a term attributed to MIT researcher Steven Mann in 1994. Mediated reality alters our perception of reality by adding and removing information in real time through a device such as a headset or smartphone.
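
To make the interior design example concrete, here is a minimal, hypothetical sketch of the two steps in Python with OpenCV: inpainting away a masked object (the DR step), then alpha-compositing a rendered model into the cleared space (the AR step). The function, file names, and coordinates are our own illustrations; real DR systems perform this per frame with object tracking and scene reconstruction rather than a single inpainting pass.

```python
# Illustrative sketch only: remove a masked object (diminished reality),
# then composite a rendered object into the cleared space (augmented reality).
import cv2
import numpy as np

def mediate_frame(frame_bgr, removal_mask, overlay_rgba, position):
    """Hide an unwanted object, then place a virtual one in the frame."""
    # Diminished reality: fill the masked region from surrounding pixels.
    diminished = cv2.inpaint(frame_bgr, removal_mask, inpaintRadius=5,
                             flags=cv2.INPAINT_TELEA)

    # Augmented reality: alpha-blend a pre-rendered RGBA object into the frame.
    x, y = position
    h, w = overlay_rgba.shape[:2]
    roi = diminished[y:y + h, x:x + w].astype(np.float32)
    rgb = overlay_rgba[:, :, :3].astype(np.float32)
    alpha = overlay_rgba[:, :, 3:4].astype(np.float32) / 255.0
    diminished[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    return diminished

# Hypothetical usage with placeholder files:
# frame = cv2.imread("room.jpg")
# mask = cv2.imread("old_chair_mask.png", cv2.IMREAD_GRAYSCALE)  # white = remove
# chair = cv2.imread("new_chair_render.png", cv2.IMREAD_UNCHANGED)  # RGBA render
# cv2.imwrite("mediated_room.jpg", mediate_frame(frame, mask, chair, (120, 340)))
```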

Read More

Don’t Write Microsoft Off

Typically, when we talk about the future of AR and VR, the first companies that come to mind are Apple, Google, Facebook, and Snapchat; however, Microsoft does not receive enough credit for the strong positioning it has already built.

As shown in our Jump Ball for the Next OS chart, Microsoft sits in third place behind Google and Apple in terms of elements necessary for a complete AR OS.

In the past week, Microsoft has made three important announcements that show the advances it’s making to better position itself as a key platform for the future of VR and AR.

Project Scorpio. Last week, Microsoft unveiled its final Xbox Project Scorpio specifications through Digital Foundry. Project Scorpio is a mid-generation console with 4K output and VR gaming capabilities. Gaming is one of the first areas where VR will have a big impact, and Microsoft is poised to benefit from it.

Of all of the companies vying to own the VR and AR platforms of the future, Microsoft is the only one with a gaming console. In January, Microsoft shared that it had reached 55 million monthly active users on its Xbox Live platform, up 15% from the previous year. The Project Scorpio console, set to be released this fall, is powerful enough to display VR content. Microsoft’s main competition in console gaming is Sony, which released an early VR system in November 2016. Sony has since announced that PlayStation VR had sold over 915,000 units as of late February. We view PlayStation VR as a step behind the HTC Vive and Oculus Rift, but ahead of smartphone-powered experiences.

While Microsoft doesn’t produce any VR hardware, it sells the Oculus Rift headset in its stores and has included the Xbox controller in Oculus Rift bundles. Oculus seems like the logical choice as a VR headset partner for Project Scorpio, but Microsoft shared that the console will also support Microsoft’s Mixed Reality headsets in 2018, including models manufactured by Lenovo, Dell, Acer, and HP.

Mixed Reality OS Support. Microsoft recently announced that its latest Windows 10 update, the Creators Update, will start rolling out to users on April 11th. This update includes support for Mixed Reality (MR) headsets. While this doesn’t mean much to consumers yet, since MR headsets won’t be available until the holiday season, developers who are about to receive their MR development kits can begin creating content and applications now. Providing developers with this early window should lead to high-quality MR content being available on day one of the MR headset releases.

It’s also important to remember that Microsoft is leading the way in MR hardware with the HoloLens. While there are improvements to be made, Microsoft has a commanding lead in the category, and its updates to Windows 10 will benefit HoloLens developers as well. We continue to view mixed reality as true augmented reality.

Sprinkles. Microsoft has also released a photo application for iOS called Sprinkles, a foray into AR on a mobile platform. Sprinkles gives users photo-editing tools, allowing them to add filters, stickers, and emoji. In addition, it uses facial recognition to position stickers and recommend celebrity look-alikes. The app is similar to Apple’s recently released Clips.
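
As a rough illustration of how this kind of feature can work (Sprinkles’ actual pipeline is not public), the sketch below detects faces with OpenCV’s bundled Haar cascade and anchors a placeholder RGBA sticker above each detected face. Names and files are hypothetical.

```python
# Hypothetical sketch: detect faces, then overlay a sticker above each one.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def add_sticker(photo_bgr, sticker_rgba):
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Scale the sticker to the face width and anchor it above the forehead.
        sticker = cv2.resize(sticker_rgba, (w, w))
        top = max(y - w // 2, 0)
        roi = photo_bgr[top:top + sticker.shape[0], x:x + w]
        sticker = sticker[:roi.shape[0], :roi.shape[1]]  # clip at image edges
        alpha = sticker[:, :, 3:4] / 255.0
        roi[:] = (alpha * sticker[:, :, :3] + (1 - alpha) * roi).astype("uint8")
    return photo_bgr

# Hypothetical usage with placeholder files:
# photo = cv2.imread("selfie.jpg")
# sticker = cv2.imread("crown_sticker.png", cv2.IMREAD_UNCHANGED)  # RGBA sticker
# cv2.imwrite("selfie_with_sticker.jpg", add_sticker(photo, sticker))
```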

While Microsoft clearly missed an opportunity in the shift to the mobile computing paradigm, its investments in AR and VR suggest it is intent on positioning itself as a strong player in the next computing paradigm.

Feedback Loup: Google Daydream

Google’s smartphone-powered VR platform, Daydream, represents the company’s most significant push to date in its effort to accelerate the adoption of VR. We’ve spent the last few weeks testing the platform with a Pixel phone and a Daydream View headset. Bottom line: Daydream isn’t there yet, but the platform establishes a solid foundation for the future of “low-immersion” VR.

We continue to believe that smartphone-powered, low-immersion platforms like Daydream, Samsung’s Gear VR, and Google Cardboard will drive the global VR user base above 100m by 2018, and we expect the vast majority of VR users to be on low-immersion VR over the next several years. Low-immersion platforms are the on-ramp to high-immersion platforms like the Oculus Rift and HTC Vive, so understanding them today is important for anticipating broader high-immersion use and the future of VR.

Hardware, software and content are all critical components for the future of VR, but our experience with Daydream left us feeling that content represents the biggest near-term opportunity to show the power of VR.

Hardware: Daydream is powered by Daydream-ready Android phones running the Nougat operating system. There are currently four Daydream-ready phones, including the Pixel, with (many) more on the way. After a month-long wait, we used a Pixel ($649) to test the Daydream platform. These phones pair with the Daydream View headset ($79), which is the best smartphone-powered VR headset we’ve ever used.

Unlike some headsets we’ve tried, Daydream View is wearable. The soft fabric and angled head strap are clear signs of thoughtful design, built for wearability. Plus, in what seems like an industry first, it’s even comfortable for users with glasses. Daydream View comes with a remote control that we found easy to set up and intuitive to use. The remote conveniently nests in the viewer when not in use. Sound can be heard directly from the phone’s speakers, but it’s more immersive to use the easily accessible headphone jack on the Pixel.

Read More

The Five Senses of Computing

The trend in computing towards more natural user interfaces is unmistakable. Graphical user interfaces have long been dominant, but machines driven by more intuitive inputs, like touch and voice, are now mainstream. Today, audio, motion, and even our thoughts are the basis for the most innovative computer-user interaction models, powered by advanced sensor technology. Each computing paradigm maps to one or more of the five human senses; exploring each sense gives us an indication of the direction in which technology is heading.

Sight – Graphical User Interface

The introduction of the graphical user interface (GUI) drove a step function change in computers as productivity tools, because users could rely heavily on sight, our dominant sense. The GUI was then carried forward and built on with the advent of touchscreen devices. The next frontier for visual user interfaces lies in virtual reality and augmented reality. Innovations within these themes will further carry forward the GUI paradigm. VR and AR rely heavily on sight, but combine it more artfully with other inputs like audio, motion, and touch to create immersive interfaces.

Touch – Touchscreen Devices

PCs leveraged basic touch as a foundational input via the keyboard and the mouse. The iPhone then ushered in a computing era dominated by touch, rejecting the stylus in favor of, as Steve Jobs put it, “the best pointing device in the world” – our fingers.  Haptics have pushed touchscreen technology further, making it more sensory, but phones and tablets fall well short of truly immersive computing. Bret Victor summarized the shortcomings of touchscreen devices in his 2011 piece, A Brief Rant on the Future of Interaction Design, which holds up well to this day.

More fully integrating our sense of touch will be critical for the user interfaces of the future. We think that haptic suits are a step we will take on the journey to full immersion, but the best way to trick the user into believing he or she is actually feeling something in VR is to manipulate the neurochemistry of the brain. This early field is known as neurohaptics.

Hearing – Digital Assistants & Hearables

Computers have been capable of understanding a limited human spoken vocabulary since the 1960s. By the 1990s, dictation software was available to the masses. Aside from limited audio feedback and rudimentary speech-to-text transcription, computers did not start widely leveraging sound as an interface until digital assistants began to be integrated into phones.

As digital assistants continue to improve, more and more users are integrating them into their daily routines. In our Robot Fear Index, we found that 43% of Americans had used a digital assistant in the last three months. However, our study of Amazon Echo vs. Google Home showed that Google Home answered just 39.1% of queries correctly vs. the Echo at 34.4%. Clearly we’re early in the transition to audio as a dominant input for computing.

Hearables, like Apple’s AirPods, represent the next step forward for audio as a user interface.

Read More