Defining the Future of Human Information Consumption

Human evolution depends on an ever-increasing rate of information creation and consumption. From communication to entertainment to education, the more information we create and consume, the stronger our society as a whole. Communication enhances community. Entertainment encourages creativity. Education builds knowledge. All of these elements build on top of one another like an upside-down pyramid, each new layer a little bigger than the prior. It’s no coincidence that the Information Age of the last several decades has marked both the greatest period of information creation and consumption and, arguably, the greatest period of human progress.

The explosion of information created in the Information Age came with the tacit understanding that more information was good. The volume of information available to us is unprecedented, so the firehose is beneficial even if some (or perhaps much) of the water misses its target. Each advancement in the devices that convey information has brought the firehose closer and closer to us: from the non-portable PC, to the semi-portable laptop, to the nearly-omnipresent smartphone, to the emerging omnipresent wearable.

Now we’re continuously drowning in information.

The average global consumer spends 82 hours per week consuming information. Assuming an average of seven hours of sleep per night, this means that 69% of our waking hours are engaged in consuming information. For many consumers in developed markets, that number is likely closer to 100%. Few people ever fully disconnect from information sources. Even when we’re not in front of a screen, the nearest one is always in our pocket, or there’s music or a podcast playing in the background. As a result, we consume almost 90x more information in terms of bits today than we did in 1940, and 4x more than we did less than twenty years ago.
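The arithmetic behind the 69% figure is easy to check. A quick sketch in Python, assuming the stated seven hours of sleep per night:

```python
# Sanity-check the share of waking hours spent consuming information,
# given 82 hours/week of consumption and 7 hours of sleep per night.
hours_consuming_per_week = 82
waking_hours_per_week = (24 - 7) * 7  # 17 waking hours x 7 days = 119

share = hours_consuming_per_week / waking_hours_per_week
print(f"{share:.0%}")  # -> 69%
```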

Source: Carat, Loup Ventures

Not only are we maxing out time available to spend with information, we’re creating so much information that it’s impossible to keep up. The ratio of the amount of information consumed per year by the average person to the amount created per year in total is the lowest it’s ever been by our analysis. There’s always more information to consume, and the trend toward autonomous systems is only going to amplify that.

Source: Loup Ventures

These two observations don’t paint a very positive picture of our relationship with information or our prospects for continued evolution, but in every challenge there’s an opportunity. We’re now preparing to exit the Information Age and enter two separate eras: The Automation Age and the Experiential Age. The Automation Age represents the natural continuation of the Information Age where artificially intelligent systems act on the vast amounts of information produced and quantity continues to dominate. The Experiential Age will define how the human relationship with information changes, creating new paradigms for communication, education, and entertainment. Out of evolutionary necessity, we will demand information with greater relevance, density, and usefulness than ever before. Meaning and efficiency, not quantity, will be the metrics on which we will measure information for human use over the next several decades.

Reintroducing Meaning + Information

Our current relationship with information is measured in quantity. Internet speeds are calculated in bits per second, as are hard drive speeds. Processor speeds are based on how many instructions they can perform per second, an instruction being a manipulation of bits. We even talk about how many messages we process a day, whether email, text, Snaps, etc. as a derivative of bits. We expect all of these quantitative measurements to improve over time, so we can consume more information faster.

One of the reasons we think about information in quantity is because of how information theory evolved. Claude Shannon developed information theory, built on work from Boole, Hartley, Wiener, and many others (upside down pyramid), to establish a method for accurately and efficiently transferring information in the form of a message from creator to receiver. His work on information theory yielded the bit and created the basis for modern computing. As Shannon developed his theory, he realized that the meaning of a message (the semantic problem) was irrelevant for establishing the efficient transfer of the characters that created the message, and therefore focused on the quantifiable aspects of the message instead.

It’s time to reintroduce meaning to information.

All meaning is established as a consumer’s interpretation of the creator’s intent (i.e. the creator’s version of meaning). The meaning of the same picture or movie or article may differ from person to person, even if the creator of the message might intend a single meaning. Since all interpreted meaning is some derivative of intended meaning, to contemplate “meaning”, you must contemplate both sides.

To help demonstrate the flow of meaning, it’s helpful to overlay the transfer of meaning on Shannon’s diagram from his groundbreaking piece on information theory:

The flow of meaning is analogous to the flow of information as diagrammed by Shannon in “A Mathematical Theory of Communication”

A creator (human or machine) intends meaning in some message, which is conveyed to the consumer (human for our purposes) for interpretation via information transmission channels. A message is a container of information, which can be digital (e.g. a file) or physical (e.g. a book).

All intentionally transmitted messages must have some intended meaning, or the creator wouldn’t be able to conceive the message. Even a blank message is a reflection of the creator’s current mindset. Likewise, a message of random characters might imply boredom, or that a machine was programmed to send it, or that it was a byproduct of some machine-learned behavior. Messages transmitted by accident are exempt from this rule.

Since every message must be created with some initial meaning, all messages must therefore be interpreted because there is some underlying meaning to contemplate. A blank message might be interpreted as a mistake or a sign of trouble, a scrambled message might be a pocket dial, etc. This interpretation establishes the meaning from the message to the consumer.

In the process of transferring meaning, holistic noise impacts the interpretation of the intended meaning. It represents the natural disturbance between what a message creator intends and what the message consumer interprets. Holistic noise is created by differing contexts, that is, differing experiential, emotional, and physiological states between the creator and consumer.

Exploring meaning is a philosophical abyss — its precise characterization is engaging to discuss, but extremely hard to define. Nonetheless, the definition we’ve arrived at thus far is sufficient for establishing information utility, and further debate would distract us from the focus of defining utility.

Meaning/Time — The New Measure of Information

Thanks to Shannon, the quantity of information we can transfer is no longer a gating factor in the human consumption of information; the quantity we can process is. Therefore, the amount of meaning we derive from a message is now paramount, as is the time it takes to consume the message. To this end, we propose that the value of a message in the Experiential Age should not be measured in bits at all, but by meaning divided by time. We call this measure information utility:

U = m / t: the utility (U) of information is equal to the value of the meaning interpreted by the consumer of the information (m) divided by the time it takes to consume the information (t).
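As a minimal sketch, the measure can be expressed in a few lines of Python. The meaning scores here are hypothetical stand-ins for a consumer’s innate scale, not real measurements:

```python
def information_utility(meaning: float, time_spent: float) -> float:
    """Utility U = m / t: interpreted meaning per unit of consumption time."""
    return meaning / time_spent

# Hypothetical scores on one consumer's innate scale (time in minutes):
long_article = information_utility(meaning=8.0, time_spent=10.0)  # 0.8 per minute
short_video = information_utility(meaning=4.0, time_spent=2.0)    # 2.0 per minute
# The less meaningful message can still carry higher utility per minute
```

The point of the sketch is the ratio, not the absolute numbers: a message with modest meaning delivered quickly can out-rank a richer message that demands far more time.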

Of course, meaning is abstract. How could you ever quantify it?

The closest we may be able to get is to borrow from marginal utility in economics. Just as we all have some innate scale with which we rank the value of consumer goods and that scale theoretically determines where we spend our money, we have a similar innate scale relative to the meaning of messages that should determine where we spend our time. We therefore measure meaning by time to help determine where best to spend our scarce asset of time just as we measure marginal utility by money to determine where best to spend our money.

Any individual’s scale for valuing meaning will likely be different, but that doesn’t matter. It only matters that we all have a scale that can consistently be used to compare the “meaningfulness” of the meaning we consume. In other words, we interpret the meaning of a message and then rank the perceived satisfaction or benefit we got from consuming it on an innate scale based on the aggregate of our prior experiences and expectations therefrom.

In reality, the scale is likely to be fuzzy (ordinal) rather than precise (cardinal). Most of us in any given moment wouldn’t be able to say message A is 11.7625% more meaningful than message B, but we would be able to say message A has more meaning than message B. That said, an information consumer can assign theoretical cardinal values to different messages based on the fuzzy scale, just as in marginal utility. Those cardinal values can be used to create a mathematical determination of information utility for the consumer.
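To make the ordinal-to-cardinal step concrete, here is a small sketch in Python. The ranking, cardinal values, and consumption times are all illustrative assumptions:

```python
# A fuzzy ordinal ranking ("A is more meaningful than B, which is more
# meaningful than C") mapped to theoretical cardinal values, as in
# marginal utility analysis. All numbers are illustrative assumptions.
ordinal_ranking = ["message_A", "message_B", "message_C"]  # most to least meaningful

# Assign descending cardinal values consistent with the ordinal ranking
cardinal = {msg: float(len(ordinal_ranking) - i)
            for i, msg in enumerate(ordinal_ranking)}

# Hypothetical time (in minutes) each message takes to consume
consumption_time = {"message_A": 4.0, "message_B": 1.0, "message_C": 0.5}

# Information utility U = m / t per message
utility = {msg: cardinal[msg] / consumption_time[msg] for msg in ordinal_ranking}
# The most meaningful message is not necessarily the highest-utility one
```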

Relevance and Compression: Improving Information Utility

The next logical question is: How do we improve information utility?

The underlying intent of a message will always be the greatest factor in utility because all interpreted meaning (m) is a derivative of intent. A message with minimally useful or irrelevant intent will rarely be interpreted to have great meaning and will thus usually have minimal utility. The answer may seem to be to improve the intent of the message; however, if you change the intent of a message, you change the message entirely, which is like telling the message creator to just say something else, ignoring their desire to convey their intended meaning.

Improving information utility should not rely on changing the intent of a message; instead, it should focus on matching a consumer’s need or want with a creator’s intent and then optimizing the conveyance of that intent. This leaves us with two core opportunities to improve utility: relevance and compression. Relevance ensures that a consumer is engaging with useful messages, while compression enhances the presentation of a specific message to increase its meaningfulness. To extend our firehose analogy: relevance improves the hydrant the firehose is attached to, while compression distills the water itself.

One may argue that relevance is just another measure of meaning since, by definition, for a message to have meaning, it must be relevant to the consumer. However, to argue that relevance is meaning assumes that it can only be applied after message consumption. The problem is that once a message has been consumed, its determined relevance (and by transference, its meaning m) is immutable for that given point in time; utility cannot be changed after the fact. Therefore, leveraging relevance to improve utility must happen prior to message consumption.

We define relevance as the reduction of the total pool of potential messages available for consumption (the consideration set) prior to consumption.

Relevance reduces the information consideration set, compression enhances the information itself

Today, relevance is optimized largely by data companies like Google or Facebook. These platforms reduce some large consideration set of messages that may be relevant to the user into something more focused to help reduce the user’s time seeking meaning. The more these systems know about our preferences, the more relevant information they can funnel to us. Of course, this creates a double-edged sword related to privacy, but all technology comes with tradeoffs. While it seems the data wars have been won by the information giants, the privacy wars have just begun. The hidden opportunity here may be for some platform to put privacy first and balance the careful line between keeping consumer data safe while still presenting the most relevant information.

Less time searching for relevant information means more meaning per unit of time, because less useful messages are removed from the consideration set before they consume attention. The time saved can be spent consuming other relevant information, creating greater aggregate utility for the time spent. Note that our version of relevance does not influence m/t for any specific message, only the set of messages considered; only compression can do that.
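A toy sketch of this effect in Python. The utility scores, consumption times, and time budget are illustrative assumptions:

```python
# Relevance filters the consideration set *before* consumption, so a fixed
# time budget is spent only on higher-utility messages. Scores are assumed.
messages = [  # (utility m/t, time to consume in hours)
    (4.0, 1.0), (0.5, 1.0), (3.0, 1.0), (0.2, 1.0), (2.5, 1.0),
]
time_budget = 3.0  # hours available for consumption


def total_meaning(msgs, budget):
    """Consume messages in order until the time budget runs out."""
    meaning, spent = 0.0, 0.0
    for utility, t in msgs:
        if spent + t > budget:
            break
        meaning += utility * t  # meaning recovered = (m/t) * t
        spent += t
    return meaning


unfiltered = total_meaning(messages, time_budget)    # 4.0 + 0.5 + 3.0 = 7.5
relevant = [m for m in messages if m[0] >= 1.0]      # relevance filter
filtered = total_meaning(relevant, time_budget)      # 4.0 + 3.0 + 2.5 = 9.5
# Same time budget, same per-message m/t values, more aggregate meaning
```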

We define compression as a direct improvement to a given message that either enhances the meaning intended by the creator and/or decreases the time spent consuming it.

While the interpretation of the consumer is what ultimately establishes meaning for calculating utility, the compression of meaning depends on enhancing the intention of the creator and its consequent effect on the consumer’s interpretation. Compression doesn’t change the intent of a message, but makes it clearer and more useful, usually reducing holistic noise in the process. An example might be a set of written directions to a store vs a map. The underlying intent is the same, but the latter message should offer more utility provided the map is legible and accurate.

Historically, compression has happened via transitions to different information formats. With the advent of the written language, we went from the imprecise spoken word to the more precise written word. Writing a book or a letter is a form of compression relative to the spoken word because more thought and contemplation generally goes into writing vs discourse. Greater contemplation of the message should result in greater meaning of the message. More recently, the shift from written word to video is another example of relative compression. Video conveys relatively more meaning with less holistic noise than written words because the images help an information consumer see body language, emotion, and other factors that can be difficult to ascertain through words alone. This doesn’t mean that video is the perfect medium of transmission for every message — the written word can be effective for certain purposes — but there is relative compression when comparing writing as a message format vs video as a message format.

Compressing relatively more meaning into less time will be a prominent factor in the Experiential Age. If we can get more meaning in less time, we will spend more time with the technology that delivers that benefit. To that end, there are several emerging opportunities around compression:

Stacking Information Purposes. Content formats like GIFs and emoji combine communication and entertainment by replacing text. When a message creator picks a specific GIF or emoji to express themselves, it carries not only the underlying intended message, but an added element of emotion and entertainment. These added elements not only increase the meaning transferred, but also reduce the holistic noise between the message creator and consumer. Most obviously, when someone sends a GIF of a shaking fist or a coffin emoji, they probably aren’t seriously mad.

Platform-Enforced Time Restriction. While some content platforms, like Instagram and Twitter, are moving away from time restriction, placing an artificial cap on the length of a message will always compress meaning. When an information creator is forced to deliver his or her message in a limited space, like a 140-character tweet or a six-second Vine (maybe now v2), it demands creativity and precision that result in greater meaning for the consumer. This incurs an additional cost of time for the creator to make their message succinct but still meaningful; however, the burden of quality in a message is always on the creator, and perhaps many messages aren’t worth the time to send in the first place.

Consumer-Enforced Time Restriction. In some cases, information consumers put the restrictions on the time they’re willing to spend consuming a creator’s message. This is most apparent in the shift to digital assistants. When an assistant answers a consumer’s query, the consumer is not willing to listen to a painfully long list of ten possible answers. They expect one answer: the right answer. In this sense, the consumer shifts the burden of time he or she previously spent deciding the best answer from a set of options back to the creator that established the options in the first place (i.e. Google, Amazon, Apple, etc.). Relevance has always been important in the consumer Internet, but increasingly so here. Companies with extremely large customer datasets are the only ones able to play in this game because data allows the digital assistant companies to infer the context of the consumer and tailor messages appropriately.

Information Layering. The emergence of augmented reality (AR) has enabled the concept of information layering. Now a picture or video can be enhanced by stickers or lenses that add additional information related to the original purpose of the content, most commonly entertainment today. The layered information acts somewhat similarly to stacking information purposes in reducing holistic noise by adding context. As AR continues to evolve, the layered information can become more educational or instructional. For example, AR can compress knowledge about how to do something from a book or 2D video that suffered from the friction of transferring that meaning to the real world, to overlaying that meaning on the real world itself; another example of reducing the holistic noise between creator and consumer and improving meaning in time.

Scarcity. By limiting the number of times someone can send a message or components of a message, content platforms can create artificial scarcity similar to a time restriction. In this case, the restriction applies to the meaning side of the equation. If a message creator chooses to leverage a limited resource in a message (other than time), it should confer greater meaning to the consumer because it comes at an opportunity cost to the creator. An example might be some of the frameworks emerging around the ownership of unique digital goods, which could extend into transference of meaning through those goods. When a creator includes a one-of-a-kind component to a message, the intended meaning increases, which should also increase interpreted meaning.

Time Manipulation. Since the majority of content we consume is digital, much of it can be sped up or slowed down according to the consumer’s preference (i.e. videos, audiobooks, podcasts, etc.). Time manipulation requires no additional effort on the side of the creator; it only requires the consumption tool to enable it. While time manipulation techniques are effective on an individual level, they’re less interesting than the other opportunities because they’re optional to the consumer where the others are common across all consumers.
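A quick sketch of the effect on m/t, under the idealized assumption that the meaning score is fully retained at faster playback speeds:

```python
# Speeding up playback leaves the intended meaning untouched but shrinks t,
# raising m / t for the individual consumer. The meaning score is an
# illustrative assumption, and we idealize away any comprehension loss.
meaning = 6.0            # hypothetical meaning score for a podcast episode
duration_minutes = 60.0  # nominal length at 1x playback

utility_at_speed = {
    speed: meaning / (duration_minutes / speed)
    for speed in (1.0, 1.5, 2.0)
}
# Doubling the playback speed doubles the utility of the same episode
```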

Brain-Computer Interface (BCI). The development of brain-computer interfaces will, in theory, create a direct way to receive information from and transmit information to the brain. A deeper understanding of how we create meaning from a neurological perspective is embedded in this future, thus BCI should reduce holistic noise and enhance meaning. An example may be that a message transmission mechanism could understand the contextual state of both the message creator and consumer and alter the message as appropriate in an effort to match states. While BCI is exciting technology, it has a long clinical path that is just beginning before we can consider using it for meaningful augmentation.

It bears repeating that the message creator’s intent is paramount to utility. A useless message is still useless no matter how you dress it up. However, a more compressed version of a given message should always be more useful than a less compressed one given the same baseline intent, so we should strive to compress messages as much as possible.

The current opportunities in compressing information should be relevant for the next 5-10 years or more, particularly in the case of BCI. That said, the aforementioned concepts and technologies compress information relative to how we consume it today. As they become the standard for future information transmission, the bar must move correspondingly higher, and we will need to find new ways to compress information even more. Therefore, while the human desire for increasing information utility is eternally valid, how we compress more meaning into less time must always evolve.

Embracing Information Utility

It’s popular to lament our overexposure to information, ever-shortening attention spans, and the superficiality of what we create and consume, but technology has always been a Pandora’s Box. Now that we’ve gone down the path of near-constant connectivity, it’s impossible to walk it back. While it can be beneficial to spend less time with information when possible, we have to embrace the path we’ve created; embrace the unending flow of information. Discovering ways to leverage new content formats and technologies to transmit useful information is our future. Demanding more meaning from the information we choose to spend time with is the next evolutionary step for humanity.

Improvements in relevance will continue to be driven by technology, but as we think about the opportunities in compression, it’s clear that utility will not be driven purely by hardcore technologists, but also by hardcore creatives. Injecting more creativity into how we convey meaning, something that is unique to the human experience, makes up some of the biggest opportunities in improving information utility. After all, the humans that consume information determine its utility, not robots.

Some people fear the coming Automation Age will devalue humanity, removing purpose and meaning from our lives. Luckily, we have this survival instinct — this innate need for more information, more knowledge. That curiosity means there will always be a purpose for us. As automation frees us from the constraints of labor, the Experiential Age will bring about a golden era of exploring what it means to be human, and information utility will guide the way.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Amazon Go Extends Amazon’s Dominance to Brick and Mortar

After our first visit to Amazon Go, Amazon’s automated retail store in Seattle, we’re not surprised to hear the company has plans to open up to six more cashierless convenience stores later this year.

Our experience was flawless, leaving us increasingly confident that Amazon is best positioned to own the operating system of automated retail. Eventually, we expect Amazon to make this technology available to other retailers, as they have with Fulfillment by Amazon (FBA) and Amazon Web Services (AWS), expanding their dominance into brick and mortar.

The $50B automated retail opportunity. In 2016 there were 3.5 million cashiers in the U.S., according to the Department of Labor, with an average salary of $13,574, according to Data USA. That makes for a nearly $50 billion opportunity in cashierless retail that Amazon is well positioned to attack. Of those 3.5 million cashiers, 323,000 are convenience store or gas station employees, or 9% of the cashier workforce. The automated retail space is getting more and more crowded, but the Go store suggests that Amazon has the pole position.
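The market-size arithmetic above can be checked in a couple of lines:

```python
# Reproducing the market-size arithmetic stated above
cashiers = 3_500_000   # U.S. cashiers in 2016 (Department of Labor)
avg_salary = 13_574    # average cashier salary (Data USA)

market = cashiers * avg_salary
print(f"${market / 1e9:.1f}B")  # -> $47.5B, i.e. "nearly $50 billion"

convenience_share = 323_000 / cashiers
print(f"{convenience_share:.0%}")  # -> 9% of the cashier workforce
```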

Why we think Amazon will license the Go technology. Just as Amazon did with FBA and, to a lesser extent, AWS, Amazon is initially building a backend infrastructure for its own use with Amazon Go. And just like FBA and AWS, that infrastructure gets more valuable as it scales. The Amazon Go backend gives the company a trojan horse into the brick and mortar retail space, clearly an area of interest given the Whole Foods acquisition. Perhaps the more critical question is why a retailer would work with Amazon. Our answer is the same as it is with all of Amazon’s best offerings: convenience. Retailers would have a turnkey solution for automated retail. While larger stores like Walmart and Target may not want to use the technology for competitive reasons, branded retail stores (like a Nike store) may be a fit if Amazon can create a product that helps save the retailer labor and processing costs.

The Amazon Go experience. Amazon Go builds on the company’s core competence of convenience by automating the store with no cashiers or checkout lanes. Scan your phone on entry, grab your items, exit. In one test we bought a can of La Croix in 23 seconds. It felt like two parts magic and one part theft.

A few observations from our visit:

  • No cashiers, but lots of employees: mostly chefs assembling the prepared food, one ID checker in the beer and wine section, a greeter/security guard, and a few stockers replenishing shelves, bags, and plasticware.
  • Signage with instructions everywhere: download the app, scan your phone here, just walk out; clearly, there is a great deal of consumer education at work.
  • Quickly builds trust: By my second or third trip, I was certain that the store was correctly capturing my items as I grabbed and replaced them throughout a visit.
  • Felt more like a tourist destination than a convenience store: most shoppers were taking pictures or video inside the store.
  • All about speed: Signage, taglines, the Just Walk Out Technology, the app, even the receipt all focused on the trip time (my record: 23 seconds for 1 item).
  • No chat: I never spoke to anyone or interacted with a person during several visits to the store.
  • Don’t linger: I found a seat at a nearby Starbucks (notable) where I could jot down my observations after visiting the Go store. There were 10 people in line at Starbucks, waiting to order, and another 5 people waiting where 8 baristas behind a waist-high coffee bar called out customer names and handed them personalized cups of coffee. It was a stark contrast to the can of La Croix I had just grabbed off the shelf at Amazon Go and paid for via the magic of cameras and the internet.

We envision the future of retail in three categories: 1. online retail, 2. automated retail (e.g., Amazon Go), and 3. empathic retail (personalized services based on mutual understanding or empathy; more here). Amazon has already won the online space, and Amazon Go could prove to be the operating system of automated retail. We’re bullish on the empathic retail space partly because it’s outside of Amazon’s core competency (convenience), leaving room for others to succeed.


Key Questions on the Evolving Future of Transportation

Advancements in self-driving car technology will eventually result in full-scale autonomous transportation. Considering the level of investment from deep-pocketed tech and auto companies and the caliber of human capital that accompanies it, the space has become “too big to fail.” This note explores three key questions we’re working through as we consider the autonomous future:

  • What will an automaker of the future look like?
  • What will the future of transportation look like for consumers?
  • Who is going to win in the future of transportation?

What will an automaker of the future look like?

In order for an automaker to succeed in the transition to autonomy, we see three core competencies: manufacturing capability, autonomous systems, and services.

Manufacturing Technology

Despite all the work going into and the hype around autonomous systems, expertise in manufacturing cars can’t be overlooked. We’ve seen the challenges Tesla has had scaling their production. Technology companies are at a significant deficit here and will likely rely on partnerships with traditional auto to bring a product to market.

Tesla is trying to solve the manufacturing problem on its own. Elon Musk said, “The biggest epiphany I’ve had this year is that what really matters is the machine that builds the machine, the factory, and that this is at least two orders of magnitude harder than the vehicle itself.”

To tackle this problem, Tesla has made acquisitions in the manufacturing space and has chosen to develop software and sensors in-house. We’ve written a lot about Tesla’s efforts (and shortfalls) in manufacturing the Model 3 at scale. We think they’ll get there.

Autonomous Systems

Software is the brains behind autonomous vehicles. This is both the most complex element and where the true value lies in autonomy. The winner in this space will have a good chance at owning the operating system of the car.

A few notable investments in this space: GM’s acquisition of Cruise, Ford’s investment in Argo AI, and Delphi’s acquisition of NuTonomy. Autonomous software investments are typically the largest in the space. We expect this trend to continue as traditional automakers, who already possess manufacturing skill, attempt to acquire or partner with the tech that will keep them relevant as the industry transitions.

Sensors are the eyes and ears of autonomous vehicles. We break the sensor category into LiDAR, radar, and cameras. Most autonomous solutions today require all three, but Tesla thinks it can reach full autonomy without LiDAR.

Many auto manufacturers and tech companies have made hardware acquisitions. Above and below are some of the investments that major auto companies have made in autonomous software and sensor companies:


Services

A significant part of current automakers’ revenue comes from servicing and maintaining the vehicles they have sold. As EVs and autonomy play out, and ride-hailing fleets reduce car ownership, these service revenues will need to be replaced by software services. Down the road, connected cars will resemble a platform much like a mobile device. Owning the operating system and/or providing software services through that OS could more than make up for lost maintenance revenue.

One of these services could be in-car entertainment. With steering wheels, and eventually the need for driver attention, going away, the interior of a car will look much different. Seating arrangements and space will not resemble the current layout, but more importantly, we’ll be free to spend our time differently while in transit.

Tech companies will all be vying for the opportunity to provide in-car entertainment to consumers. Similar to smartphones today, there will be those that own the operating system (Apple, Google) and those that build on top of it to deliver content (Netflix, Snapchat). Outside of these opportunities, companies will also leverage the connected car platform to deliver targeted advertisements to riders. Imagine being prompted with a coupon for Starbucks while on your way to work. Companies will be able to target individuals with location-based advertisements far more easily than through smartphones.

What will the future of transportation look like for consumers?

There are three themes that will impact what the future of auto will look like for consumers. We’ve written in-depth about these topics here: Auto Outlook and Detroit Auto Show.

Electric vehicles will be prevalent. Electric vehicles account for only ~1% of all vehicles on the road today, but we expect them to reach 35% by 2030. As battery technology improves, range anxiety decreases for consumers. We’ve also learned that EVs can be fast.

Cars will drive themselves. Today, 99.9% of all vehicles have little to no automation. By 2040, 90% of vehicles sold will have Level 4 or 5 autonomy. Our transportation experience won’t change dramatically until autonomy becomes more prevalent.

Car ownership will decrease, giving way to more ride-hailing. Today, the average household has 2.0 cars. We think that over the next 15 years, this number could fall to 1.25 cars per household and, longer term, decrease even further. While some individuals may not like the idea of giving up ownership of a vehicle, there are plenty of benefits. For starters, people would no longer have to pay for car insurance, worry about maintenance, or store a vehicle, and those of us in less favorable climates could stop scraping windows in the winter and worrying about parking during snow emergencies.

As ride-hailing networks become more reliable with autonomous vehicles, more people will be willing to decrease or give up household ownership of vehicles. Traditional auto and tech companies are making large bets on it, as outlined above.

Who is going to win in the future of transportation?

If the connected car is a platform like the smartphone, who will be the Apples and Googles of transportation? Waymo, Uber, and Tesla are early candidates for winning the operating system of the car, each taking its own unique approach. Waymo has focused on building autonomous systems first and will seek to launch or partner with a ride-hailing network second. Uber built a ride-hailing network first and is now racing to catch up in autonomy. Both will seek to partner with existing car manufacturers to produce vehicles. Tesla decided to manufacture vehicles first and is homing in on autonomy second, believing that a ride-hailing network is the last piece to fall into place.

There are plenty of other entrants that could compete in this space including OEMs, who have invested large amounts in autonomy and certainly have the manufacturing scale component already solved, as well as a host of tech companies that could provide autonomous systems or software services to manufacturers. The bottom line is that the value chain in the transportation industry is being disrupted, and the massive opportunity to capture value in an industry transition will create a number of new winners.

At this point, it’s clear that one winner will be the consumer. With access to more ubiquitous, clean, and affordable transportation without the burden of car ownership, mobility will be more accessible than ever.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

AR and VR: Living, Breathing Storytelling

Written by guest author Jesse Damiani, Founder and CEO at Galatea Design

Storytelling yesterday and in the future. For the past several millennia, our stories have lived in two dimensions. We translated our creative impulses into 2D formats, whether it was around a fire, painting, page, poster, motion picture, or video game. But with VR and AR, that’s all changing, and fast; it’s no exaggeration to say that we can’t even begin to grasp what the storytelling content of 2028 will look like. The irony, of course, is that this shift to spatial media just means we have to revert to our spatial understanding of the world, something we engage with every moment of every day, except now we’re no longer constrained by the physical laws of nature. The stories of the future are not just pieces of content; they are spaces for immersive experiences.

The “Narrative Potential” of space. Ask any architect, interior designer, or DIY home-renovator: every space tells its own story. Take the example of a library. When you walk into it, what’s communicated to you? Lots of carpeting muffles sound, and often, high ceilings dissuade us from speaking too loudly. Shelves of books, ample desks, and fluorescent lighting imply a place intended for scholarship. These embedded details drive us to make automatic assumptions about how to behave and what to expect—the “story” of that space in time. These perceptual opportunities constitute “Narrative Potentiality,” the chance for creators to fill the space with information that will kickstart our brains’ native storytelling impulses. If I, as a VR experience designer, seat you in front of a table where a vase is positioned precariously close to the edge, I’m tapping narrative potential by making you think about it falling and shattering around you.

The space is the story. In other words, in VR/AR, the space is the story. It won’t be long before most of the digital materials we currently conceive of as 2D exist as 3D spaces. What might your favorite website or social media page look like as a “real” space?

Living, breathing stories. Buckle up: it gets wilder. Our understanding of stories is rooted in linear storytelling—the model we’ve had since we invented storytelling sitting around the fire with each other. In this model, a teller projects the story, and audiences receive it. It has a beginning, middle, and end. Audience participation (listening) doesn’t impact the outcome in any significant way. Where we’re headed is toward participatory stories that we share with each other in real time—whether we’re talking about AR or VR.

It’s a shift from linearity into semi-linearity and non-linearity, from pre-written stories experienced from a remove to stories optimized for impromptu co-creation (using narrative potential). Think about it: when you show up at a wedding, you have a general sense of what’s going to happen, but the fun of it is the experience of it spontaneously unfolding around you, and your ability to participate and impact it. The memory you leave with is your story of that event. Everybody else has a story too, both similar to yours and altogether unique.

In VR, reality is the medium. A friend of mine put it best: “In VR, reality is the medium.” Science tells us that our brains are incredibly plastic; they have space to carry multiple, simultaneous realities in them. If you’re in a story experience with your mom and she’s a purple alien, you carry two versions of her in your head: human and purple alien. Of course, she’ll be playing with her new identity as a purple alien, so your understanding of her will have to expand to include this new information. The point is: it’s all up for grabs. Want to be able to control a third arm using your pinky finger or two winks? Want to bend the laws of physics? That’s the narrative potential that VR and AR open up for us. The best part is we’ll be sharing in them together.


Eight Fun Facts About Computer Vision

Our experience of the world is intensely visual. Researchers suggest over half of our brain power is devoted to processing what we see. We talk a lot about how artificial intelligence will transform the world around us, automating physical and knowledge work tasks. In order for such a system to exist, it’s clear that we must teach it to see. This is called computer vision, and it is one of the most basic and crucial elements of artificial intelligence. At a high level, endowing machines with the power of sight seems simple: just slap on a webcam and press record. However, vision is our most complex cognitive ability, and machines must not only be able to see, but understand what they are seeing. They must be able to derive insights from the entirely new layer of data that lies all around them and act on that information.

Despite being an important driver of innovation today, computer vision is little understood by those outside of the tech world. Here are a handful of facts that help put some context around what computer vision is and how far we’ve come in developing it.

1.)  Computer scientists first started thinking about vision about 50 years ago. In 1966, MIT professor Seymour Papert gave a group of students an assignment to attach a camera to a computer and describe what it saw, dividing images into “likely objects, likely background areas, and chaos.” Clearly, this was more than a summer project, as we are still working on it half a century later, but it laid the groundwork for what would become one of the fastest growing and most exciting areas of computer science.

2.)  While computer vision (CV) has not reached parity with human ability, its uses are already widespread, and some may be surprising. Scanning a barcode, the yellow first down line while watching football, camera stabilization, tagging friends on Facebook, Snapchat filters, and Google Street View are all common uses of CV.

3.)  In some narrow use cases, computer vision is more effective than human vision. Google’s CV team developed a machine that can diagnose diabetic retinopathy better than a human ophthalmologist. Diabetic retinopathy is a complication that can cause blindness in diabetic patients, but it is treatable if caught early. With a model that has been trained on hundreds of thousands of images, Google uses CV to screen retinal photos in hopes of earlier identification.
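Screening systems like this are typically judged on sensitivity (diseased eyes correctly flagged) and specificity (healthy eyes correctly cleared) rather than raw accuracy. A minimal sketch of how those two numbers are computed, using made-up labels and predictions rather than Google’s actual data:

```python
# Hypothetical retinal-screening results.
# Labels: 1 = referable diabetic retinopathy, 0 = healthy.
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

true_pos  = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
false_neg = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
true_neg  = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
false_pos = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)

sensitivity = true_pos / (true_pos + false_neg)  # fraction of diseased eyes caught
specificity = true_neg / (true_neg + false_pos)  # fraction of healthy eyes cleared
print(sensitivity, specificity)  # 0.75 and ~0.83 for this toy data
```

For a screening tool, high sensitivity matters most: a missed case means a treatable disease goes untreated.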

4.)  One of the first major industries being transformed by computer vision is an old one you might not expect: farming. Prospera, a startup based in Tel Aviv, uses camera tech to monitor crops and detect diseases like blight. John Deere just paid $305M for a computer-vision company called Blue River. Their technology is capable of identifying unwanted plants and dousing them in a focused spray of herbicide, eliminating the need to coat entire fields in harmful chemicals. Beyond these examples, there are countless aerial and ground-based drones that monitor crops and soil, as well as robots that use vision to pick produce.

5.)  Fei-Fei Li, head of Stanford’s Vision Lab and one of the world’s leading CV researchers, compares computer vision today to children. Although computers can “see” better than humans in some narrow use cases, even small children are experts at one thing: making sense of the world around them. No one tells a child how to see. They learn through real-world examples. Considering a child’s eyes as cameras, they take a picture every 200 milliseconds (roughly the interval between eye movements). So by age 3, the child will have seen hundreds of millions of pictures, which is an extensive training set for a model. Seeing is relatively simple, but understanding context and explaining it is extremely complex. That’s why over 50% of the cortex, the surface of the brain, is devoted to processing visual information.
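The back-of-envelope math behind that claim checks out, assuming roughly twelve waking hours per day (our assumption, not a figure from the talk):

```python
# One "picture" every 200 ms -> 5 pictures per second.
pictures_per_second = 1 / 0.2
waking_hours_per_day = 12                      # assumed
seconds_per_day = waking_hours_per_day * 3600  # 43,200 waking seconds
pictures_by_age_3 = pictures_per_second * seconds_per_day * 365 * 3
print(f"{pictures_by_age_3:,.0f}")  # 236,520,000 -> hundreds of millions
```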

6.)  This thinking is what led Fei-Fei Li to create ImageNet in 2007, a database of tens of millions of images that are labeled for use in image recognition software. That dataset is used in the annual ImageNet Large Scale Visual Recognition Challenge. Since 2010, teams have put their algorithms to the test on ImageNet’s vast trove of data in a competition that pushes researchers and computer scientists to raise the bar for computer vision. Don’t worry: the database includes 62,000 images of cats.
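The competition’s headline metric is top-5 error: a prediction counts as correct if the true label appears anywhere in the model’s five highest-ranked guesses. A minimal sketch of that scoring, with hypothetical labels rather than real ILSVRC submissions:

```python
def top5_error(true_labels, top5_predictions):
    """Fraction of images whose true label is absent from the top-5 guesses."""
    misses = sum(1 for truth, guesses in zip(true_labels, top5_predictions)
                 if truth not in guesses)
    return misses / len(true_labels)

# Hypothetical results for four images.
truths = ["cat", "dog", "car", "plane"]
guesses = [
    ["cat", "lynx", "fox", "dog", "wolf"],        # hit
    ["wolf", "fox", "dog", "cat", "coyote"],      # hit
    ["truck", "bus", "van", "train", "boat"],     # miss
    ["plane", "kite", "bird", "drone", "glider"], # hit
]
print(top5_error(truths, guesses))  # 0.25
```

On this metric, the winning entries went from roughly 25% error in 2011 to a few percent within five years, which is why the challenge became the field’s benchmark of progress.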

7.)  Autonomous driving is probably the biggest opportunity in computer vision today. Creating a self-driving car is almost entirely a computer vision challenge, and a worthy one — 1.25 million people die in auto accidents each year. Aside from figuring out the technology, there are also questions of ethics like the classic trolley problem: Should a self-driving vehicle alter its path into a situation that would kill or injure its passengers to save a greater number of people in its current path? Lawyers and politicians might have to sort that one out.

8.)  There’s an accelerator program specifically focused on computer vision, and we’re excited to be participating as mentors. Betaworks is launching Visioncamp, an 11-week program dedicated to ‘camera-first’ applications and services starting in Q1 2018. Betaworks wants to “explore everything that becomes possible when the camera knows what it’s seeing.”

We’re just scratching the surface of what computer vision can accomplish in the future. Self-driving cars, automated manufacturing, augmented and virtual reality, healthcare, surveillance, image recognition, helpful robots, and countless other spaces will all heavily employ CV. The future will be seen.
