Key Apple Supplier Reiterates Favorable Outlook Ahead of Fall Launch

  • Finisar (FNSR) is Apple’s 3rd largest VCSEL array supplier, accounting for about 5% of iPhone X VCSEL arrays.
  • Tonight the company reported earnings for the Apr-18 quarter.
  • Finisar continues to expect a ramp in VCSEL demand in their Oct-18 quarter, suggesting more iPhone models will incorporate Face ID and 3D sensing for AR.

Finisar is Apple’s 3rd largest supplier of VCSEL arrays, which enable 3D sensing applications such as facial recognition through the flood illuminator and dot projector on the front of the iPhone X. While VCSEL revenues were suppressed in the quarter due to seasonality, management’s commentary remains mostly in line with the previous quarter, with the company still expecting a ramp in demand from Apple in the Oct-18 quarter.

What they said. In line with management’s expectations, Finisar’s VCSEL revenue (about $5–7M in the quarter) was down sequentially as a result of seasonality. While demand for VCSELs is expected to be flat in the Jul-18 quarter, the company highlighted that it continues to anticipate a ramp in the Oct-18 quarter ahead of a key customer’s (Apple) new product launches. Finisar also highlighted that the new Sherman facility remains on track to go live in Fall 2018. Note this facility is the 700,000 square foot capacity expansion the company announced shortly before Apple awarded Finisar a $390M contract for future orders.

Read on iPhones. Finisar’s comments are in line with previous comments, so we are not making any changes to our iPhone estimates. That said, we believe management’s comments around an Oct-18 quarter ramp suggest that this fall Apple will introduce multiple new iPhones incorporating Face ID 3D sensing technology. In addition, with the Sherman facility going online this fall, we believe demand for VCSELs will accelerate into 2019 as more phones and other products integrate 3D sensing technologies. This, in turn, will be a key adoption driver for augmented reality (AR) applications.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

A New Hope: The Nervous System

Brave early adopters are the hallmark of an innovation set to create immense human value. In this story, the innovation is cutting-edge technology for interfacing with the nervous system, and the brave early adopter is Ian Burkhart.

Ian, a young man in his mid-twenties, suffered a C5 spinal cord injury (SCI) during a diving accident in 2010. The result was bilateral paralysis below his elbows. With an admirably positive attitude and a wonderfully helpful community, Ian was able to make modifications to his life and still live with a sense of meaning.

One of the primary factors motivating Ian forward through the years of rehabilitation was the strong hope that, at some point in his lifetime, there would be biomedical advances capable of giving him back use of his hands.

As it happened, such a possibility crossed his path.


In 2014, Ian was presented with an incredible but uphill opportunity led by Dr. Chad Bouton of Battelle (he’s now with the Feinstein Institute): a team of researchers was looking for a patient to test out a system for restoring movement to the hands of a paralyzed patient. The catch was that it required brain surgery and extensive training.

There are 5.4 million Americans living with some form of paralysis, and Ian is acutely aware of the challenges they face. But optimism is one of his defining characteristics: given the chance to pioneer a technology that could bring massive human value, he was willing to make a sacrifice. And so, in 2014, Ian risked his life by undergoing elective brain surgery to implant a manmade electrode array into the left motor cortex of his brain.

Over the four years since the operation, through hundreds of hours of tedious and extremely difficult training, Ian has collaborated closely with a team of researchers to improve the brain-machine interface system. Ian can now use the technology, which acts as a “bypass” to his damaged spinal cord, to play Guitar Hero using his right hand.

Image: PBS

The body has two information pathways, the vascular system and the nervous system. The typical conception of physiological medical care is that treatments should leverage the body’s vascular system, as in the case of pharmaceuticals; Ian’s system, in contrast, is based on the principle that medical care can be given by interacting with the nervous system. The names of the fields that embody this innovative approach are bioelectronic medicine (BEM) and brain-machine interfaces (BMI).

Brain-machine interfaces are technological systems that read from and/or write to the brain. Bioelectronic medicine systems are technological systems that read from or write to the nerves in your body (the peripheral nervous system). Both of these types of neural interfaces can provide therapeutic value by targeting medical interventions at the nervous system rather than just the bloodstream.

At Loup Ventures, we invest in domains of frontier technology that offer strong investment opportunities built on delivering unique human value-add. Bioelectronic medicine and brain-machine interfaces check both boxes with bold strokes. We’re here to explain these technologies and why you should care about them.

Starting with a story

Ian Burkhart was originally approved by the FDA to participate in a year-long trial using a Utah Array electrode implanted in his left motor cortex. When fully hooked up in the laboratory, the signals recorded from the Utah Array, a type of intracortical multielectrode array, are decoded by a neural decoding algorithm and used to control 130 surface electrodes wrapped around his right forearm (which he can’t move on his own). When current is run through the surface electrodes, it causes muscles in Ian’s forearm to contract in particular ways, ultimately controlling his hand movement. In other words, the researchers aim to understand what Ian’s neural activity looks like when he’s thinking about moving his hand. With this understanding, they can use his neural activity to decipher when he wants to move his hand. Then, knowing that he wants to move his hand, a computer program controls his muscles via the electrodes so he can actually move his hand. In essence, they skip right over the spinal cord injury responsible for his paralysis.
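The bypass described above reduces to a loop: record cortical activity, decode intent, stimulate the forearm. The sketch below is our own highly simplified illustration, not the actual research system; the linear decoder, the weights, the threshold, and every function name are hypothetical stand-ins.

```python
import numpy as np

# Highly simplified sketch of the "neural bypass" loop: record cortical
# activity, decode movement intent, then drive the forearm electrode sleeve.

def decode_intent(neural_activity: np.ndarray, weights: np.ndarray) -> float:
    """Map recorded cortical activity to an intended-movement score.
    A linear decoder is the simplest stand-in for the real algorithm."""
    return float(neural_activity @ weights)

def stimulation_pattern(intent_score: float, n_electrodes: int = 130) -> np.ndarray:
    """Translate decoded intent into per-electrode stimulation levels."""
    if intent_score <= 0.5:  # below threshold: no movement intended
        return np.zeros(n_electrodes)
    return np.full(n_electrodes, intent_score)  # uniform placeholder pattern

# One pass through the loop with toy data (96 channels, like a Utah Array)
rng = np.random.default_rng(0)
activity = rng.normal(size=96)
weights = rng.normal(size=96)
pattern = stimulation_pattern(decode_intent(activity, weights))
```

In the real system, the decoding algorithms are retrained frequently (as Ian describes below) and the stimulation patterns are far more structured than a uniform array.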


Ian and his team of researchers have been so successful that the FDA has extended the experiment multiple times; he’s now in his fourth year of participation.

Loup Ventures recently had the pleasure of talking with Ian about his experience and his outlook. Here’s some of what we learned (listen to one of our conversations with him on our Neurotech Podcast here):

  • Each day he uses the system, Ian has to retrain the algorithms that decode his neural activity. This is because the specific neural activation representing his thoughts changes frequently due to neural plasticity. When Ian first started with the BMI system, he would leave training sessions mentally exhausted. Over the years, however, it’s become much easier; training isn’t something he has to pay much attention to, now. This has implications for future BEM/BMI technologies: it will be critical to minimize training time and repetition and to make the necessary training as pleasant as possible for the user.
  • Ian has spent time learning about the neural decoding algorithms themselves, to the extent that he understands the different parameters the research engineers can tune. Now, when Ian is trying to make the BMI perform as well as it can, he can suggest that the researchers adjust specific parameters. This is similar, in a sense, to adjusting the speed of your mouse cursor on your computer screen. In general, this insight from Ian suggests that BEM/BMI systems ought to be designed like any piece of software, with a symbiosis between the user and the machine: the better the user can intuit how the machine will interpret them (and vice versa), the more effective the system will be. Ian insists that for mass adoption, the technology must be user-friendly; just because its value proposition (i.e. returned movement) is so large doesn’t mean that normal human-centric design considerations can be thrown out the door.
  • Even though Ian is very excited about the progress he’s making with the research system in a laboratory setting, he can’t take the system out of the lab yet—partly due to regulation and partly due to the unwieldiness of the technology.
  • Even amongst spinal cord injury patients, Ian sees resistance to the adoption of brain-machine interfaces. This paper has great data about which types of interfaces paralysis patients would be comfortable with, given specific types of benefits they can receive. For example, as compared to typing, controlling a cursor, and moving a robotic arm, patients were overwhelmingly most interested in using neural interfaces to control the movement of their own limbs.
  • Ian emphasized to us over and over again that brain-machine interfaces and bioelectronic medicine systems must be designed to provide as much value to the user as possible; this bears repetition since most systems are currently in the hands of research scientists and research engineers, not user experience designers.
  • We’re discussing a technology that plugs directly into a human brain; this conversation would be woefully incomplete without discussing ethics. Ian pushed this point, since he clearly sees the potential for good in neurotechnology, but acknowledges there’s also potential for bad. One example we think is particularly relevant given the current conversation about software platforms is user privacy. User privacy takes on a whole new meaning when the data isn’t just behavioral (like clicking on an ad), but neural. Through the talks he’s given and the other SCI patients he’s spoken with, Ian recognizes that there will be big pushback on technologies that interface with the nervous system, and we think the pushback can be healthy if addressed constructively. With current screen-based interfaces, there have clearly been both positives and negatives. In general, as technology becomes more directly able to “extract” information from, and “input” information into us, the privacy and digital wellbeing stakes get higher. Any foray into the therapeutic benefits of BMI/BEM—whether as an interested onlooker, an investor, a researcher, or a company—would be ill-advised without strong consideration of, and safeguards around, the ethical implications of the BMI/BEM product.

In general, we take inspiration from Ian’s conviction about the positive value of nervous system-based interventions, and we take direction from his emphasis on the necessity for BMI/BEM engineers to focus on the practical value of their products. In speaking about his work to improve his own system for movement restoration so that other patients can use it, he says, “I see my involvement with the study as a job that I have to succeed in.” He sees no other choice; the upside is just too large to ignore.

The general bioelectronic medicine approach

Now that we’ve discussed bioelectronic medicine and brain-machine interfaces from the perspective of a real user, we’ll describe these related domains and their benefits to conclude this introductory article. Although the fields of bioelectronic medicine and brain-machine interfaces speak to different use-cases (bioelectronic medicine involves the peripheral nervous system and treats or diagnoses disease in the body; brain-machine interfaces involve the central nervous system and address sensory deficits and cognitive/affective disorders), we’re going to discuss both under the header of “bioelectronic medicine” for the sake of brevity and comprehensibility.

At its highest level, bioelectronic medicine simply uses electronics to interface with the body where previously, chemicals/drugs would have been used. There are three basic components to any BEM system: a body, a device (either inside or outside of the human body), and a computer. In plain English, there are two general ways to use a BEM system:

  1. A device is used to record electrical or chemical signals from the body, and a computer interprets those signals in order to understand the state of the body or a specific system within it.
  2. A device is used to stimulate electrical (or other) signals within the body in order to have a desired therapeutic or enhancing outcome, and a computer controls the stimulation.

Some examples of BEM systems include: brain-machine interfaces for restoring movement to paralyzed patients, as in the case of Ian Burkhart; retinal prostheses to return vision to the blind; stimulating the vagus nerve to treat rheumatoid arthritis; and recording electrical signals from the vagus nerve to understand the state of the body’s inflammatory response.

Image: Tracey, 2007

The latter two use-cases exemplify an aspect of bioelectronic medicine that differs from central nervous system-focused brain-machine interfaces: bioelectronic medicine leverages reflexes. In this sense of the word “reflex,” we refer to the idea that the brain both keeps track of and controls organ function using peripheral nerves. This keeping track and controlling is a reflex to preserve homeostasis.

Regulatory and cost advantage

Aside from providing novel treatment and diagnostic opportunities that have yet to be achieved with pharmaceuticals, bioelectronic medicine products offer significant advantages over pharmaceuticals in terms of the FDA regulatory process. All medical devices and drugs are subject to regulation by the FDA. However, it takes between 30 and 50 months for a medical device (BEM devices classify as medical devices) to get through the FDA, as compared with 144 months for a drug. A direct correlate of this is that BEM device development is significantly less expensive than pharmaceutical development: a high-risk BEM device costs ~$100M to develop, whereas drug development can require on the order of ~$1B (although the exact cost is uncertain and fluctuates).


There are several frontiers of bioelectronic medicine: technological, scientific, and social/political. Technologically, devices to record and stimulate the nervous system need to be able to record and stimulate at smaller spatial scales and to operate wirelessly (without a wire running from the implanted device, out of the skin, and to a power source). Additionally, neural recordings will be fused with sensors under development that focus on biological signals such as the activity within individual cells. In terms of basic science, researchers need a better understanding and more detailed atlas of the peripheral nervous system, and are additionally working to characterize other nervous system reflexes within the body (“reflex” in the sense we described above). Finally, on the social/political front: although public forum discussion of BEM and BMIs is rather small right now, the conversation will likely pick up over the next ~5 years as ambitious interface companies like Neuralink, Kernel, and Paradromics hopefully bring progress to bear; almost certainly, the public conversation will focus on privacy and ethics.

Market size: understanding the prospects

To understand the massive market for therapeutic devices that interface with the nervous system, we’ll list six example opportunities and their respective market sizes:

  • Spinal Cord Injury – 280,000 patients in the US as of 2016, with a $2.3B global market as of 2017.
  • Alzheimer’s Disease – 5.7M patients in the US, with a $1.7B market in North America as of 2017.
  • Amyotrophic Lateral Sclerosis – 30K patients in the US, $16M market in the US as of 2018.
  • Stroke – 795,000 strokes per year in the US (144,000 deaths per year caused by strokes); $6.5B market in the US as of 2016.
  • Rheumatoid Arthritis – 1.5M patients in the US; $11.5B market in the US as of 2016.
  • Mental Health/Substance Abuse – 43.8M adults in the US experience mental illness annually; based on data through 2013, the US mental health and substance abuse services industry was expected to produce $53.2B in revenues in 2017.

Expanding beyond these specific use-cases, any new class of treatment and diagnostic methodology like bioelectronic medicine is likely to have effects that trickle down and impact a large portion of the $3.3 trillion US healthcare industry.

In Conclusion

In this piece, we’ve introduced the concept of bioelectronic medicine and brain-machine interfaces: providing therapeutic value to patients by interfacing with their nervous system instead of their bloodstream. We see huge opportunity in this space, driven by novel interfaces to the nervous system and new disease treatment paradigms. As we saw with Ian Burkhart, bioelectronic medicine and brain-machine interfaces are domains with enormous value to add, but whose development will be intricate and require careful ethical considerations. Stay tuned as we dive in-depth to understand the science, technology, and business of bioelectronic medicine.

Special thanks to Ian Burkhart for sharing with us his story and insights. Through the Ian Burkhart Foundation, he works to improve the lives of individuals with spinal cord injuries.


Tesla Reorg: Aligning Profit and Vision

  • Today, Tesla gave details on its previously announced company reorganization. The 9% workforce cut was more than the 5% reduction we were expecting.
  • We believe this reorg brings Tesla a measurable step closer to long-term sustainability.
  • Reading between the lines, there is now a higher probability that they will be profitable in the Sep-18 quarter.
  • We remain positive on the Tesla story given our belief that Model 3 will scale, and the company will achieve its mission of accelerating the world’s transition to sustainable energy.

Source: (Screenshot) Bloomberg

Critical production metric likely unchanged. Elon Musk addressed the elephant in the room in his letter to employees, clarifying that the cuts will not impact Model 3 production. This, of course, is the most important near-term metric to the story, even more so than cash.

Framing up the cost savings. Tesla currently employs about 37,000 people, which will be reduced to about 33,500. For starters, we expect a one-time charge of $130-$150M split between cash and stock, to be detailed on the Jun-18 earnings call. More importantly, the quarterly op-ex savings going forward should be about $80M ($320M annually). This is estimated using a $100,000 average salary and a 6-quarter average tenure. In the context of the company’s high cash burn rate, $80M per quarter may not sound like enough to have an impact, but as the next several months may decide the fate of the company, every dollar counts.
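As a back-of-the-envelope check (the $100,000 average salary is the assumption used above; actual savings depend on which roles were cut):

```python
# Rough check of the op-ex savings estimate using the figures cited above.
headcount_before = 37_000
headcount_after = 33_500
avg_salary = 100_000          # assumed average fully-loaded salary

employees_cut = headcount_before - headcount_after   # 3,500
annual_savings = employees_cut * avg_salary          # $350M gross
quarterly_savings = annual_savings / 4               # ~$87.5M

print(f"~${annual_savings / 1e6:.0f}M annually, ~${quarterly_savings / 1e6:.1f}M per quarter")
```

The gross figure lands slightly above the ~$80M per quarter estimate, consistent with a portion of the cuts falling below the assumed average salary.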

The Road to Profitability. Tesla previously said that they will be GAAP profitable in the second half of this year. Conventional wisdom suggests we should discount any of Musk’s predictions on timing, but given the magnitude of the reorg, it’s clear he is serious about reaching profitability. The Street is generally looking for GAAP profitability in early 2019. In rare form, Musk directly aligns the company’s mission with its ability to make money, saying, “we will never achieve [our] mission unless we eventually demonstrate that we can be sustainably profitable. That is a valid and fair criticism of Tesla’s history to date.” We believe this reorg will bring Tesla one step closer to profitability – and to achieving their mission.



Defining the Future of Human Information Consumption

Human evolution depends on an ever-increasing rate of information creation and consumption. From communication to entertainment to education, the more information we create and consume, the stronger our society as a whole. Communication enhances community. Entertainment encourages creativity. Education builds knowledge. All of these elements build on top of one another like an upside-down pyramid, each new layer built a little bigger on top of the prior. It’s no coincidence that the Information Age of the last several decades has marked both the greatest period of increased information creation and consumption as well as, arguably, the greatest period of human progress.

The explosion of information created in the Information Age came with the tacit understanding that more information was good. The volume of information available to us is unprecedented, so the firehose is beneficial even if some (or perhaps much) of the water misses its target. Advances in the technological devices that convey information have brought the firehose closer and closer to us: from the non-portable PC, to the semi-portable laptop, to the nearly omnipresent smartphone, to the emerging omnipresent wearable.

Now we’re continuously drowning in information.

The average global consumer spends 82 hours per week consuming information. Assuming an average of seven hours of sleep per night, this means that 69% of our waking hours are engaged in consuming information. For many consumers in developed markets, that number is likely closer to 100%. People rarely disconnect from information sources. Even when we’re not in front of a screen, the nearest one is always in our pocket, or there’s music or a podcast playing in the background. As a result, we consume almost 90x more information in terms of bits today than we did in 1940 and 4x more than we did less than twenty years ago.
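The 69% figure follows directly from the numbers above:

```python
# Reproducing the "69% of waking hours" figure.
hours_consuming_per_week = 82
sleep_hours_per_night = 7

waking_hours_per_week = (24 - sleep_hours_per_night) * 7   # 119
share_of_waking_hours = hours_consuming_per_week / waking_hours_per_week

print(f"{share_of_waking_hours:.0%}")   # -> 69%
```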

Source: Carat, Loup Ventures

Not only are we maxing out time available to spend with information, we’re creating so much information that it’s impossible to keep up. The ratio of the amount of information consumed per year by the average person to the amount created per year in total is the lowest it’s ever been by our analysis. There’s always more information to consume, and the trend toward autonomous systems is only going to amplify that.

Source: Loup Ventures

These two observations don’t paint a very positive picture of our relationship with information or our prospects for continued evolution, but in every challenge there’s an opportunity. We’re now preparing to exit the Information Age and enter two separate eras: The Automation Age and the Experiential Age. The Automation Age represents the natural continuation of the Information Age where artificially intelligent systems act on the vast amounts of information produced and quantity continues to dominate. The Experiential Age will define how the human relationship with information changes, creating new paradigms for communication, education, and entertainment. Out of evolutionary necessity, we will demand information with greater relevance, density, and usefulness than ever before. Meaning and efficiency, not quantity, will be the metrics on which we will measure information for human use over the next several decades.

Reintroducing Meaning + Information

Our current relationship with information is measured in quantity. Internet speeds are calculated in bits per second, as are hard drive speeds. Processor speeds are based on how many instructions they can perform per second, an instruction being a manipulation of bits. We even talk about how many messages we process a day, whether email, text, Snaps, etc. as a derivative of bits. We expect all of these quantitative measurements to improve over time, so we can consume more information faster.

One of the reasons we think about information in quantity is because of how information theory evolved. Claude Shannon developed information theory, built on work from Boole, Hartley, Wiener, and many others (upside down pyramid), to establish a method for accurately and efficiently transferring information in the form of a message from creator to receiver. His work on information theory yielded the bit and created the basis for modern computing. As Shannon developed his theory, he realized that the meaning of a message (the semantic problem) was irrelevant for establishing the efficient transfer of the characters that created the message, and therefore focused on the quantifiable aspects of the message instead.

It’s time to reintroduce meaning to information.

All meaning is established as a consumer’s interpretation of the creator’s intent (i.e. the creator’s version of meaning). The meaning of the same picture or movie or article may differ from person to person, even if the creator of the message might intend a single meaning. Since all interpreted meaning is some derivative of intended meaning, to contemplate “meaning”, you must contemplate both sides.

To help demonstrate the flow of meaning, it’s helpful to overlay the transfer of meaning on Shannon’s diagram from his groundbreaking piece on information theory:

The flow of meaning is analogous to the flow of information as diagrammed by Shannon in “A Mathematical Theory of Communication”

A creator (human or machine) intends meaning in some message, which is conveyed to the consumer (human for our purposes) for interpretation via information transmission channels. A message is a container of information, which can be digital (e.g. a file) or physical (e.g. a book).

All intentionally transmitted messages must have some intended meaning, or the creator wouldn’t be able to conceive the message. Even a blank message is a reflection of the creator’s current mindset. Likewise, a message of random characters might imply boredom, or that a machine was programmed to do it, or it was a byproduct of some machine-learned behavior, etc. Messages transmitted by accident are free of this rule.

Since every message must be created with some initial meaning, all messages must therefore be interpreted because there is some underlying meaning to contemplate. A blank message might be interpreted as a mistake or a sign of trouble, a scrambled message might be a pocket dial, etc. This interpretation establishes the meaning from the message to the consumer.

In the process of transferring meaning, holistic noise impacts the interpretation of the intended meaning. It represents the natural disturbance between what a message creator intends and what the message consumer interprets. Holistic noise is created by differing contexts, that is, differing experiential, emotional, and physiological states between the creator and consumer.

Exploring meaning is a philosophical abyss — its precise characterization is engaging to discuss, but extremely hard to define. Nonetheless, the definition we’ve arrived at thus far is sufficient for establishing information utility, and further debate would distract us from the focus of defining utility.

Meaning/Time — The New Measure of Information

Thanks to Shannon, the quantity of information we can transfer is no longer a gating factor to the human consumption of information; the quantity we can process is. Therefore, the amount of meaning we derive from a message is now paramount, as is the time it takes to consume the message. To this end, we would propose that the value of a message in the Experiential Age should not be measured by bits in any way, but by meaning divided by time. We call this measure information utility:

Utility (U) of information is equal to the value of the meaning interpreted by the consumer of the information (m) divided by the time it takes to consume the information (t): U = m / t.

Of course, meaning is abstract. How could you ever quantify it?

The closest we may be able to get is to borrow from marginal utility in economics. Just as we all have some innate scale with which we rank the value of consumer goods and that scale theoretically determines where we spend our money, we have a similar innate scale relative to the meaning of messages that should determine where we spend our time. We therefore measure meaning by time to help determine where best to spend our scarce asset of time just as we measure marginal utility by money to determine where best to spend our money.

Any individual’s scale for valuing meaning will likely be different, but that doesn’t matter. It only matters that we all have a scale that can consistently be used to compare the “meaningfulness” of the meaning we consume. In other words, we interpret the meaning of a message and then rank the perceived satisfaction or benefit we got from consuming it on an innate scale based on the aggregate of our prior experiences and expectations therefrom.

In reality, the scale is likely to be fuzzy (ordinal) rather than precise (cardinal). Most of us in any given moment wouldn’t be able to say message A is 11.7625% more meaningful than message B, but we would be able to say message A has more meaning than message B. That said, an information consumer can assign theoretical cardinal values to different messages based on the fuzzy scale, just as in marginal utility. Those cardinal values can be used to create a mathematical determination of information utility for the consumer.
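A minimal sketch of that determination, using invented cardinal meaning scores on a 0–10 scale (both messages and their scores are hypothetical):

```python
# Toy illustration of information utility U = m / t. The meaning scores
# are invented cardinal values on the consumer's innate (fuzzy) scale.

def utility(meaning: float, minutes: float) -> float:
    """Utility = interpreted meaning divided by consumption time."""
    return meaning / minutes

summary_utility = utility(meaning=8.0, minutes=5)   # a dense 5-minute read
video_utility = utility(meaning=9.0, minutes=60)    # a rambling 60-minute video

# The video carries more total meaning, but the summary has ~10x the
# utility, so a time-constrained consumer should prefer the summary.
```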

Relevance and Compression: improving Information Utility

The next logical question is: How do we improve information utility?

The underlying intent of a message will always be the greatest factor in utility because all interpreted meaning (m) is a derivative of intent. A message of minimally useful or irrelevant intent will rarely be interpreted to have great meaning, and thus will usually have minimal utility. The answer may seem to be to improve the intent of the message; however, if you change the intent of a message, you change the message entirely, which is like telling the message creator to just say something else, ignoring their desire to convey their intended meaning.

Improving information utility should not rely on changing the intent of a message and instead focus on matching a consumer’s need or want with a creator’s intent and then optimizing the conveyance of that intent. This leaves us with two core opportunities to improve utility: relevance and compression. Relevance ensures that a consumer is engaging with useful messages, while compression enhances the presentation of a specific message to increase meaningfulness. To extend our firehose analogy: relevance improves the hydrant that the firehose is attached to, while compression distills the water itself.

One may argue that relevance is just another measure of meaning since, by definition, for a message to have meaning, it must be relevant to the consumer. However, to argue that relevance is meaning assumes that it can only be applied after the message is consumed. The problem is that once a message has been consumed, its determined relevance (and by transference, meaning m) is immutable for that given point in time; utility cannot be changed after the fact. Therefore, leveraging relevance to improve utility must happen prior to message consumption.

We define relevance as the reduction of the total pool of potential messages available for consumption (the consideration set) prior to consumption.

Relevance reduces the information consideration set, compression enhances the information itself

Today, relevance is optimized largely by data companies like Google or Facebook. These platforms reduce some large consideration set of messages that may be relevant to the user into something more focused to help reduce the user’s time seeking meaning. The more these systems know about our preferences, the more relevant information they can funnel to us. Of course, this creates a double-edged sword related to privacy, but all technology comes with tradeoffs. While it seems the data wars have been won by the information giants, the privacy wars have just begun. The hidden opportunity here may be for some platform to put privacy first and walk the careful line between keeping consumer data safe and still presenting the most relevant information.

Less time searching for relevant information means more meaning per unit of time, because less useful messages are removed from the consideration set before they consume any of it. The time saved can be spent consuming other relevant information, creating greater aggregate utility for the time spent. Note that our version of relevance does not influence m/t for any specific message, only the set of messages considered; only compression can do that.
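A simple sketch shows how relevance improves aggregate utility without touching any individual message’s m/t. The scores and times are illustrative assumptions:

```python
# Toy model: relevance prunes the consideration set before consumption,
# so a fixed time budget is spent on higher-utility messages.
# Meaning scores and durations are illustrative assumptions.

def aggregate_meaning(messages, time_budget):
    """Consume messages in order, stopping when the time budget runs out."""
    total_meaning, spent = 0.0, 0.0
    for meaning, seconds in messages:
        if spent + seconds > time_budget:
            break
        total_meaning += meaning
        spent += seconds
    return total_meaning

pool = [(1.0, 30.0), (8.0, 30.0), (2.0, 30.0), (7.0, 30.0)]  # (meaning, seconds)

# Without relevance: consume in arrival order.
unfiltered = aggregate_meaning(pool, time_budget=60.0)        # 1.0 + 8.0 = 9.0

# With relevance: the platform surfaces the highest m/t messages first.
ranked = sorted(pool, key=lambda m: m[0] / m[1], reverse=True)
filtered = aggregate_meaning(ranked, time_budget=60.0)        # 8.0 + 7.0 = 15.0

print(unfiltered, filtered)
```

Each message’s individual m/t is unchanged; only the ordering of the consideration set differs, yet the same 60 seconds yields far more aggregate meaning.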

We define compression as a direct improvement to a given message that either enhances the meaning intended by the creator and/or decreases the time spent consuming it.

While the interpretation of the consumer is what ultimately establishes meaning for calculating utility, the compression of meaning depends on enhancing the intention of the creator and its consequent effect on the consumer’s interpretation. Compression doesn’t change the intent of a message, but makes it clearer and more useful, usually reducing holistic noise in the process. An example might be a set of written directions to a store vs a map. The underlying intent is the same, but the latter message should offer more utility provided the map is legible and accurate.

Historically, compression has happened via transitions to different information formats. With the advent of written language, we went from the imprecise spoken word to the more precise written word. Writing a book or a letter is a form of compression relative to the spoken word because more thought and contemplation generally go into writing than into discourse. Greater contemplation of the message should result in greater meaning of the message. More recently, the shift from written word to video is another example of relative compression. Video conveys relatively more meaning with less holistic noise than written words because the images help an information consumer see body language, emotion, and other factors that can be difficult to ascertain through words alone. This doesn’t mean that video is the perfect medium of transmission for every message — the written word can be effective for certain purposes — but there is relative compression when comparing writing as a message format vs video as a message format.

Compressing relatively more meaning into less time will be a prominent factor in the Experiential Age. If we can get more meaning in less time, we will spend more time with the technology that delivers that benefit. To that end, there are several emerging opportunities around compression:

Stacking Information Purposes. Content formats like GIFs and emoji combine communication and entertainment by replacing text. When a message creator picks a specific GIF or emoji to express themselves, it carries not only the underlying intended message, but an added element of emotion and entertainment. These added elements not only increase the meaning transferred, but also reduce the holistic noise between the message creator and consumer. Most obviously, when someone sends a GIF of a shaking fist or a coffin emoji, they probably aren’t seriously mad.

Platform-Enforced Time Restriction. While some content platforms, like Instagram and Twitter, are moving away from time restriction, placing an artificial cap on the length of a message will always compress meaning. When an information creator is forced to deliver his or her message within a tight constraint, like a 140-character tweet or a six-second Vine (maybe now v2), it demands creativity and precision that result in greater meaning for the consumer. This incurs an additional cost of time to the creator to make their message succinct but still meaningful; however, the burden of quality in a message is always on the creator, and perhaps many messages aren’t worth the time to send in the first place.

Consumer-Enforced Time Restriction. In some cases, information consumers put the restrictions on the time they’re willing to spend consuming a creator’s message. This is most apparent in the shift to digital assistants. When an assistant answers a consumer’s query, the consumer is not willing to listen to a painfully long list of ten possible answers. They expect one answer: the right answer. In this sense, the consumer shifts the burden of time he or she previously spent deciding the best answer from a set of options back to the creator that established the options in the first place (i.e. Google, Amazon, Apple, etc.). Relevance has always been important in the consumer Internet, but increasingly so here. Companies with extremely large customer datasets are the only ones able to play in this game because data allows the digital assistant companies to infer the context of the consumer and tailor messages appropriately.

Information Layering. The emergence of augmented reality (AR) has enabled the concept of information layering. Now a picture or video can be enhanced by stickers or lenses that add additional information related to the original purpose of the content, most commonly entertainment today. The layered information acts somewhat similarly to stacking information purposes in reducing holistic noise by adding context. As AR continues to evolve, the layered information can become more educational or instructional. For example, AR can compress knowledge about how to do something from a book or 2D video that suffered from the friction of transferring that meaning to the real world, to overlaying that meaning on the real world itself; another example of reducing the holistic noise between creator and consumer and improving meaning in time.

Scarcity. By limiting the number of times someone can send a message or components of a message, content platforms can create artificial scarcity similar to a time restriction. In this case, the restriction applies to the meaning side of the equation. If a message creator chooses to leverage a limited resource in a message (other than time), it should confer greater meaning to the consumer because it comes at an opportunity cost to the creator. An example might be some of the frameworks emerging around the ownership of unique digital goods, which could extend into transference of meaning through those goods. When a creator includes a one-of-a-kind component to a message, the intended meaning increases, which should also increase interpreted meaning.

Time Manipulation. Since the majority of content we consume is digital, much of it can be sped up or slowed down according to the consumer’s preference (i.e. videos, audiobooks, podcasts, etc.). Time manipulation requires no additional effort on the side of the creator; it only requires the consumption tool to enable it. While time manipulation techniques are effective on an individual level, they’re less interesting than the other opportunities because they’re optional to the consumer, whereas the others apply across all consumers.
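Continuing the m/t framing, the effect of time manipulation is easy to sketch; the numbers here are illustrative assumptions:

```python
# Sketch: time manipulation raises utility by shrinking t while (ideally)
# preserving m. Meaning score, duration, and speed are illustrative.

def utility(meaning: float, seconds: float) -> float:
    """Information utility as meaning per unit of time (m/t)."""
    return meaning / seconds

podcast_meaning = 10.0
podcast_seconds = 3600.0          # a one-hour episode
speed = 1.5                       # playback multiplier

base = utility(podcast_meaning, podcast_seconds)
sped_up = utility(podcast_meaning, podcast_seconds / speed)

print(round(sped_up / base, 3))   # utility scales with playback speed
```

The caveat embedded in the assumption: the gain only holds if interpreted meaning survives the faster playback, which is why this lever is bounded by each individual consumer.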

Brain-Computer Interface (BCI). The development of brain-computer interfaces will, in theory, create a direct way to receive information from and transmit information to the brain. A deeper understanding of how we create meaning from a neurological perspective is embedded in this future, thus BCI should reduce holistic noise and enhance meaning. An example may be a message transmission mechanism that understands the contextual state of both the message creator and consumer and alters the message as appropriate in an effort to match states. While BCI is an exciting technology, it is just beginning a long clinical path before we can consider using it for meaningful augmentation.

It bears repeating that the message creator’s intent is paramount to utility. A useless message is still useless no matter how you dress it up. However, a more compressed version of a given message should always be more useful than a less compressed one given the same baseline intent, so we should strive to compress messages as much as possible.

The current opportunities in compressing information should be relevant for the next 5-10 years or more, particularly in the case of BCI. That said, the aforementioned concepts and technologies compress information relative to how we consume it today. As they become the standard for future information transmission, the bar must move correspondingly higher, and we will need to find new ways to compress information even more. Therefore, while the human desire for increasing information utility is eternally valid, how we compress more meaning into less time must always evolve.

Embracing Information Utility

It’s popular to lament our overexposure to information, ever-shortening attention spans, and the superficiality of what we create and consume, but technology has always been a Pandora’s box. Now that we’ve gone down the path of near-constant connectivity, it’s impossible to walk it back. While it can be beneficial to spend less time with information when possible, we have to embrace the path we’ve created; embrace the unending flow of information. Discovering ways to leverage new content formats and technologies to transmit useful information is our future. Demanding more meaning from the information we choose to spend time with is the next evolutionary step for humanity.

Improvements in relevance will continue to be driven by technology, but as we think about the opportunities in compression, it’s clear that utility will not be driven purely by hardcore technologists, but also by hardcore creatives. Injecting more creativity into how we convey meaning, something that is unique to the human experience, makes up some of the biggest opportunities in improving information utility. After all, the humans that consume information determine its utility, not robots.

Some people fear the coming Automation Age will devalue humanity, removing purpose and meaning from our lives. Luckily, we have this survival instinct — this innate need for more information, more knowledge. That curiosity means there will always be a purpose for us. As automation frees us from the constraints of labor, the Experiential Age will bring about a golden era of exploring what it means to be human, and information utility will guide the way.

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Investing in Paradromics

Neurotechnology has long been a passion for Loup Ventures because we think it represents a vast and open opportunity to improve human life via both restoration and augmentation. Brain-computer interfaces are one of the most compelling areas in the neurotech space and we’re excited to invest in Paradromics as they work to capitalize on that opportunity.

To date, many BCI solutions have been relatively low bandwidth, leveraging surface electrodes or other non-invasive methods to collect signals from the brain. Paradromics, along with Elon Musk’s Neuralink, is one of only a few companies attacking an emerging theme: high-bandwidth, invasive brain-computer interfaces.

We’re investing in Paradromics as a play on two of our thematic focus areas: AI and experience.

AI. Paradromics is a hardware and software company. The company creates a device that consists of an implantable microwire array that allows for the high-density collection of neuronal data. The brain has ~100 billion neurons. The current standard in implantable BCI devices, the Utah Array, has about 100 channels (connection points to the brain) and each channel connects with a handful of neurons. Paradromics’ solution currently has over 65,000 channels, and the company’s goal is to connect with over a million neurons.

The collection of such a large array of data presents a unique challenge: tens of thousands of connection points with the brain yield massive amounts of data to process (tens of GB/sec). To put that in perspective, average high-speed Internet connections are around 20 MB/sec, more than 1,000 times slower. To address this, Paradromics has developed a proprietary tool that lets them more efficiently process certain elements of the signal, enabling high-resolution analysis of a neuronal signal while compressing the data transferred to more manageable levels.
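For a rough sense of scale, a back-of-envelope estimate shows how quickly channel counts translate into bandwidth. The sample rate and bit depth below are our assumptions, not Paradromics specifications; under these conservative numbers the raw rate is roughly 4 GB/sec, and higher sample rates, bit depths, and the million-neuron goal push it into the tens of GB/sec cited above:

```python
# Back-of-envelope data-rate estimate for a high-channel-count BCI.
# Sample rate and bit depth are illustrative assumptions, not
# Paradromics specifications.

channels = 65_000          # channels, per the figure above
sample_rate_hz = 30_000    # assumed ~30 kHz per channel (typical for spike data)
bytes_per_sample = 2       # assumed 16-bit samples

raw_rate = channels * sample_rate_hz * bytes_per_sample   # bytes/sec
internet_rate = 20 * 10**6                                # ~20 MB/sec connection

print(f"{raw_rate / 1e9:.1f} GB/sec raw")                 # prints "3.9 GB/sec raw"
print(f"{raw_rate / internet_rate:.0f}x a 20 MB/sec link")
```

Even this conservative estimate makes the case for on-device signal processing: the raw stream is far too large to ship off the implant wholesale, which is exactly the problem the compression tool addresses.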

Experience. In our Manifesto, we outlined the future of computer interfaces moving from phones, to wearables, to implants. Paradromics is also a bet on the last of the three stages. To the extent that we can read (and eventually write to) the brain, we can achieve our long-term vision of a world where it’s possible to create life-like experiences directly via brain-computer interface. This augmentative future is far away, and initial use cases will center on therapeutic outcomes, but we believe the groundwork will be established now to achieve this goal.

Paradromics is attacking the true frontier of frontier technology. Their service has the potential to change human lives near-term for those afflicted with disorders of the brain, as well as long-term as an augmentative tool. We believe the therapeutic market is likely a multi-billion-dollar opportunity, while the augmentative opportunity may be significantly larger.
