Grading Ourselves on Ray Dalio’s Principles

Our conference room has a small library. In it are 19 books (so far) that are important to us in some way, be it a gift, a guide, a reminder, or a fundamental truth. We also have a small shoe collection. Anyway, one of those books is Ray Dalio’s Principles. As a way to self-reflect, and to encourage you to explore these principles, we thought it would be helpful to grade ourselves on Dalio’s principles using his eight-episode mini-series on YouTube.

Grading yourself objectively at anything is difficult, but we gave ourselves at least a passing grade in every category. Loup Ventures has a clear vision and an active culture built on values that we’ve embraced and developed over 10+ years of working together.

That said, we’re in year two of – as we like to say – building our firm for the next 40 years, so we have a lot of work left to do. Dalio’s principles provide a helpful framework against which we can measure our values, strengths, weaknesses, processes, and outcomes. And, as you’ll see, there are principles on which we need to reflect and make changes.

Think for yourself about what is true: C+

Dalio leads with the importance of establishing your own principles based on 1) what you want, 2) what you think is true, and 3) what you should do to achieve #1 in light of #2. At Loup, our compass is set – we know what we want (to be the leading frontier tech investment platform). We’re fairly clear on what we think is true and have shown the courage to act on it. The one issue we’ve had here is that with three persistent founders and the reality that this is our first fund, which will lay the groundwork for future funds, we can sometimes be slow to disagree and commit. We sometimes wrestle with decisions for too long by over-thinking and over-worrying instead of moving forward with confidence based on our principles. Recognizing this will allow us to improve over time.

Embrace reality and deal with it: B

“Truth is the essential foundation for producing good outcomes,” explains Dalio. We’ve made a habit of looking to nature as a guide to the fundamental truths of reality. Just recently, we made an investment in the artificial intelligence space based on this very habit. Embracing reality and our mistakes, especially as a team, requires radical honesty, one of our four core values. Moving forward, we can improve at reflecting on our mistakes and the problems we encounter in order to evolve.

Use the five-step process to get what you want out of life: A-

It takes a lot of practice to do it well, but our weekly partners’ meeting gives us the opportunity to engage in Dalio’s five-step process: 1. Know your goals and run after them; 2. Identify the problems that stand in the way of achieving those goals; 3. Diagnose problems to get at their root causes; 4. Design a plan to eliminate problems; 5. Execute those designs.

We use an operating system for our business that requires a long-term vision, an annual plan, quarterly goals, and weekly to-dos. And during our weekly partners’ meetings, we have the opportunity to identify, discuss, and solve the issues standing in our way.

The abyss: I (incomplete)

In our short history as a firm, we haven’t gone through the abyss, but we have faced failure. Frequently. We’ve failed to secure certain investors and form certain partnerships that we believed, and still believe, will be important to our future. The roller coaster of a startup (and investing in startups) is an insane ride because the risk/reward can feel so extreme. On the upswing, you think you’ll be a billionaire next year and on the downswing, you feel like you’ll be broke tomorrow. At Loup, we embrace learning from our mistakes. As we continue to build, we’ll need to be more systematic about reflecting on our failures. Some of our favorite thinkers across disciplines (investing, philosophy, sports) agree that pain is the best opportunity for growth, and we look forward to it.

Everything is a machine: B-

Dalio argues that everything works in cycles: “Everything is ‘just another one of those.’” The five-step process outlined above allows us to fine-tune the Loup machine so that we are better prepared when we encounter a given challenge again. However, as a startup, it’s incredibly difficult to take off our near-term, fear-based blinders to see our situation as “just another one of those.” According to PitchBook, 2017 saw more than 300 venture capital funds raised. We are, in fact, just another one of those.

We’re students of history – an important input for much of our research – but we need to do a better job reflecting on the patterns of history that affect our business, then implement processes to better anticipate them.

Your two biggest barriers: B+

Dalio argues that your two biggest barriers are 1. your ego, and 2. your blind spot. We’ve been wrong a lot, which has helped us develop and institutionalize humility. As a group, we make it a habit to under promise and over deliver in all we do, lead with bad news, and acknowledge when we’re wrong. We’re building a culture that encourages people to embrace the difficult things, which goes hand in hand with barrier #2, your blind spot. Identifying and encouraging our team’s superpowers is a strength, and we need to be even more aggressive in positioning our team to focus on their strengths and avoid their weaknesses.

Be radically open-minded: A-

“Replace the joy of being proven right with the joy of learning what’s true.” We’ve embraced this principle through a value we call contrarian curiosity. By combining a respect for different opinions with endless curiosity built out of our research roots, we’re comfortable with looking at life through the perspective of others. We’ve set out to build something that each of us could not build on his own, and we’re building a team that further complements us. We will always strive to build a team with diverse perspectives and strengths for as long as Loup Ventures exists, but we’ve experienced the power of, as Dalio describes it, a shared mission with extraordinary people.

Struggle well: B

“Success is not a matter of attaining one’s goals… [the] struggle toward personal evolution with others is the reward.” Disassociating the journey from the results is a difficult thing in such a results-driven business, but results can hide the reward that Dalio talks about: the journey. Our embrace of contrarian thinking lends itself well to enjoying the journey, and we regularly appreciate how lucky we are to be surrounded by great people on a shared mission, struggling together. We’ve even tried to operationalize decision making to remove emotion tied to the expectation of a result, particularly for investment decisions. Dalio’s challenge to struggle and evolve well is a journey that we are collectively honored to be on.

If you haven’t read Dalio’s Principles yet, we highly recommend it. Or at least read through the full list of principles, which he’s outlined here. Hopefully, this exercise has encouraged you to reflect on your own life and your own work through the lens of Dalio’s Principles. Embrace the open-minded struggle!

Disclaimer: We actively write about the themes in which we invest: virtual reality, augmented reality, artificial intelligence, and robotics. From time to time, we will write about companies that are in our portfolio.  Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Defining the Future of Human Information Consumption

Human evolution depends on an ever-increasing rate of information creation and consumption. From communication to entertainment to education, the more information we create and consume, the stronger our society as a whole. Communication enhances community. Entertainment encourages creativity. Education builds knowledge. All of these elements build on top of one another like an upside-down pyramid, each new layer built a little bigger on top of the prior. It’s no coincidence that the Information Age of the last several decades has marked both the greatest period of increased information creation and consumption and, arguably, the greatest period of human progress.

The explosion of information created in the Information Age came with the tacit understanding that more information was good. The volume of information available to us is unprecedented, so the firehose is beneficial even if some (or perhaps much) of the water misses its target. Advancements in the devices that convey information have brought the firehose closer and closer to us: from the non-portable PC, to the semi-portable laptop, to the nearly omnipresent smartphone, to the emerging omnipresent wearable.

Now we’re continuously drowning in information.

The average global consumer spends 82 hours per week consuming information. Assuming an average of seven hours of sleep per night, this means that 69% of our waking hours are engaged in consuming information. For many consumers in developed markets, that number is likely closer to 100%. People rarely disconnect from information sources. Even when we’re not in front of a screen, the nearest one is always in our pocket, or there’s music or a podcast playing in the background. As a result, we consume almost 90x more information in terms of bits today than we did in 1940 and 4x more than we did less than twenty years ago.

Source: Carat, Loup Ventures

Not only are we maxing out time available to spend with information, we’re creating so much information that it’s impossible to keep up. The ratio of the amount of information consumed per year by the average person to the amount created per year in total is the lowest it’s ever been by our analysis. There’s always more information to consume, and the trend toward autonomous systems is only going to amplify that.

Source: Loup Ventures

These two observations don’t paint a very positive picture of our relationship with information or our prospects for continued evolution, but in every challenge there’s an opportunity. We’re now preparing to exit the Information Age and enter two separate eras: The Automation Age and the Experiential Age. The Automation Age represents the natural continuation of the Information Age where artificially intelligent systems act on the vast amounts of information produced and quantity continues to dominate. The Experiential Age will define how the human relationship with information changes, creating new paradigms for communication, education, and entertainment. Out of evolutionary necessity, we will demand information with greater relevance, density, and usefulness than ever before. Meaning and efficiency, not quantity, will be the metrics on which we will measure information for human use over the next several decades.

Reintroducing Meaning + Information

Our current relationship with information is measured in quantity. Internet speeds are calculated in bits per second, as are hard drive transfer rates. Processor speeds are based on how many instructions they can perform per second, an instruction being a manipulation of bits. We even talk about how many messages we process a day, whether email, text, Snaps, etc., as a derivative of bits. We expect all of these quantitative measurements to improve over time, so we can consume more information faster.

One of the reasons we think about information in quantity is because of how information theory evolved. Claude Shannon developed information theory, built on work from Boole, Hartley, Wiener, and many others (upside down pyramid), to establish a method for accurately and efficiently transferring information in the form of a message from creator to receiver. His work on information theory yielded the bit and created the basis for modern computing. As Shannon developed his theory, he realized that the meaning of a message (the semantic problem) was irrelevant for establishing the efficient transfer of the characters that created the message, and therefore focused on the quantifiable aspects of the message instead.
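Shannon’s quantity-only view is easy to illustrate in a few lines: the entropy of a message depends solely on its character statistics, never on what the message means to a human. A quick sketch:

```python
from collections import Counter
from math import log2

def entropy_bits_per_char(message: str) -> float:
    """Shannon entropy: average information per character, in bits."""
    counts = Counter(message)
    n = len(message)
    # Sum of p * log2(1/p) over the empirical character distribution.
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# Identical character statistics yield identical "information" in Shannon's
# sense, regardless of whether the text is meaningful to anyone.
print(entropy_bits_per_char("aaaaaaaa"))  # 0.0 (no surprise, no information)
print(entropy_bits_per_char("abcdefgh"))  # 3.0 (8 equally likely symbols)
```

The formula is indifferent to the semantic problem by design, which is exactly the gap the rest of this piece tries to address.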

It’s time to reintroduce meaning to information.

All meaning is established as a consumer’s interpretation of the creator’s intent (i.e. the creator’s version of meaning). The meaning of the same picture or movie or article may differ from person to person, even if the creator of the message might intend a single meaning. Since all interpreted meaning is some derivative of intended meaning, to contemplate “meaning”, you must contemplate both sides.

To demonstrate the flow of meaning, it’s helpful to overlay the transfer of meaning on Shannon’s diagram from his groundbreaking paper on information theory:

The flow of meaning is analogous to the flow of information as diagrammed by Shannon in “A Mathematical Theory of Communication”

A creator (human or machine) intends meaning in some message, which is conveyed to the consumer (human for our purposes) for interpretation via information transmission channels. A message is a container of information, which can be digital (e.g. a file) or physical (e.g. a book).

All intentionally transmitted messages must have some intended meaning, or the creator wouldn’t be able to conceive the message. Even a blank message is a reflection of the creator’s current mindset. Likewise, a message of random characters might imply boredom, or that a machine was programmed to do it, or it was a byproduct of some machine-learned behavior, etc. Messages transmitted by accident are free of this rule.

Since every message must be created with some initial meaning, all messages must therefore be interpreted because there is some underlying meaning to contemplate. A blank message might be interpreted as a mistake or a sign of trouble, a scrambled message might be a pocket dial, etc. This interpretation establishes the meaning from the message to the consumer.

In the process of transferring meaning, holistic noise impacts the interpretation of the intended meaning. It represents the natural disturbance between what a message creator intends and what the message consumer interprets. Holistic noise is created by differing contexts, that is, differing experiential, emotional, and physiological states between the creator and consumer.

Exploring meaning is a philosophical abyss — its precise characterization is engaging to discuss, but extremely hard to define. Nonetheless, the definition we’ve arrived at thus far is sufficient for establishing information utility, and further debate would distract us from the focus of defining utility.

Meaning/Time — The New Measure of Information

Thanks to Shannon, the quantity of information we can transfer is no longer a gating factor to the human consumption of information; the quantity we can process is. Therefore, the amount of meaning we derive from a message is now paramount, as is the time it takes to consume the message. To this end, we would propose that the value of a message in the Experiential Age should not be measured by bits in any way, but by meaning divided by time. We call this measure information utility:

Utility (U) of information is equal to the value of the meaning interpreted by the consumer of the information (m) divided by the time it takes to consume the information (t): U = m / t

Of course, meaning is abstract. How could you ever quantify it?

The closest we may be able to get is to borrow from marginal utility in economics. Just as we all have some innate scale with which we rank the value of consumer goods and that scale theoretically determines where we spend our money, we have a similar innate scale relative to the meaning of messages that should determine where we spend our time. We therefore measure meaning by time to help determine where best to spend our scarce asset of time just as we measure marginal utility by money to determine where best to spend our money.

Any individual’s scale for valuing meaning will likely be different, but that doesn’t matter. It only matters that we all have a scale that can consistently be used to compare the “meaningfulness” of the meaning we consume. In other words, we interpret the meaning of a message and then rank the perceived satisfaction or benefit we got from consuming it on an innate scale based on the aggregate of our prior experiences and expectations therefrom.

In reality, the scale is likely to be fuzzy (ordinal) rather than precise (cardinal). Most of us in any given moment wouldn’t be able to say message A is 11.7625% more meaningful than message B, but we would be able to say message A has more meaning than message B. That said, an information consumer can assign theoretical cardinal values to different messages based on the fuzzy scale, just as in marginal utility. Those cardinal values can be used to create a mathematical determination of information utility for the consumer.
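To make that concrete, here’s a minimal sketch of assigning theoretical cardinal values from a consumer’s fuzzy scale and ranking messages by U = m / t. All of the meaning values and consumption times below are invented for illustration; only the consistency of the scale matters.

```python
from dataclasses import dataclass

@dataclass
class Message:
    name: str
    meaning: float  # hypothetical cardinal value from the consumer's innate scale
    minutes: float  # time required to consume the message

    @property
    def utility(self) -> float:
        """Information utility: U = m / t."""
        return self.meaning / self.minutes

# Invented examples: absolute numbers are arbitrary, the ordering is what counts.
messages = [
    Message("long-form article", meaning=40.0, minutes=20.0),  # U = 2.0
    Message("well-edited video", meaning=30.0, minutes=10.0),  # U = 3.0
    Message("clickbait post", meaning=2.0, minutes=2.0),       # U = 1.0
]

for msg in sorted(messages, key=lambda m: m.utility, reverse=True):
    print(f"{msg.name}: U = {msg.utility:.1f}")
```

Note that raw meaning alone would rank the long-form article first; dividing by time is what surfaces the denser message, which is the entire point of the measure.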

Relevance and Compression: Improving Information Utility

The next logical question is: How do we improve information utility?

The underlying intent of a message will always be the greatest factor in utility because all interpreted meaning (m) is a derivative of intent. A message of minimally useful or irrelevant intent will rarely be interpreted to have great meaning and will thus usually have minimal utility. The answer may seem to be to improve the intent of the message; however, if you change the intent of a message, you change the message entirely, which is like telling the message creator to just say something else, ignoring their desire to convey their intended meaning.

Improving information utility should not rely on changing the intent of a message but should instead focus on matching a consumer’s need or want with a creator’s intent and then optimizing the conveyance of that intent. This leaves us with two core opportunities to improve utility: relevance and compression. Relevance ensures that a consumer is engaging with useful messages, while compression enhances the presentation of a specific message to increase meaningfulness. To extend our firehose analogy: relevance improves the hydrant that the firehose is attached to, and compression distills the water itself.

One may argue that relevance is just another measure of meaning since, by definition, for a message to have meaning, it must be relevant to the consumer. However, to argue that relevance is meaning assumes that it can only be applied post message consumption. The problem is that once a message has been consumed, its determined relevance (and by transference, meaning m) is immutable for that given point in time; utility cannot be changed after the fact. Therefore, leveraging relevance to improve utility must happen prior to message consumption.

We define relevance as the reduction of the total pool of potential messages available for consumption (the consideration set) prior to consumption.

Relevance reduces the information consideration set, compression enhances the information itself

Today, relevance is optimized largely by data companies like Google or Facebook. These platforms reduce some large consideration set of messages that may be relevant to the user into something more focused to help reduce the user’s time seeking meaning. The more these systems know about our preferences, the more relevant information they can funnel to us. Of course, this creates a double-edged sword related to privacy, but all technology comes with tradeoffs. While it seems the data wars have been won by the information giants, the privacy wars have just begun. The hidden opportunity here may be for some platform to put privacy first and walk the fine line between keeping consumer data safe and still presenting the most relevant information.

Less time searching for relevant information means more meaning per unit of time by avoiding less useful messages in the consideration set. The time saved can be spent consuming other relevant information, creating greater aggregate utility for the time spent. Note that our version of relevance does not influence m/t for any specific message, only the set of messages considered; only compression can do that.
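A toy model makes this mechanic concrete. The feed, the meaning values, and the fixed one-minute “triage” cost below are all our own illustrative assumptions; the point is that pruning the consideration set upstream frees budgeted time for consumption without changing any individual message’s m/t.

```python
TRIAGE_MIN = 1.0  # assumed fixed time cost to evaluate any message's relevance

def aggregate_meaning(candidates, time_budget, relevant=lambda m: True):
    """Triage each candidate in order (costing time), consuming those deemed relevant."""
    meaning, time_left = 0.0, time_budget
    for m in candidates:
        if time_left < TRIAGE_MIN:
            break
        time_left -= TRIAGE_MIN  # time spent deciding whether m is worth consuming
        if relevant(m) and m["minutes"] <= time_left:
            meaning += m["meaning"]
            time_left -= m["minutes"]
    return meaning

feed = [
    {"topic": "ai", "meaning": 8.0, "minutes": 4.0},
    {"topic": "gossip", "meaning": 0.5, "minutes": 3.0},
    {"topic": "ai", "meaning": 6.0, "minutes": 3.0},
    {"topic": "gossip", "meaning": 0.3, "minutes": 2.0},
]
wants_ai = lambda m: m["topic"] == "ai"

# Upstream relevance: the platform prunes the set before the consumer sees it.
curated = [m for m in feed if wants_ai(m)]
print(aggregate_meaning(curated, time_budget=9.0))                  # 14.0
print(aggregate_meaning(feed, time_budget=9.0, relevant=wants_ai))  # 8.0
```

With the same nine-minute budget, the curated consumer extracts 14 units of meaning while the unfiltered consumer extracts 8, because the latter burns budget triaging irrelevant messages.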

We define compression as a direct improvement to a given message that either enhances the meaning intended by the creator and/or decreases the time spent consuming it.

While the interpretation of the consumer is what ultimately establishes meaning for calculating utility, the compression of meaning depends on enhancing the intention of the creator and its consequent effect on the consumer’s interpretation. Compression doesn’t change the intent of a message, but makes it clearer and more useful, usually reducing holistic noise in the process. An example might be a set of written directions to a store vs a map. The underlying intent is the same, but the latter message should offer more utility provided the map is legible and accurate.

Historically, compression has happened via transitions to different information formats. With the advent of the written language, we went from the imprecise spoken word to the more precise written word. Writing a book or a letter is a form of compression relative to the spoken word because more thought and contemplation generally goes into writing vs discourse. Greater contemplation of the message should result in greater meaning of the message. More recently, the shift from written word to video is another example of relative compression. Video conveys relatively more meaning with less holistic noise than written words because the images help an information consumer see body language, emotion, and other factors that can be difficult to ascertain through words alone. This doesn’t mean that video is the perfect medium of transmission for every message — the written word can be effective for certain purposes — but there is relative compression when comparing writing as a message format vs video as a message format.

Compressing relatively more meaning into less time will be a prominent factor in the Experiential Age. If we can get more meaning in less time, we will spend more time with the technology that delivers that benefit. To that end, there are several emerging opportunities around compression:

Stacking Information Purposes. Content formats like GIFs and emoji combine communication and entertainment by replacing text. When a message creator picks a specific GIF or emoji to express themselves, it carries not only the underlying intended message, but an added element of emotion and entertainment. These added elements not only increase the meaning transferred, but also reduce the holistic noise between the message creator and consumer. Most obviously, when someone sends a GIF of a shaking fist or a coffin emoji, they probably aren’t seriously mad.

Platform-Enforced Time Restriction. While some content platforms are moving away from time restriction, like Instagram and Twitter, placing an artificial cap on the length of a message will always compress meaning. When an information creator is forced to deliver his or her message in limited time like a 140-character tweet or a six-second Vine (maybe now v2), it demands creativity and precision that results in greater meaning for the consumer. This incurs an additional cost of time to the creator to make their message succinct but still meaningful; however, the burden of quality in a message is always on the creator, and perhaps many messages aren’t worth the time to send in the first place.

Consumer-Enforced Time Restriction. In some cases, information consumers put the restrictions on time they’re willing to spend consuming a creator’s message. This is most apparent in the shift to digital assistants. When an assistant answers a consumer’s query, the consumer is not willing to listen to a painfully long list of ten possible answers. They expect one answer; the right answer. In this sense, the consumer shifts the burden of time he or she previously spent deciding the best answer from a set of options back to the creator that established the options in the first place (i.e. Google, Amazon, Apple, etc.). Relevance has always been important in the consumer Internet, but increasingly so here. Companies with extremely large customer datasets are the only ones able to play in this game because data allows the digital assistant companies to infer the context of the consumer and tailor messages appropriately.

Information Layering. The emergence of augmented reality (AR) has enabled the concept of information layering. Now a picture or video can be enhanced by stickers or lenses that add additional information related to the original purpose of the content, most commonly entertainment today. The layered information acts somewhat similarly to stacking information purposes in reducing holistic noise by adding context. As AR continues to evolve, the layered information can become more educational or instructional. For example, AR can compress knowledge about how to do something from a book or 2D video that suffered from the friction of transferring that meaning to the real world, to overlaying that meaning on the real world itself; another example of reducing the holistic noise between creator and consumer and improving meaning in time.

Scarcity. By limiting the number of times someone can send a message or components of a message, content platforms can create artificial scarcity similar to a time restriction. In this case, the restriction applies to the meaning side of the equation. If a message creator chooses to leverage a limited resource in a message (other than time), it should confer greater meaning to the consumer because it comes at an opportunity cost to the creator. An example might be some of the frameworks emerging around the ownership of unique digital goods, which could extend into transference of meaning through those goods. When a creator includes a one-of-a-kind component to a message, the intended meaning increases, which should also increase interpreted meaning.

Time Manipulation. Since the majority of content we consume is digital, much of it can be sped up or slowed down according to the consumer’s preference (i.e. videos, audiobooks, podcasts, etc.). Time manipulation requires no additional effort on the side of the creator; it only requires the consumption tool to enable it. While time manipulation techniques are effective on an individual level, they’re less interesting than the other opportunities because they’re optional to the consumer where the others are common across all consumers.

Brain-Computer Interface (BCI). The development of brain-computer interfaces will, in theory, create a direct way to receive information from and transmit information to the brain. A deeper understanding of how we create meaning from a neurological perspective is embedded in this future, thus BCI should reduce holistic noise and enhance meaning. An example may be that a message transmission mechanism could understand the contextual state of both the message creator and consumer and alter the message as appropriate in an effort to match states. While BCI is exciting technology, it has a long clinical path that is just beginning before we can consider using it for meaningful augmentation.

It bears repeating that the message creator’s intent is paramount to utility. A useless message is still useless no matter how you dress it up. However, a more compressed version of a given message should always be more useful than a less compressed one given the same baseline intent, so we should strive to compress messages as much as possible.

The current opportunities in compressing information should be relevant for the next 5-10 years or more, particularly in the case of BCI. That said, the aforementioned concepts and technologies compress information relative to how we consume it today. As they become the standard for future information transmission, the bar must move correspondingly higher, and we will need to find new ways to compress information even more. Therefore, while the human desire for increasing information utility is eternally valid, how we compress more meaning into less time must always evolve.

Embracing Information Utility

It’s popular to lament our overexposure to information, ever-shortening attention spans, and the superficiality of what we create and consume, but technology has always been a Pandora’s box. Now that we’ve gone down the path of near-constant connectivity, it’s impossible to walk it back. While it can be beneficial to spend less time with information when possible, we have to embrace the path we’ve created; embrace the unending flow of information. Discovering ways to leverage new content formats and technologies to transmit useful information is our future. Demanding more meaning from the information we choose to spend time with is the next evolutionary step for humanity.

Improvements in relevance will continue to be driven by technology, but as we think about the opportunities in compression, it’s clear that utility will not be driven purely by hardcore technologists, but also by hardcore creatives. Injecting more creativity into how we convey meaning, something that is unique to the human experience, makes up some of the biggest opportunities in improving information utility. After all, the humans that consume information determine its utility, not robots.

Some people fear the coming Automation Age will devalue humanity, removing purpose and meaning from our lives. Luckily, we have this survival instinct — this innate need for more information, more knowledge. That curiosity means there will always be a purpose for us. As automation frees us from the constraints of labor, the Experiential Age will bring about a golden era of exploring what it means to be human, and information utility will guide the way.

Aristotle, Automation, and Empathy

You might wonder what Aristotle, automation, and empathy have to do with one another. The answer is simple: the future of happiness.

Aristotle believed that man could achieve happiness by working on and developing the things unique to his nature, which he specifically believed to be thought and reason. Humans, at least in the time of Aristotle, were the only beings capable of complex thought. That set us above all other animals. With the evolution of AI, that paradigm is changing. Computers can think and reason now, albeit for the most part rudimentarily. It seems only a matter of time before their capacity to think and reason surpasses ours.

If Aristotle is correct in his base assumption that humans derive happiness by developing the things that make us uniquely human, we need to reconsider our unique skillset. There are three things we have written about frequently that seem to be reserved for humankind; at least for now. Those are creativity, community, and empathy.

Of these three, empathy is the king of uniquely human traits. It’s the ability to “understand and share the feelings of another.” Given that definition, empathy is arguably the enabler of creativity and community. Creativity represents our ability to transfer some experience or idea we have about the world into something that benefits our fellow man or woman. This requires a compelling understanding of the human condition. Community seems even clearer. We build relationships with people who have things in common with us, people we presumably understand on a fairly specific level.

Though thought and reason may remain uniquely human a little while longer, people rarely seem to use these capacities in pursuit of happiness anyway. Arguments have been made that the Internet has reduced our need for, and even our capacity for, thought. Online content survives by appealing to the baser instinct of emotion, because emotional content is easier to share, easier to get likes on, and thus easier to monetize than something thoughtful and rational. Emotion blinds empathy and, consequently, thought and rationality: if you cannot empathize with another, you cannot think rationally. Empathy is thus also the king of true thought, and perhaps what we should have been optimizing for all along.

Aristotle was right: happiness does come from developing our uniquely human traits, just not those of thought and rationality. We should instead focus on empathy, because it is the core of understanding each other and the core of what it means to be human. That’s why empathy will be the most important, and most monetizable, quality in the automated future.


All Technology is Good and Evil

Ready Player One showcased both the promise and the pitfalls of our technological future: a virtual world that fulfills your wildest dreams, on demand, layered on top of a real world left to rot because the virtual one is so much better. All great sci-fi achieves this balance, a healthy observation about what can go right and how right can evolve into wrong.

The core insight of science fiction is that all technologies live on a spectrum of good and evil, useful and harmful, and that our perception of their place on that spectrum shifts over time. It’s a truth we’ve long known innately but are now being forcefully reminded of in the real world. In just the past couple of months, we’ve dealt with major Facebook data privacy issues, multiple self-driving car accidents, and growing discussion of smartphone addiction. Technologies that were largely accepted, if not embraced, have turned on us. Or perhaps it’s more accurate to say we turned on them.
So, what is it that turns technology from good to evil in our eyes? It seems to happen for one of two reasons.

First, in the early phases of adoption, the majority tends to view any new technology with a healthy skepticism laced with fear; it’s why only innovators adopt new technologies early. When a new technology has early failings, the skeptical, fearful majority finds reason to double down. Their skepticism is validated and grows, allowing them to make a case for why the technology is more evil than good, or why it should never exist at all. This creates an even higher hurdle for a new technology to reach the early-adoption phase. Autonomous vehicles seem to be living through a mild version of this scenario now, and the broader discussion about the perils of AI fits here as well.

Second, in the later stages of adoption, when a majority of people use a given technology, consumers tend to view it with a dose of fear laced with resignation that can easily flip to anger. When people believe too much power is consolidated in any endeavor, technology included, the possibility of revolt lurks. That possibility becomes reality when power is perceived to be abused and anger takes over. Evil is perceived to outweigh good, and people question whether they want to keep using the technology. Facebook is living through an anger-driven revolt now. You could argue the firearm debate fits this category as well.

If a technology avoids the anger phase for long enough, it can settle into a stable acceptance of both the good and the bad. The car is one example: it enables the large-scale movement of humanity and suburbanization, even though more than a million people die in car accidents every year and gasoline-powered cars pollute the environment. The Internet fits here too, even if the smartphone, as an extension of the Internet, does not yet. Apple is doing all it can to demonstrate respect for the power it wields in bringing highly addictive Internet services to everyone, all the time.

Just as Viktor Frankl observed that “no group consists entirely of decent or indecent people,” no technology is purely decent or indecent; none is purely good or bad, which are human judgments anyway. The lesson of Ready Player One, and of our situation today, is that we should always be willing to accept good along with evil when it comes to technology. We should first think about what a technology will mean for humans, not how exciting it is, how much money it could make, or any other measure of what it could do. Our guiding light should be to ask, “How sure can we be that this technology will improve human life?” If we can’t get comfortable with the answer, we should be prepared to revolt. If we can, we should be prepared to accept.


The Empathy Economy

Op-ed published February 7, 2018 on Business Insider

Throughout history, different eras have begotten different heroes of industry. In the ’80s, the stock broker was the rock star of the business world. In the late ’90s and 2000s, it was the computer programmer. For the last decade or so, it has been the data scientist. As the work of data scientists and engineers ushers in the Automation Age, the next industrial rock star will be the customer service specialist.

Before you scoff at the idea of what may be considered a lower-level job today, ask yourself what happened to the stock broker. When’s the last time you talked to one, or even heard of one? Jobs ebb, flow, and disappear. The importance of a function today is no guarantee of the importance of that same function tomorrow.

Humans have three core capabilities with which robots cannot compete: creativity, community, and empathy. As we enter the Automation Age, when the fear of robots replacing human work is likely to come true, those three skills will enable the future of human productivity. The last of the three, empathy, may well be the most important.

Empathy is what most makes us human – the capacity for mutual understanding. As the Automation Age eliminates rote and some not-so-rote tasks, it will create an opportunity for humans to capitalize on empathy. In industry, empathy manifests as unique and memorable customer service, no matter the business. Welcome to the Empathy Economy.

The Empathy Economy is an intentional spin on the Sharing Economy. Just as the Sharing Economy was a byproduct of a world made hyper-connected by the Internet and smartphones, the Empathy Economy will arise as a result of job loss from automation. Uber, Airbnb, WeWork, and countless other businesses have changed the way humans think about asset ownership and even asset leasing. If users own assets, they want to get more out of them. If users need assets, they want instant, on-demand access without the burden of ownership. The Sharing Economy, like all functional economies, efficiently matches two complementary desires. The Empathy Economy will similarly match people and businesses who want empathic services with those willing to offer them.

We see three core opportunities within the Empathy Economy:

  1. Services that augment human empathy: For example, a lightweight CRM tool that enables employees to instantly recognize customers when they walk in the door, remember details about their lives, and know their preferences for service at the business.
  2. Services that build empathy: For example, a simulated environment that puts trainees through various situations to help them understand why another person feels a certain way and how to best serve them.
  3. Marketplaces that match buyers and sellers of empathy: For example, a platform that makes freelance customer service experts available for various tasks that might require a human touch to differentiate and enhance a particular service.
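At its core, the third category is a matching problem: pair a request for empathic service with the best available specialist. The sketch below is purely illustrative and describes no actual product; the `Specialist` class, the skill tags, and the `empathy_rating` field are all hypothetical stand-ins for whatever signals a real marketplace would use.

```python
from dataclasses import dataclass

@dataclass
class Specialist:
    """A hypothetical freelance customer service expert on the marketplace."""
    name: str
    skills: set            # domains of product expertise, e.g. {"tax", "retail"}
    empathy_rating: float  # hypothetical customer-feedback score, 0.0 to 1.0

def match(skill: str, specialists: list) -> Specialist:
    """Return the highest-rated available specialist for a given skill,
    or None if no one on the marketplace covers that domain."""
    candidates = [s for s in specialists if skill in s.skills]
    return max(candidates, key=lambda s: s.empathy_rating, default=None)

# A toy pool of sellers of empathy, matched against a buyer's request.
pool = [
    Specialist("Ana", {"tax"}, 0.9),
    Specialist("Ben", {"retail", "tax"}, 0.7),
]
best = match("tax", pool)  # Ana, the stronger tax match
```

A real platform would add availability, pricing, and richer ranking signals, but the shape of the business is exactly this: an efficient match between two complementary desires, as with the Sharing Economy.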

Today’s businesses must adopt automation technologies and embrace the Empathy Economy simultaneously by leveraging empathic customer service specialists as the face of their automated tools. In other words, people will act as a truly human skin on the work being produced by robots.

In the future, H&R Block will likely leverage AI to automate every customer’s taxes, but it will probably also need a human, who may have only cursory knowledge of accounting, to present the sensitive reality that a customer owes the government a few thousand dollars in taxes, or the joy that a few thousand dollars is coming back as a refund. Either way, the human presentation creates a differentiated customer experience that can be distinctly H&R Block. Competing on automation alone, which every other tax prep service will also have in only slightly varying forms, would force a race to the bottom on price. In this example, H&R Block could benefit from services that augment and build empathy as the core skill of its customer service specialists.

Another outcome of the Empathy Economy could be Target leveraging a marketplace of freelance workers with specific product expertise and strong empathic qualities to deliver orders to local customers with personalized service. As in the tax example, this moves the discussion away from price and toward experience, which can command a premium.

You may be wondering why empathy is the greatest opportunity among the triumvirate of uniquely human traits. Creativity and community already exist in structured forms in our societies. Creativity has always been a democracy, and the Internet made the distribution of creativity available to all; there are numerous ways, both online and offline, to share creativity and get paid for it, YouTube and Patreon being two examples. These platforms will only become more important in the Automation Age. As for community, traditional institutions provide it now: governments, churches, schools, local businesses, and so on. Technology will help these institutions evolve alongside automation; however, trusting relationships between people will remain the heart of community because, by definition, they have to.

Empathy, by contrast, doesn’t yet have a defined structure for application in our world. We know it’s important, and the best businesses find ways to build empathy into their culture, but it remains a nebulous, unmeasured thing. The Empathy Economy will change that.

It’s cliché to say that empathy is in short supply today because every generation probably has the same sentiment. The good news is that automation will force humans to be more human, and the Empathy Economy will create opportunities for humans to monetize a uniquely human capability. True empathy isn’t easy, but it’s the most powerful expression of humanity. In a world full of robots, empathy can only become more valuable.
