Eight Fun Facts About Computer Vision

Our experience of the world is intensely visual. Researchers suggest over half of our brain power is devoted to processing what we see. We talk a lot about how artificial intelligence will transform the world around us, automating physical and knowledge work tasks. For such a system to exist, it’s clear that we must teach it to see. This is called computer vision, and it is one of the most basic and crucial elements of artificial intelligence. At a high level, endowing machines with the power of sight seems simple: just slap on a webcam and press record. However, vision is our most complex cognitive ability, and machines must not only see but also understand what they are seeing. They must be able to derive insights from the entirely new layer of data that lies all around them and act on that information.

Despite being an important driver of innovation today, computer vision is little understood by those outside of the tech world. Here are a handful of facts that help put some context around what computer vision is and how far we’ve come in developing it.

1.)  Computer scientists first started thinking about vision about 50 years ago. In 1966, MIT professor Seymour Papert gave a group of students an assignment to attach a camera to a computer and describe what it saw, dividing images into “likely objects, likely background areas, and chaos.” Clearly, this was more than a summer project, as we are still working on it half a century later, but it laid the groundwork for what would become one of the fastest growing and most exciting areas of computer science.

2.)  While computer vision (CV) has not reached parity with human ability, its uses are already widespread, and some may be surprising. Barcode scanning, the yellow first-down line in football broadcasts, camera stabilization, tagging friends on Facebook, Snapchat filters, and Google Street View are all common applications of CV.

3.)  In some narrow use cases, computer vision is more effective than human vision. Google’s CV team developed a machine that can diagnose diabetic retinopathy better than a human ophthalmologist. Diabetic retinopathy is a complication that can cause blindness in diabetic patients, but it is treatable if caught early. With a model trained on hundreds of thousands of images, Google uses CV to screen retinal photos in hopes of identifying the disease earlier.

4.)  One of the first major industries being transformed by computer vision is an old one you might not expect: farming. Prospera, a startup based in Tel Aviv, uses camera tech to monitor crops and detect diseases like blight. John Deere just paid $305M for a computer-vision company called Blue River, whose technology identifies unwanted plants and douses them in a focused spray of herbicide, eliminating the need to coat entire fields in harmful chemicals. Beyond these examples, there are countless aerial and ground-based drones that monitor crops and soil, as well as robots that use vision to pick produce.

5.)  Fei-Fei Li, head of Stanford’s Vision Lab and one of the world’s leading CV researchers, compares the state of computer vision today to that of a child. Although computers can “see” better than humans in some narrow use cases, even small children are experts at one thing – making sense of the world around them. No one tells a child how to see. They learn through real-world examples. If you think of a child’s eyes as cameras, they take a picture roughly every 200 milliseconds (the average duration of an eye movement). So by age 3, a child will have seen hundreds of millions of pictures, which is an extensive training set for a model. Seeing is relatively simple, but understanding context and explaining it is extremely complex. That’s why over 50% of the cortex, the surface of the brain, is devoted to processing visual information.
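
The arithmetic behind that claim is easy to check. Here is a back-of-the-envelope sketch in Python; the waking-hours assumption is ours, not Li’s:

```python
# Back-of-the-envelope check on the "hundreds of millions of pictures" claim.
# Assumptions are ours for illustration: ~12 waking hours per day and one
# "frame" per eye movement, i.e. one every 200 ms (5 frames per second).

frames_per_second = 1 / 0.200           # one eye movement every 200 ms
waking_seconds_per_day = 12 * 60 * 60   # ~12 waking hours per day
days_by_age_three = 3 * 365

total_frames = frames_per_second * waking_seconds_per_day * days_by_age_three
print(f"{total_frames:,.0f}")           # ~236,520,000 -> hundreds of millions
```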

6.)  This thinking is what led Fei-Fei Li to create ImageNet in 2007, a database of tens of millions of labeled images for use in image recognition software. Since 2010, teams have put their algorithms to the test on ImageNet’s vast trove of data in the annual ImageNet Large Scale Visual Recognition Challenge, a competition that pushes researchers and computer scientists to raise the bar for computer vision. Don’t worry, the database includes 62,000 images of cats.

7.)  Autonomous driving is probably the biggest opportunity in computer vision today. Creating a self-driving car is almost entirely a computer vision challenge, and a worthy one — about 1.25 million people die in road crashes every year. Aside from figuring out the technology, there are also questions of ethics like the classic trolley problem: should a self-driving vehicle alter its path into a situation that would kill or injure its passengers in order to save a greater number of people in its current path? Lawyers and politicians might have to sort that one out.

8.)  There’s an accelerator program specifically focused on computer vision, and we’re excited to be participating as mentors. Betaworks is launching Visioncamp, an 11-week program starting in Q1 2018 dedicated to ‘camera-first’ applications and services. Betaworks wants to “explore everything that becomes possible when the camera knows what it’s seeing.”

We’re just scratching the surface of what computer vision can accomplish in the future. Self-driving cars, automated manufacturing, augmented and virtual reality, healthcare, surveillance, image recognition, helpful robots, and countless other spaces will all heavily employ CV. The future will be seen.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

The Rise of B2C CRM & Personalization: Retailers Combat Convenience

Written by guest author Carlos Castelan, Chief Strategy Officer at Conlego. 

In The Future of Retail, Loup Ventures laid out a vision for the future of retail amidst the continuing consumer shift from offline shopping to online. To combat the shift toward pure convenience and provide an enhanced in-store customer experience, retailers have started implementing business-to-consumer (B2C) customer-relationship management (CRM) programs to more effectively tailor product offers and services to their customers and drive traffic to their brick-and-mortar locations.

Traditionally, CRM systems and programs were utilized by large companies to manage sales cycles into other businesses (B2B sales). However, retailers and consumer-focused companies have started to adopt the technology to better understand purchasing habits and interactions and to improve customer engagement. At some retailers, this personalization and data capture take place through loyalty programs or branded credit cards, but others are expanding these programs to better serve customers daily through a complete purchase profile.

A great example of CRM in the form of a loyalty program is Nordstrom’s, which expanded its loyalty program in 2016 to include all customers (i.e., not just credit card holders). Loyalty members earn points towards vouchers so long as they provide their phone number – which acts as their ID number. With a profile and Guest ID/phone number, associates can easily view a guest’s purchase history so that, for example, a customer can make a return without a receipt or an associate can identify the customer’s size in a brand (if they have purchased that brand before). It’s not hard to imagine a future state where there’s a rich database the company can pull from to deploy evolving artificial intelligence (AI) and, at a meta level, better predict buying trends and, on a personalized level, understand when its best customers walk into its stores and how best to cater to them.
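
To make the mechanics concrete, here is a minimal sketch of a purchase profile keyed to a phone number, the kind of lookup that enables receipt-free returns and size suggestions. The structure and names are our own illustration, not Nordstrom’s actual system:

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Minimal illustration of a B2C CRM profile keyed by phone number.
# All names and fields are hypothetical, not any retailer's real schema.

@dataclass
class Purchase:
    sku: str
    brand: str
    size: str
    price: float

@dataclass
class GuestProfile:
    phone: str                                  # doubles as the loyalty ID
    points: int = 0
    purchases: list[Purchase] = field(default_factory=list)

    def record_purchase(self, p: Purchase, points_per_dollar: int = 1) -> None:
        self.purchases.append(p)
        self.points += int(p.price * points_per_dollar)

    def last_size_for_brand(self, brand: str) -> str | None:
        """Lets an associate suggest a size the guest has bought before."""
        for p in reversed(self.purchases):
            if p.brand == brand:
                return p.size
        return None

# Phone number -> profile; a real system would sit on top of a database.
profiles: dict[str, GuestProfile] = {}

guest = profiles.setdefault("555-0123", GuestProfile(phone="555-0123"))
guest.record_purchase(Purchase(sku="A1", brand="Zella", size="M", price=59.0))
print(guest.points, guest.last_size_for_brand("Zella"))  # -> 59 M
```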

These CRM programs and personalization efforts are rapidly expanding, particularly among higher-end retailers that focus on high-touch customer service. A recent example of a company implementing a CRM system is lululemon athletica. As laid out by one of its executives, Gregory Themelis, lululemon seeks to better understand consumers’ engagement with its brand across three levels: transactions, sweat, and engagement. In this sense, lululemon is seeking a more holistic understanding of the customer (vs. just understanding sales/transactions). Through data and being “informed” by it (vs. being driven by data), lululemon will be able to better engage customers by tailoring the right level of personalization and creating seamless marketing across all channels. lululemon is taking a much more brand-oriented approach, driving customer engagement through a personalized one-on-one experience to build a community. In the CRM and personalization model, it’s easy to understand how a retailer such as lululemon could add more value for its customers in the future by sharing personalized suggestions (workouts, restaurants, etc.) to drive brand affinity and subsequently drive traffic and sales.

Whether through the implementation of loyalty programs or pure CRM, personalization is a concept that retailers and consumer brands are adopting to drive traffic to their locations and enhance the customer experience. In a world that has come to value convenience, personalization and high-touch service are a way for these companies to continue to differentiate themselves and, in the future, use AI to predict customer behavior and serve customers even more effectively.


Humans Are a Bigger Existential Risk Than AI

Elon Musk continues to warn us of the potential dangers of AI, from debating the topic with Mark Zuckerberg to saying it’s more dangerous than North Korea. He’s called for regulating AI, just as we regulate other industries that can be dangerous to humans. However, Musk and the other AI debaters underestimate the biggest threat to humanity in the AI era: humans.

For the purposes of the current debate, those debating artificial intelligence propose three potential outcomes:

  1. AI is the greatest invention in human history and could lead to prosperity for all.
  2. A malevolent AI could destroy humanity.
  3. An “unwitting” AI could destroy humanity.

There are few arguments in between worth considering. If the first possibility were not the ultimate benefit, then the development of AI wouldn’t be worth exploring given the ultimate risks (2 and 3).

There’s certainly a non-zero chance that a malevolent AI destroys humanity if one were to develop; however, malevolence requires intent, which would require at least human level intelligence (artificial general intelligence, or AGI), and that is probably several decades away.

There’s also a non-zero chance that a benign AI destroys humanity because of some effort that conflicts with human survival. In other words, the AI destroys humanity as collateral damage relative to some other goal. We’ve seen early AI systems begin to act on their own in benign ways where humans were able to stop them. A more advanced AI with a survival instinct might be more difficult to stop.

There’s also a wild card relative to the first outcome that both sides of the AI debate overlook. On the road to scenario one, the positive outcome and probably the most likely one, humans will need to adapt to a new world where jobs are scarce or radically different from the work we know today. Humans will need to find new purpose outside of work, likely in the uniquely human capabilities of creativity, community, and empathy, the things that robots cannot authentically provide. This radical change will likely scare many. They may rebel with hate toward robots and the humans that embrace them. They may band behind leaders who promise to keep the world free of AI. This could leave us with a world looking more like The Walking Dead than utopia.

Since the advent of modern medicine, humans have been the most probable existential threat to humanity. The warning bells on AI are valid given the severity of the potential negative outcomes (even if unlikely), and some form of AI regulation makes sense, but it must be paired with plans to make sure we address the human element of the technology as well. We need to prepare humans for a post-work world in which different skills are valuable. We need to consider how to distribute the benefits of AI to the broader population via a basic income. We need to transform how people think about their purpose. These are the biggest problems we face as we prepare to enter the Automation Age, perhaps even bigger than the technical challenges of creating the AI that will take us there.


Machines Taking Jobs: Why This Time Is Different

Will AI and robotics revolutionize human labor or not? 

More than half of all US jobs could be disrupted by automation in the next several decades; at least that’s our opinion. About half the people we talk to disagree. Those who disagree think AI will open up new job opportunities by enhancing human abilities. A common element of their argument is that we’ve always had technical innovation and human work has evolved with it. A few examples would be the cotton gin, the printing press, and the automobile. All of these inventions threatened jobs of their era, but they ended up creating more jobs than they destroyed. So why is this time different?

Because, for the first time in history, we don’t need to rely on human intelligence to operate the machines of the future. The common denominator among those three examples and countless other technical innovations is that they were simply dumb tools. There was no on-board intelligence. Humans using those tools provided the intelligence layer. Humans were the brains of the cotton gins, printing presses, and automobiles. If the human operator saw or heard a problem, they fixed it and continued working. Today, the intelligence layer can be provided by computers through computer vision, natural language processing, machine learning, etc. Human intelligence is no longer required.

You might say that machines aren’t nearly as smart as humans, so they aren’t as capable as humans. But in reality, they don’t need to be. The AI required to operate a machine only needs very limited domain knowledge, not human-level intelligence (a.k.a. artificial general intelligence). Think about driving a car. You aren’t using 100% of your total intelligence to drive; a large portion of it is occupied with other things, like disagreeing with this article, singing along with the radio, and probably texting. An autonomous driving system only needs to process image data; communicate with computers in other devices related to driving, like other vehicles, traffic signals, and maybe even the road itself; make dynamic calculations based on those inputs; and turn those calculations into actions performed by the vehicle. Any incremental intelligence not related to those core functions is irrelevant for an autonomous driving system.
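
A toy sketch of that narrow loop (perceive, decide, act) illustrates the point. Every function below is a stub of our own for illustration, not a real driving stack:

```python
import random

# Toy illustration of a narrow autonomous-driving loop: perceive -> decide -> act.
# Every function is a stub; a real system would use trained perception models,
# vehicle-to-everything messaging, and an actual vehicle control interface.

def perceive_camera() -> dict:
    """Stub for image processing: detected obstacles and lane position."""
    return {"obstacle_ahead": random.random() < 0.1, "lane_offset_m": 0.2}

def receive_v2x() -> dict:
    """Stub for messages from other vehicles, traffic signals, or the road."""
    return {"signal_ahead": "green"}

def decide(vision: dict, v2x: dict) -> dict:
    """Dynamic calculation based only on driving-relevant inputs."""
    if vision["obstacle_ahead"] or v2x["signal_ahead"] == "red":
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": -vision["lane_offset_m"]}

def act(command: dict) -> None:
    """Stub for sending the command to the vehicle's actuators."""
    print(command)

# The loop needs no intelligence outside this narrow driving domain.
for _ in range(3):
    act(decide(perceive_camera(), receive_v2x()))
```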

The magnitude of the technological change is also significantly different in this current wave of advancements in AI and robotics. This wave is more akin to the advent of the farm, when humans were still gatherers, or the advent of the factory, when we were still farmers. Farms not only organized the production of food, but also encouraged the development of community and trade. Factories organized the production of all goods, encouraged the development of cities, and enabled our modern economic system by institutionalizing the trade of labor for wages. Automation will result in equivalent fundamental changes to the philosophy of production by taking it out of the hands of humans. This could result in societal changes of greater freedom of location and a basic income. In a way, the Automation Age may be an enhanced return to the hunter/gatherer period of humanity, where basic needs were provided, originally by nature and in the future by machines. Except in the Automation Age, our purpose will be to explore what it means to be human instead of simply surviving.


AirPods Are More Important Than The Apple Watch

At this point, it might not even be that crazy to say it, but we think AirPods are going to be a bigger product for Apple than the Watch. After using AirPods for the past month, the Loup Ventures team is addicted. The seamlessness in connecting and disconnecting with our phones and enabling Siri has meaningfully improved the way we work and consume content. AirPods are a classic example of Apple not doing something first, but doing it better. And they look cool. We think there are three reasons that AirPods are more important than the Apple Watch.

AI-First World
Google has been talking about designing products for an AI-first world for about a year now. In our view, an AI-first world is about more natural interfaces for our screen-less future. Speech is an important component of the next interface. Siri, Alexa, Google Assistant, and Cortana are making rapid improvements in terms of voice commands they understand and what they can help us with.

We view AirPods as a natural extension of Siri that will encourage people to rely more on the voice assistant. As voice assistants become capable of having deeper two-way conversations to convey more information to users, AirPods could replace a meaningful amount of interaction with the phone itself. By contrast, using Siri on the Apple Watch is less natural because it requires you to hold the watch up to your face. Additionally, the screen is so small that interaction with it and the information conveyed by it are not that much richer than an AI voice-based interface.
