An Old Conversation With K9’s Manu Kumar
Manu Kumar of K9 is a pure early-stage technology investor, very hands-on, and one of the few people who recognized the waves around mobile cameras and digital imaging technology overall. If you’re building an app that leverages the camera (or investing in one), this transcript is for you.
@semil: Let’s talk about the mobile phone camera. I know this is an area that you’ve spent a lot of time in. You invested in companies, I think Lytro, a long time ago, which maybe people have just heard about over the last 18 months; CardMunch, which was acquired by LinkedIn; card.io, which was just acquired by PayPal. You have Occipital, which does the 360-degree panoramic pictures on the iPhone. There’s HighlightCam, and then there’s a sixth one you said.
Manu: The sixth one is 3Gear Systems, which hasn’t launched yet.
@semil: How did, as a seed-stage investor, you amass this portfolio of companies that are leveraging the iPhone camera?
Manu: First, not all of them are leveraging just the iPhone camera, but the camera in general. I think four out of the six that you mentioned are leveraging the iPhone camera, and two are using cameras in general: Lytro and 3Gear. I should point out that you were one of the people who actually pointed out to me that I had all of these companies that were using cameras.
@semil: I think I had a Manu Kumar folder on my iPhone.
Manu: It wasn’t something that I was consciously looking for when I was investing, meaning things related to cameras. I was looking at each investment independently. But given my filters of looking for core technology and radical innovation, it turns out a lot of the stuff seems to be happening around cameras.
@semil: Can you walk us through it a little bit? In the last five years with the iPhone, somewhere in-between there was some tipping point, right? MG has written about this with Instagram on the consumer side. What did you see from the hardware point of view that enabled the software to have people snapping images all over the world?
Manu: I think there were a couple of hardware innovations that happened that enabled things to take off. One was just straight-up resolution. The resolution on the original smartphone cameras just was not good enough to take interesting pictures. In fact, I think that was one of the big motivations behind Instagram: making pictures look good. What happened somewhere around two or three years ago is that the camera quality on phones actually became really good. It became good to a point where it’s comparable to the quality you would get from a point-and-shoot, and it’s only going to improve in that direction. A couple of key technical innovations: one I mentioned is resolution, the second is auto-focus. Older phones used to be fixed-focus, focused at infinity, so you couldn’t really take things that were up close. In fact, you mentioned CardMunch and card.io, both of which use the iPhone camera to take a picture of a business card or a credit card. Neither of those would have worked if we didn’t have auto-focus cameras. The focus distance of a fixed-focus camera was so far out that the image you would capture would be too small and illegible. Having an auto-focus camera really made it possible for both of these companies to do what they were doing.
@semil: A couple of quick questions there. Was it not possible then to have software to auto-focus the image, or wouldn’t it happen fast enough, or would the image quality be degraded, or did you need that step-up in the hardware?
Manu: You needed that step-up in the hardware as an enabling feature. Essentially, the image that you would capture with a fixed-focus camera, if you put the card too close to the camera, would be blurry. Once it’s blurry, sometimes it would be so blurry that even humans couldn’t read it, and so having auto-focus cameras was a big change. The third change that happened is the introduction of back-lit (backside-illuminated) sensors. Once the sensor is back-lit, it’s actually capturing a lot more light, and that’s what improves the overall image quality. This is stuff that we’ve seen in the iPhone 4 and 4S, and in a lot of the Android phones as well. That improvement in image quality, the auto-focus, and the higher resolution all contribute to being able to do interesting things with the camera. A fourth piece, which is not necessarily related to camera hardware, is the improvement in overall processing power on the devices themselves. The devices are now capable of doing some fairly heavy-duty graphics computation that they were previously not able to do. When you take the example of something like Occipital, where you are able to literally just wave your phone around and grab a panorama, they’re doing some very heavy-duty computation on the phone as well as on the server side. That wasn’t possible in earlier-generation phones, and is now possible on the phones that we have today.
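For developers, the concrete hook for the auto-focus capability Manu describes is the platform camera API. The snippet below is a minimal sketch using today’s AVFoundation calls (not anything CardMunch or card.io is confirmed to have shipped) of requesting continuous auto-focus on the back camera so that close-up text, like a business card, stays legible.

```swift
import AVFoundation

// Illustrative sketch: ask the back camera for continuous auto-focus,
// the hardware capability described above as the enabler for close-up
// capture of business cards and credit cards.
func enableAutoFocus(on device: AVCaptureDevice) {
    guard device.isFocusModeSupported(.continuousAutoFocus) else {
        return // Fixed-focus hardware: close-up text will stay blurry.
    }
    do {
        try device.lockForConfiguration()
        device.focusMode = .continuousAutoFocus
        device.unlockForConfiguration()
    } catch {
        print("Could not configure focus: \(error)")
    }
}

// Usage: grab the default back (wide-angle) camera, if one is present.
if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                        for: .video,
                                        position: .back) {
    enableAutoFocus(on: camera)
}
```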
@semil: I see. Just quickly, before we move on: how did you amass this knowledge base? Because I know you invested in Lytro years ago; you met the founder when you were at Stanford. What personally caught your interest in this area?
Manu: When I was at Stanford, my PhD was actually on eye-tracking. Eye-tracking also happens to use cameras. For eye-tracking, you essentially have a camera that’s looking at your face, and it’s really looking at your eyes, trying to figure out what your eyes are looking at. I was doing eye-tracking, and as a continuation of my interest in eye-tracking, I also started looking more into what cameras were being used for and what they were doing. A lot of what I learned about cameras actually came from working closely with Lytro during its early days. I didn’t know much about imaging and cameras when I started, but just being close to Ren and helping Lytro helped me learn more about that.
@semil: Education by investment.
Manu: Education by working on a startup, yes.
@semil: Looking ahead a little bit, I think a lot of people who are interested in photos are developing for the iPhone or thinking about Android Jelly Bean updates. Walk us a little bit through the future, like in iOS 6. What’s going to change and improve in the hardware that can enable innovations in the software that you see coming up?
Manu: I can talk in general about things that I would like to see; I don’t know what Apple’s got planned, or what Google’s got planned. Basically, you already have two cameras in most devices: a front-facing camera and a back-facing camera. The quality of the front-facing camera is actually still very low. It’s almost a VGA camera in most cases.
@semil: Is that done for any specific reason?
Manu: I’m guessing it’s just a cost reason, and over time I would expect the quality of that camera to also improve. The direction the camera is looking relative to the screen actually has a big impact on what you can do with it. There are applications that I’ve looked at which need a front-facing camera, but the quality of the front-facing camera isn’t good enough. The first thing is that they just need better quality there. Another innovation, which is a little bit further down the line, is depth-sensing cameras. A typical camera is just going to capture RGB. With a depth-sensing camera, you are getting RGBD, where the D is depth. That opens up a whole other realm of possibilities for things that can be created. I have companies in my portfolio that are actually working with depth-sensing cameras.
@semil: What’s an example on a consumer level, of what somebody can do with a depth-sensing camera that they can’t do today?
Manu: The company that I’m working with is actually doing hand-based gestures. They’re using a depth-sensing camera. Now, this is not in the context of a mobile phone; this is a conventional, standalone depth-sensing camera. The Kinect is a classic example of what a depth-sensing camera can do. The Kinect is essentially a depth-sensing camera, and it opened up a whole new realm of how we interact with games and with devices. You can imagine taking the technology that is in the Kinect and bringing that down into phones and other places. It opens up new ways of being able to interact with these devices.
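To make the RGBD idea concrete: the extra depth channel lets you recover a 3D position for every pixel, which is what gesture systems like the one Manu describes are built on. Below is a minimal sketch of the standard pinhole back-projection with invented calibration numbers; it is not taken from 3Gear’s or the Kinect’s actual code.

```swift
import simd

// Hypothetical camera intrinsics; real values come from calibration.
struct Intrinsics {
    let fx: Float   // focal length in pixels, x
    let fy: Float   // focal length in pixels, y
    let cx: Float   // principal point, x
    let cy: Float   // principal point, y
}

// Back-project one depth pixel (u, v) with depth d (in metres) into a
// 3D point in the camera frame. This is the step an RGB-only camera
// cannot do, and it is what makes Kinect-style hand tracking possible.
func backProject(u: Float, v: Float, depth d: Float,
                 k: Intrinsics) -> SIMD3<Float> {
    let x = (u - k.cx) * d / k.fx
    let y = (v - k.cy) * d / k.fy
    return SIMD3<Float>(x, y, d)
}

// Usage with made-up numbers: a pixel near the image centre,
// half a metre from the sensor.
let k = Intrinsics(fx: 525, fy: 525, cx: 319.5, cy: 239.5)
let point = backProject(u: 330, v: 245, depth: 0.5, k: k)
print(point)   // roughly (0.01, 0.005, 0.5)
```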
@semil: For developers and potential developers out there who are looking to build out the next application that leverages these advancements, what kind of advice would you give? What would be your general advice, after coaching and investing in so many entrepreneurs?
Manu: The biggest thing is that a lot of these types of applications, which rely on computer vision, graphics, hardcore algorithms, and computation, are not weekend projects. These are projects which actually require a lot of depth of expertise to pull off. In fact, with card.io, which was just acquired this week by PayPal, on the surface it’s, “Oh, you use the camera to scan a credit card,” and you’re like, “Yeah, obviously. Why shouldn’t you be able to do that?” But it’s actually a very hard problem to get right. Most computer vision-based applications and products require deep expertise, and so having a team that actually has that deep expertise is very important. When you talk about advice for developers, my advice to them is always, “It is important to have deep knowledge in a certain area before you can really unlock something new.” That’s the focus of the types of things that I invest in as well: deep technology. Folks who are doing a lot of computer vision work should actually now start looking at the new devices and new cameras that are emerging, and think about how they can bring the things they were doing in a lab setting or a research setting to these devices. You now have the capability sitting in your pocket to actually be able to do some of those things.