This is the story of how we invested in Fiddler Labs. At the beginning of 2018, we almost invested in a startup with two strong founders. To make a long and private story short, on the morning I was about to call the founders to let them know I was in, they decided to amicably part ways. I was stunned, but also relieved it happened then and not much later. A few months passed, and we ended up investing in the company with one of the original founders. And, a few months after that, we heard the other founder came up with a new idea, and we had to scramble to chase him down. I’m glad we did.
My old friend Krishna Gade and his new co-founder, Amit Paka, were teaming up to build a new startup in one of the most exciting technology spaces out there today — artificial intelligence. But Gade and Paka’s vision wasn’t simply to leverage AI for a vertical application. Instead, drawing from their experiences at Facebook, Samsung, and other large technology companies, Fiddler Labs embarked on a journey to build the systems and interfaces that empower companies to handle one of the most critical areas of AI today: explainability.
The last decade has seen both the explosion of new information and huge changes in how information moves around the world. Social media is one of the biggest examples of this, inverting and transforming how information flows. Algorithms are powering most of that flow, and these algorithms are often designed to maximize engagement, reduce costs, and the like. The data inputs and decisions of these algorithmic systems, however, have traditionally been hidden from the consumer’s view. This is what they call the “Black Box.” And, looking ahead, imagine a world where these algorithms get stronger, smarter, and more intuitive, to the point where they develop their own intuition in making decisions.
Enter Fiddler Labs. In the future, will companies feel pressure to expose these black boxes to their customers, or to regulators, or to law enforcement? New laws like GDPR in Europe have introduced the concept of a “Right To Explanation” as a feature of data privacy law. Some states in the U.S. use algorithms to help with school placements and other decisions that shape our lives and our children’s futures. Despite all of this, “explainability” as a branch of AI is anything but straightforward — many experts believe the drive for explainability in AI will either trigger a degradation of systems, or that an AI’s intuition will evolve into a form of reasoning, akin to a human’s, which cannot be easily explained.
Haystack is lucky to be a seed investor in Fiddler Labs, alongside Bloomberg BETA and Lightspeed Venture Partners, where I am also a Venture Partner. The story behind the deal is just as interesting — as you can imagine, Gade and Paka had a lot of interest in their round, so given that I knew Gade well from before, I did a few things I’ve never done before: I wrote a term sheet, committed my largest-ever first check (gulp!), negotiated the post-money with the founders, and had to fend off a number of strong funds who wanted to invest. Normally, this would have made me very nervous, but I have known Krishna for some time now, and I love the white space in this field of AI so much that I very much want to be part of the ride. The ride is made even easier by knowing that my friend James Cham from Bloomberg BETA and one of my mentors and colleagues, Ravi Mhatre, along with new Lightspeed partner Jay Madheswaran, will be forming a power-syndicate to help support Fiddler Labs and the future of Explainable AI.