Long before Instagram was created, nearly every technology observer recognized the massive potential of mobile phone cameras. Kevin Systrom, Instagram's well-known CEO, once remarked on stage (paraphrasing): “When the iPhone 4 launched, most people saw a phone with a great camera — we saw a great camera with a network.” Since the rise of Instagram, the race to leverage the mobile phone camera — the most important sensor in the world — has been furious. While connecting a phone’s GPS sensor to the network has given rise to a company like Uber, the camera sensor has helped catapult Facebook into a $300B+ company; it has given rise to Facebook’s notorious competitor, Snapchat; and, as 2016 unfolds, it is having unprecedented effects on consumer behavior.
Only about halfway through 2016, the harnessing, manipulation, and augmentation of the world’s most important sensor has been stunning to watch. Some effects have been a joy, such as the new app Prisma, which uses a form of artificial intelligence (neural networks) to transform our ordinary camera pictures into prisms of art. The app is a creative hit, taking what is traditionally an Instagram feature and turning it into something more: a demonstration of what just a few engineers can build and distribute in real time. And, sadly, some effects have shown us pieces of humanity we have often been shielded from — specifically authentic, unfiltered video evidence, in the form of livestreaming, of unnecessary brutality involving defenseless citizens and the authorities who are entrusted to keep the peace. While Prisma distorts reality, Facebook’s livestreaming clarifies, with raw purity, the dangers society faces.
As Prisma and Facebook reshape and illuminate our physical reality, as told by our phones’ camera sensors, the latest consumer sensation to burst onto the scene — Pokemon Go — also leverages the most important sensor in the world (and also the GPS sensor, to boot!) by empowering its users to augment their reality to play a simple game built by a team with a not-so-simple story.
I remember first hearing about Ingress, the first app from Niantic Labs (which created Pokemon Go for Nintendo), from Liz Gannes, with whom I was chatting about mobile apps that could be Android-first. Back then, Niantic’s Ingress could only have been created on Android, as it gave the developers deep access to Google’s maps. Ingress was pioneering in that it turned the real world into a massive location-based game, pushing players to hunt for things by using their cameras to augment the maps on their phones. Loyal Ingress users, whether in a big city or a sparse suburb, could find joy in firing up the game and going on a treasure hunt, linking to new fields and players. The game encouraged users to travel outside into the real world, phone in hand, and, as the Niantic CEO stated in a fantastic interview from 2015, to deliver “differentiated client experiences that interface into that same game world.”
There’s a significant amount of mobile product and marketing detail for me to dive into on Pokemon Go, not to mention the implications of this new consumer behavior for the newer generation (Generation Z) that will follow the current favorite (Millennials) — I will write more on Pokemon Go later this week. In the meantime, it has reminded me of this old post I wrote.
For now, I need to catch my breath and just sit in awe of how powerful and transformative the tiny camera and location sensors in our pockets are when connected to the network, to other things (real or imagined), to latitude and longitude, and to other people we know well or hardly know. With cheap data, robust infrastructure (to handle video transmission), cloud computing (to process precise location), open source software frameworks (to harness neural nets), and the creativity and courage of creators and broadcasters (whether playing a game like Pokemon Go or broadcasting raw reality in the face of danger), we are collectively distorting, focusing, and augmenting our realities and reshaping how we interact with the real world.