The AI is Always Watching | Hackaday

My phone can now understand me but it’s still an idiot when it comes to understanding what I want. We have both the hardware capacity and the software capacity to solve this right now. What we lack is the social capacity.

We are currently in a dumb state of personal automation. I have Google Now enabled on my phone. Every single month Google Now reminds me of bills coming due that I have already paid. It doesn’t see me pay them; it just sees the email I received and the due date. A creature of habit, I pay my bills on the last day of the month even though that may be weeks early. This is the easiest thing in the world for a computer to learn. But it’s an open loop system and so no learning can happen.
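The missing feedback loop is simple to sketch. Here is a minimal, hypothetical illustration (the function and its inputs are invented for this example, not any real Google Now API): once the assistant can observe a payment event, suppressing the redundant reminder is a one-line check.

```python
from datetime import date

# Hypothetical sketch of "closing the loop": a reminder fires only if
# no payment event has been observed for the bill in question.
def should_remind(due, today, payment_dates):
    paid = any(p <= today for p in payment_dates)  # the closed-loop signal
    return not paid and (due - today).days <= 7

# Open loop: the assistant never sees the payment, so it nags anyway.
print(should_remind(date(2017, 3, 31), date(2017, 3, 28), []))
# → True

# Closed loop: a payment on the 1st suppresses the end-of-month reminder.
print(should_remind(date(2017, 3, 31), date(2017, 3, 28), [date(2017, 3, 1)]))
# → False
```

The hard part, of course, is not this check but giving the assistant visibility into the payment in the first place.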

Earlier this month [Cameron Coward] wrote an outstanding pair of articles on AI research that helped shed some light on this problem. The correct term for this level of personal automation is “weak AI”. What I want is Artificial General Intelligence (AGI) on a personal level. But that’s not going to happen, and I am the problem. Here’s why.

Blindfolding AI

Like most people, I now consider my phone part of who I am. Although I spend hours a day using an actual desktop computer (that’s the kind where the monitor and keyboard aren’t one integrated part of the computer), much of my life passes through a 5.2″ touchscreen. Google is always watching, but for now it’s relegated to a small portion of what is going on. It sees those monthly bill notices I previously mentioned because I use gmail, but it doesn’t watch my browser activity closely enough to see me pay them.

Everything in life needs an impetus to happen. If there isn’t a closed loop on my bill payments, it’s no surprise that I get redundant reminders about them. This means my current automation is annoying rather than assistive. It can’t see everything I do.

How to Look at Someone Without Creeping Them Out

Cayla is always listening

In many cultures there is a social norm that you don’t stare at people. That is to say, there are times when it is and isn’t appropriate to look at people; there is a maximum amount of time you can continue gazing upon them; and the rules that make this work are a game of moving goal posts.

Yet almost all humans are capable of learning this game, and do. Even strangers who have never met you before can quickly recognize when you need help and whether they should offer it or not. This is the keystone to unlocking useful personal AI. It’s also an incredibly difficult task.

A much easier method is to watch absolutely everything the user does. This makes a lot more data available but it’s super creepy and raises a ton of ethical concerns. Being observed the majority of the time is unprecedented — there’s no human-to-human paradigm for this type of watchfulness. And the early technology paradigms have not been going well. Just last week authorities in Germany recommended that owners of a doll called “Cayla” destroy the microphones housed within. The doll’s microphone is always listening, routing what is heard through a voice recognition service with servers outside of the country.

Creepiness aside, privacy is a major issue with allowing a system to watch everything you do. If that information is somehow breached it would be an identity theft goldmine. Would your AI need to know to shut itself down anytime you walk into a public restroom, hospital, or other sensitive environment? How could you trust that it had done so on every occasion?

My mind also jumps to a whimsical scenario where your personal AI gets a bit too smart and decides to blackmail you (a Douglas-Adams-like thought… I will try to keep this discussion on the track of what is plausible). More likely, once your personal assistant knows you well enough and proves it can get you to do your work more efficiently it’ll be promoted from your assistant to your manager. Are you still an effective team?

Machine Learning as a Social Norm

Seth Bling’s neural network learning Super Mario World

Machine learning is the key to doing amazing things. But gain a bit of understanding of how it works and you immediately see where the problem lies. A machine can learn to play video games at a very high level, but it must be allowed to see all aspects of the game play and requires concrete success metrics like a high score or rare/valuable collected items.
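Those two requirements — full visibility into the game state and a concrete score to optimize — can be made concrete with a toy sketch. The following is an illustrative tabular Q-learning example on a tiny invented 1-D “game” (move right to reach the goal), not a real game-playing system like the one in the video above; every name in it is made up for the example.

```python
import random

random.seed(0)

# Toy tabular Q-learning: the agent sees the full game state (its position
# on a short number line) and optimizes a concrete score (reaching the
# rightmost cell). Actions are -1 (move left) and +1 (move right).
def train(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}

    def pick(s):
        if random.random() < eps:            # occasional exploration
            return random.choice((-1, 1))
        acts = [-1, 1]
        random.shuffle(acts)                 # random tie-break early on
        return max(acts, key=lambda a: q[(s, a)])

    for _ in range(episodes):
        s = 0
        for _ in range(200):                 # cap episode length
            a = pick(s)
            s2 = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s2 == n_states - 1 else 0.0   # the "high score"
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
            if reward > 0:
                break
    return q

q = train()
# After training, moving right scores higher than moving left everywhere.
print(all(q[(s, 1)] > q[(s, -1)] for s in range(4)))
```

Note what the agent needed: the complete state after every action, and a scalar reward. Take away either one and this loop learns nothing — which is exactly the position a personal AI is in when it can only see fragments of your day and never finds out whether its suggestion helped.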

Yes, for a personal AI to be truly useful it must have nearly unrestricted access to collect data by watching you in daily life. But I think it goes even a step further. An AI can’t speed-run your Monday over and over the way it would a level of Super Mario World. For machine learning to work in this case it needs to share data across large populations to get a useful set. It would definitely work, but that’s a peeping-tom network of epic proportions. That’s not an uncanny valley, it’s a horror movie plot.

We have already seen the implications of this flavor of data collection. Social media is machine learning without any of the AI benefits. Millions of people have published what might seem to them innocuous information on innumerable platforms. But big data turns that innocuous information into predictions about the behavior of segments of the population.

If the dopamine drip of social media got people to share all of this data, what impact would effective personal AI have? It would be your friend, advisor, and confidant, all in one. I tip my hat to Charles Stross, who depicted a very scary AI in his book Accelerando. It takes the form of a realistic robotic house cat. It’s incredibly easy to underestimate abstract intelligence.

Given Access, AI Still Lacks Vision

Google’s ‘Inceptionism’ turned out some trippy images but it still doesn’t know what it’s looking at.

The current state of the art could allow a unified data collection effort to watch everything on your various computers and portable devices. It could listen to the audio in your life. It could even record video, though of limited use.

First things first: even given total digital access to your life, it is a big task to make sense of everything you’re doing. This is not an insurmountable challenge right now, but it would certainly require that the processing happen remotely to get the necessary horsepower. The same goes for audio data. This is already the case for many systems like Amazon’s Echo, Apple’s Siri, Google’s Allo, and for children’s toys like the aforementioned Cayla and Mattel’s Barbie.

Video recognition doesn’t really exist right now. This is the real cutting edge of a lot of robotics research (think self-driving cars and military robots) so it is surely coming. As with voice recognition, there are services like Google Cloud Vision that depend on a system of constraints: orientation of the item to the camera, lighting levels, known sets to compare, and more. But in the foreseeable future I don’t think that dependable computer vision will be a suitable data source for personal AI purposes.

This is a real problem for making sense of our lives. How will your AI know who you are talking to? Without a view of what you see, gathering context becomes very hard. And the most obvious route for this input would have been wearable cameras like Google Glass. We all know how that turned out. Perhaps Snapchat’s entry into that field will change the landscape.

What We Could Get But Won’t

Okay, I’ve done a lot of bellyaching about the problems. If those were all solved, what do I actually want? In a nutshell I want my intelligence augmented.

If my wife and I have a passing conversation about a musical coming to town, I want my personal AI to remember and tell me when tickets go on sale. For that matter, I want it to know my seating and cost preferences, and to check my calendar and my wife’s calendar to choose the perfect day, simply asking me to pull the trigger on the purchase. I want it to know that we usually pair a show with a dinner or with drinks afterward, and to collate our restaurant visit history to guess which place we would most enjoy visiting this time around. I want the moon.

But I also want privacy. I want my humanity, and I want to live my own life. So I’ll pull myself back from visions of a brave new world and appreciate what we have: access to information which was lunacy to imagine 30 years ago. Technology will continue its march forward and we will benefit from it. But for now that tech isn’t watching us, and can’t watch us, closely enough to make an Artificial General Intelligence system part of our daily lives. But people will try, and that will be very interesting to read about on Hackaday.