It’s the greatest marketing slogan of all time: “Artificial Intelligence will kill us all”. It was true five years ago, and it still is. They warn you the machines will grow past today’s weak Siris and Alexas into strong, HAL-9000 killers. At least, that’s what the talking points say; you tend to get attention when the product roadmap includes ‘Armageddon’ as a feature.

“Don’t worry,” they continue, “we’re making great progress and we’ll get there in 20 or 30 years. Until then, these bots and things are completely harmless! Look how cute!” But these bots (by definition ‘weak’ AIs) actually have the potential to sink us long before Skynet incinerates us. Ask yourself:

If artificial intelligence is embedded into my environment to help me find what I want and need based on my behavior, and I behave according to its suggestions, at what point does my own intelligence cease to be a part of the equation?

This is, simply put, a feedback loop. This is behavior, auto-corrected.
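The loop is easy to sketch. Here is a toy simulation (entirely hypothetical; no real recommender works this simply): the system suggests whatever it has already seen me do most, I mostly take its suggestions, and round by round my behavior narrows.

```python
import random
from collections import Counter

random.seed(0)

ITEMS = list(range(20))          # 20 possible interests I might have
history = Counter()              # everything the system has watched me do

def recommend(k=3):
    """Suggest what the system has already seen me do most."""
    if not history:
        return random.sample(ITEMS, k)   # cold start: guess at random
    return [item for item, _ in history.most_common(k)]

def act(suggestions, follow_rate=0.9):
    """I usually take a suggestion; occasionally I explore on my own."""
    if random.random() < follow_rate:
        return random.choice(suggestions)
    return random.choice(ITEMS)

# Behavior, auto-corrected: 200 rounds of suggest -> act -> record.
for step in range(200):
    history[act(recommend())] += 1

print(len(history), "distinct interests touched;",
      "top item's share of all activity:",
      round(history.most_common(1)[0][1] / 200, 2))
```

Run it a few times: a handful of items dominate almost immediately, and my occasional independent choices rarely break into the top suggestions. The `follow_rate` knob is the whole question above, in miniature.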

To build a suggestion, you need data. But there is already plenty of data collected now, and more on the way. Everything is connected: my car, my home, my work, my medical records, all my means of communication. Plus my thermostat, my toaster, and probably my toilet. Telemetry is engineered into the design of everything today, so I can’t just cut the purple wire, or put a piece of electrical tape over the offending bit. Every major ecosystem has created a psychometric profile of me.

Since it’s not the government doing this, then it’s totally OK, right?

The Artificial Intelligence Problem

I digress…if everything is wired and instrumented and processed by algorithm, and the system regularizes and metes out my options by mathematical regression, how am I to find New things? Where does New enter the equation? How does New get exposure (and how much would it cost to get a little more)? Do I trust them to add some spice to the equation? Or is the output bought and sold on closed exchanges, like display advertising? And at what point is that exchange turned over entirely to machine control?

You can see a glimmer of this now. Twitter suggests hashtags based on traffic, and it is always trying to auto-complete what I type based on trends. You can buy that promotion from Twitter itself. And tens of millions of Twitter accounts are already bots: collections of rules and scripts, most intended to produce some sort of financial gain, but all helping to drive trends.

I’m not saying this is intentional manipulation of behavior on a wide scale, but it’s getting there. The mechanism intentionally monetizes the behavior, which is practically the same thing.

Longer term, if the system pushes everyone ‘like’ me in the same direction, how long before everyone is ‘like’ me? What does ‘like me’ look like? Can I change it? How do I know what I want to change it to? Can it tell me? Will I notice? What will it cost? Will there be 5 billion unique ‘me’s, or an Apple-esque selection of 10 ‘great’ People Personality Models? Will they call them iMe? iMyself or maybe just iI?
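For the record, here is an equally hypothetical sketch of that longer-term question: give a population of agents varied tastes, nudge each one toward the crowd’s average every round (“people like you also liked…”), and measure how fast five billion unique ‘me’s collapse toward one.

```python
import random
import statistics

random.seed(1)

# Each agent's "taste" is a single number; start with wide variety.
tastes = [random.uniform(0, 100) for _ in range(5_000)]

def nudge(tastes, strength=0.2):
    """Pull everyone a little toward the population mean each round."""
    mean = statistics.fmean(tastes)
    return [t + strength * (mean - t) for t in tastes]

spread0 = statistics.pstdev(tastes)      # diversity before
for _round in range(20):
    tastes = nudge(tastes)
spread1 = statistics.pstdev(tastes)      # diversity after 20 rounds

print(f"spread before: {spread0:.1f}, after 20 rounds: {spread1:.1f}")
```

The contraction rate is just the `strength` parameter; the point is that any nonzero pull, applied at the timescale of a trending topic rather than a generation, converges fast.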

It may seem crazy, but you see what I mean.


When I first wrote this in 2016, “giving the keys to Alexa” looked like a distinct possibility. Five years, two presidential elections and a pandemic have all but proven it’s going this way. Except the mechanism isn’t composed entirely of weak AI and algorithms. A vital component is its reliance on the biases and tendencies of humans to perceive and believe things a certain way. For example, confirmation bias is the human tendency to believe things that confirm what one already believes, no matter how wrong they may be. These “mind tricks” supercharge technology’s effect.

In the End

At some point, maybe we invent a microscopic artificial intelligence in the form of grey goo which promptly eats us, then figures out how to circumvent the laws of thermodynamics and goes on to destroy the Known Universe. Yay us.

And one can make the argument that society already shapes our preferences through mass indoctrination and the enforcement of cultural norms, although this is on the timescale of generations. I’m talking about the timescale of a trending topic.

This loop is already subtly steering our behavior. In the end, the question is: how long before it’s entirely automatic? We won’t wake up in the Matrix…we’ll all just wake up in a Gap commercial.

I wrote this off the cuff as an editorial on artificial intelligence. I did not have time to include a mega-extensive literary analysis so I could continuously drop names like Minsky, Dennett, Fermi, etc., and it would have been TL;DR anyway. If you’re into that, I’m sure something will auto-suggest it for you.

All trademarks, plot devices and fictional characters mentioned above are the property of their respective mark holders or their successors, assignees, roadies or stans.