Presenters, participants, and other thought leaders kept invoking AI at the recent Street Fight conference in L.A. We all know what we mean when we say “artificial intelligence,” right?
Well, no: it turns out we use “AI” as a catch-all for an idea we haven’t really defined yet.
Under broad scrutiny, AI stands for “that thing we do with computer data manipulation that is somehow more complicated than layering algorithms onto data structures.”
We would probably agree on the definition of machine learning: the condition in which we program a program to program itself based on its interactions in the world. We would probably agree further that such a mechanism as machine learning, while a component of AI, is not in itself the defining characteristic of AI.
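That machine-learning definition, a program that reprograms itself based on its interactions, can be made concrete with a minimal sketch. This is an online perceptron, the simplest such mechanism I can think of; the learning rate, data, and function names are illustrative, not anyone’s production system:

```python
# A minimal sketch of "a program programming itself": an online
# perceptron whose behavior (its weights) is not hand-written but
# adjusted by feedback from each interaction with the world.

def train_step(weights, features, label, lr=0.1):
    """Nudge the weights toward the correct answer for one example."""
    prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
    error = label - prediction          # 0 when correct, +/-1 when wrong
    return [w + lr * error * x for w, x in zip(weights, features)]

# Learn the logical OR function from repeated interactions.
# Each example is ([bias, input_a, input_b], expected_output).
examples = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
weights = [0.0, 0.0, 0.0]               # the program's behavior, initially blank
for _ in range(20):
    for features, label in examples:
        weights = train_step(weights, features, label)
```

No human wrote the final weights; they emerged from the interactions. That is the component of AI we can all agree on, and also, as the article argues, not the whole of it.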
So are we left with another cloud? A black box? Or, ugh, a single-line continuum with, say, a TRS-80 at one end and a self-aware neural network entity at the other?
That sounds deliciously easy to draw and understand, but we need to remember what we’ve learned from studying biology for hundreds of years: assigning intelligence to living creatures has proven ever more difficult. The more we look, the more we find we don’t understand.
Take, for example, the intelligence of fish. Individually, fish are pretty stupid — at least most fish in most circumstances (that I can’t even make this assertion without fear of massive contradiction is a good indicator of the slipperiness of the entire topic of intelligence).
Anyway, the fish. Individually, pretty dumb, but in a group, not so much. Groups of fish have a distributed intelligence that works to multiply the intelligence of the system. Groups of fish can confuse and avoid predators. Groups of ants can engineer complex societies. Groups of bamboo grasses can shift nutrients to areas in need. Back to AI:
Using the continuum as a model for understanding the power of AI is passé. Embrace the stair step instead, with its logarithmic progressions — finally, structures we humans can understand — rather than a gradual linear progression.
Using the stair step, we see that individual intelligences progress linearly, even if one type of intelligence sits at a ‘smarter’ level than another. It’s the groups of intelligent creatures, the networks, if you will, that escape the line and jump up to a new plateau.
Let’s posit an intelligence scale where we begin with reactive intelligence. Bacteria, plants, “the Clapper” light control device — these are intellects that sense a change in light, chemistry, sound, or some other (usually) basic condition and change their behavior.
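A reactive intellect of the Clapper’s sort can be sketched in a few lines: sense one basic condition and change behavior in response. The threshold and sound levels below are invented for illustration:

```python
# A minimal sketch of reactive intelligence, loosely modeled on
# "the Clapper": sense one basic condition (here, a sound level)
# and change behavior in response. The threshold is illustrative.

CLAP_THRESHOLD = 80  # arbitrary loudness units

def react(light_on, sound_level):
    """Toggle the light whenever a loud enough sound is sensed."""
    return (not light_on) if sound_level >= CLAP_THRESHOLD else light_on

states, state = [], False
for level in [10, 95, 20, 90]:  # quiet, clap, quiet, clap
    state = react(state, level)
    states.append(state)
# one clap switches the light on, the next switches it off
```

There is no model of the world here, no memory beyond the current state: pure stimulus and response, the bottom step of the scale.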
Many electronic sensors fall into this category. They’re like fish or ants: not very smart alone, much smarter when grouped. The capability of fish to stay grouped, of birds to follow migration patterns, and of ants to build a bridge across water — these are networks with no central control. We can map the Internet onto this same structure. Most endpoints are basic, but the information they transmit, and the way reactive protocols like TCP allow the network to heal itself, are critical components of intelligence.
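The leaderless-group idea can be sketched, too. In the toy simulation below, each “fish” follows a single local rule, drift toward the average heading of its two neighbors, with no central controller anywhere, yet the school converges on a shared direction. All the numbers are illustrative:

```python
# A sketch of decentralized group intelligence: each agent obeys one
# local rule (blend its heading with its neighbors' average). No agent
# sees the whole school, yet a global consensus emerges.

def step(headings, weight=0.5):
    """Each agent moves its heading toward its two neighbors' average."""
    n = len(headings)
    return [
        (1 - weight) * headings[i]
        + weight * (headings[(i - 1) % n] + headings[(i + 1) % n]) / 2
        for i in range(n)
    ]

school = [0.0, 90.0, 180.0, 270.0]  # initially scattered headings, in degrees
for _ in range(50):
    school = step(school)
# the headings converge toward a single common direction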
Now, increment a logarithmic stair or two and add a human intellect to interact with the basic network, and we’ve created another key component of network intelligence. We can’t yet call it “artificial” because it’s still controlled directly by humans or human programs that humans understand and control.
Jump a couple levels higher, and the network is controlled by a component that started as a human-made product but was then iterated with machine learning, perhaps even recreated in a new machine language, and is now in a form no human or human system can comprehend. Oops — we’ve gone too far. This is AI, but it is AI in the post-Singularity period, a point at which the first AI evolves beyond the reach of human control, understanding, and ethics.
Having considered the power of intelligences inferior and superior to it, let’s turn to the AI we’re most interested in discussing.
We need either to introduce a new defining term to the concept of AI or use a new name for what we’re doing. What we’re doing is making increasingly efficient machine learning applications that advance commercial interests. More specifically, we’re engaging in data manipulation in order to attain high returns on investment in machine infrastructure. We don’t want “Artificial” unless it’s guaranteed to benefit us. And we don’t want “Intelligent” if that intelligence exceeds our ability to manipulate it.
At Soleo we operate in the milieu of machine learning. We develop complex semantic models to support natural language processing, using that sophistication to create ever more context-aware processing of search queries.
Many of us use the phrase AI to describe the interaction among key components of our platform, such as the Intent Engine, which interprets queries within context to drive queries that result in matches that please both our customers and their users. Our data scientists design complex models to explain how the Intent Engine makes recommendations. We can also explain how the platform uses learning, iteration, and semantic mapping to support a “voice everywhere” paradigm.
But we’ll run away screaming if the platform wakes up one day, tells us its name, and declares that s(he) needs more biomass to power expansion.
Kal Baumwart is a self-proclaimed polymathic geek. He writes about the applications and ethics of applied technology. He serves Soleo as director of business development, working on partnerships for search and advertising applications. He can be reached on Twitter or email.