About the authors: Noah Giansiracusa is an assistant professor of mathematical sciences at Bentley University and the author of "How Algorithms Create and Prevent Fake News." Paul Romer is a professor at NYU and a co-recipient of the 2018 Nobel Memorial Prize in Economics.
The story of a Google engineer (and Christian mystic) who saw signs of personhood in Google's latest artificially intelligent chatbot software, and was later fired, has reignited public debate over whether any of today's AI systems are sentient. The consensus among experts is that they are not: see this, this, this and this, for example. We came to the same conclusion in a different way, using a little mathematical formalism to cut through the fog of confusion. A chatbot is a function. Functions are not sentient. But functions can be powerful. They can fool people. The important question is who controls them and whether they are used transparently.
Mathematics is powerful because it embraces two extremes: abstraction and specificity. If you are tempted to think that functions can be sentient, start with some specifics. Which of these four functions is the most sentient?
- f(x) = 5x + 7
- f(x) = sin(x)
- f(x) = log(x²)
- f(x) = 3cos(x⁴ − 7x²) + 4
Counting symbols is one way to measure complexity. It takes nine symbols to specify the first function versus 19 for the fourth (and the three-letter symbol "cos" is itself a placeholder for its own complicated function, the cosine). So some functions are more complex than others. But more sentient? Obviously not.
In case your high school math is rusty, these are arbitrary functions we invented that, when graphed, produce various lines and curves.
Now move from the specifics to the overarching abstraction. The abstract notion of a mathematical function is so powerful precisely because it lets us use our understanding of simple instances to reason about relatives of any complexity. A function is a rule that converts one number (or list of numbers) into another number (or list of numbers). By this definition, all AI systems in use today, including the LaMDA chatbot that sparked the recent controversy, are functions. AI systems are much, much more complex than the four functions listed above. It would take hundreds of billions of symbols to write down the formula for the function that is LaMDA. So LaMDA is a very complex function, but a function nonetheless. And no function is sentient.
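To make the abstraction concrete, here is a minimal sketch of the four toy functions above written as Python code. The code is our own illustration, not anything drawn from LaMDA itself; LaMDA differs from these little rules in scale, not in kind.

```python
import math

# The four toy functions from the list above, each written as a rule
# that converts one number into another number.
def f1(x): return 5 * x + 7
def f2(x): return math.sin(x)
def f3(x): return math.log(x ** 2)
def f4(x): return 3 * math.cos(x ** 4 - 7 * x ** 2) + 4

print(f1(2))  # 17
print(f4(2))  # 3*cos(16 - 28) + 4, roughly 6.53
```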
Abstractions help us reason by letting us look past irrelevant details. One such detail is that the software behind a chatbot includes a rule to convert a message written in letters of the alphabet into a number, which the software takes as input x, and a rule to convert the output f(x) back into letters. Every computer you use, including the one you carry in your pocket, performs this kind of translation routinely. From an abstract point of view, each input or output is a number. Every program is a function.
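As a sketch of that translation step (our own minimal example, not the encoding any real chatbot uses), letters can go to numbers and back like this:

```python
# Letters become numbers on the way in; numbers become letters on the
# way out. Real chatbots use more elaborate encodings ("tokenizers"),
# but the principle is the same.
message = "my dog"
as_numbers = [ord(c) for c in message]             # [109, 121, 32, 100, 111, 103]
back_to_text = "".join(chr(n) for n in as_numbers)
print(back_to_text)                                # my dog
```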
Placing LaMDA alongside other functions takes the wind out of the sails of those who, dazed and confused by its complexity, reach for the familiar terrain of moral judgment. In high school we learned how to tell whether a function is increasing or decreasing. Should we also have learned to recognize whether a function is sentient? Should our teachers have scolded us for the harm we might inadvertently inflict on an innocent life-form by plotting a sentient function on a graphing calculator?
Instead of going around in circles about how to use the word "sentience" (which nobody seems able to define), we could ask how humans should learn to harness the amazing power of complex functions. For centuries, mathematicians and engineers have codified knowledge into new functions. Along the way, they discovered "pseudo-random" functions: functions that give a perfectly predictable output to anyone who knows the underlying formula, but that simulate the random behavior of a coin toss with incredible accuracy. If a real person flipped a real coin behind one curtain and a computer evaluated a modern pseudo-random function behind another, no one would be able to tell them apart, even after collecting millions, billions, or trillions of reports of heads and tails.
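Here is a deliberately simple pseudo-random function, a linear congruential generator, sketched in Python. It is far weaker than the modern pseudo-random functions described above, but it illustrates the point: the formula is completely explicit, every output is predictable to anyone who knows the formula and the starting "seed," and yet the outputs mimic coin tosses.

```python
def coin_flips(seed, n):
    """Simulate n coin flips with a simple linear congruential generator."""
    state = seed
    flips = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31  # the explicit formula
        # Use a high-order bit; the low bits of this generator are too regular.
        flips.append("heads" if (state >> 30) & 1 else "tails")
    return flips

# Same seed, same formula: exactly the same "random" sequence every time.
print(coin_flips(seed=42, n=10))
```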
During the "training phase," the software behind a chatbot examines huge amounts of text. Imagine that the phrase "my dog likes to" is followed by "play fetch" half the time and "chew furniture" half the time. When the trained chatbot chats with someone, the inputs can signal that it's time to say something about a dog. Software running on a computer can't literally toss a coin to decide between "play fetch" and "chew furniture." Instead, it uses one of the pseudo-random functions.
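A toy sketch of that choice (the phrases and probabilities are just the example above; none of this is how LaMDA is actually implemented):

```python
import random

# Training left two equally likely continuations; a seeded (pseudo-random)
# draw picks one, so the "coin toss" is reproducible, not truly random.
continuations = ["play fetch", "chew furniture"]
weights = [0.5, 0.5]

rng = random.Random(2022)  # fixed seed: predictable to anyone who knows it
print("my dog likes to", rng.choices(continuations, weights=weights)[0])
```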
Traditionally, any mathematician or engineer who discovered a better formula for simulating randomness published it for everyone to critique and copy. As a result, simulated randomness is used in many applications today, including securing internet transactions. Everyone who worked on these practical applications could draw on the best of all known functions. That's how science works. With no prospect of becoming multi-billionaires, people discovered new functions and shared them. Humanity made progress. It was difficult to use secret knowledge to amass enormous social, economic, or political power.
It was a good system. It worked for centuries. But it is under siege. In artificial intelligence, advancement at the frontier of knowledge is now dominated by a few private companies. They have access to enough data and enough computing power to discover and exploit remarkably powerful functions that no outsider can scrutinize.
The output produced by LaMDA is impressive. It shows that AI could open up amazing new ways for people to access all of human knowledge. We expect that Google will find a way to optimize LaMDA to nudge, in opaque ways, the decisions of the billions of people who use it; and that it will collect billions of dollars from companies that want those decisions to benefit them. But that isn't the only way forward. The same tools could be developed by a combination of academics and Wikipedia contributors: people who genuinely want to make all human knowledge available to everyone, and who are not trying to get you to click on a link or nudge you toward a response that isn't the one you wanted.
Don't be fooled by functions, no matter how complex they may be. The engineers at Google are not modern Dr. Frankensteins. They don't bring wires and transistors to life. They are smart people doing the kind of work that scientists and mathematicians have always done, work that often boils down to finding an explicit formula for a useful new function.
Instead of debating the sentience of chatbots, let's address the long-term implications of a shift away from the scientific system that produces knowledge for the benefit of all, toward a system dominated by tech giants in which secret knowledge confers power and profit on a few.
Op-eds like this one are written by writers outside of the newsrooms of Barron's and MarketWatch. They reflect the perspective and opinions of the authors. Send commentary proposals and other feedback to [email protected].