The conscience of AI

Portrait of Quintin Pope.

Ph.D. student Quintin Pope considers technology’s potential to alter social dynamics

Photo by Owen Roth

Quintin Pope doesn’t spend a lot of time worrying about artificial intelligence hastening the demise of civilization. In fact, he estimates the probability of “pure AI doom” to be less than 1%.

Real-world experience just hasn’t borne out any worst-case scenarios, he says.

“A lot of the arguments people made about AI, prior to there being any AI worthy of the name to argue about, turned out to be totally wrong,” Pope said.

A doctoral student in computer science at Oregon State University, Pope has become a prominent voice in public discourse about the future of AI. He appears on podcasts, contributes to online forums, and tweets the occasional hot take on emerging issues within the field.

“I think it’s morally important to present better-informed perspectives on AI development in the general conversations we’re having,” Pope said. “There is a level of concern about the sorts of missteps we could take if our understanding is poorly calibrated. So, I and some others who also do not think we are doomed from AI are trying to put together resources presenting this perspective and advance it in the public conversation.”

In September 2023, Pope was awarded a $50,000 first prize in Open Philanthropy’s AI Worldviews essay contest. Pope’s essay takes aim at the premises underlying one particularly persistent AI threat model, the “sharp left turn.”

Under that scenario, AI’s development makes a sudden and drastic shift, diverging from human values while growing more powerful and generalizing its capabilities across new domains. The sharp left turn raises the prospect that — like HAL 9000, the renegade computer in Stanley Kubrick’s “2001: A Space Odyssey”— the machines might someday escape our control and turn against us.

The rapid (on a geologic scale) evolution of humans from naked apes to a self-aware, spacefaring species is often invoked, by way of analogy, as a real-world example of the sharp left turn. Pope’s essay, titled “Evolution provides no evidence for the sharp left turn,” explicitly rejects this analogy.

“Human evolution is not an allegory or a warning,” he writes. “It was a series of events that happened for specific, mechanistic reasons. If those mechanistic reasons do not extend to AI research, then we ought not (mis)apply the lessons from evolution to our predictions for AI.”

Initially dreaming of becoming an astrophysicist, Pope pursued an undergraduate degree in physics and applied math at the University of Rochester. By the time he graduated in 2019, his ambitions had changed, and his attention had shifted to research in artificial intelligence.

“Around that time, it became clear that artificial intelligence was going to become a substantial force in the world relatively quickly,” Pope said. “I was concerned that the default path we were going down would end up quite poorly for us.”

What particularly interested Pope was the branch of AI study known as alignment, which focuses on challenges in getting AI systems to understand human values and to act in accordance with them. Drawn to Oregon State for its robust program of AI research and its record of consistent funding, he enrolled in the graduate program in fall 2019.

He began working with Xiaoli Fern, associate professor of computer science, to develop methods for better understanding and troubleshooting text classification by a natural language processing model. As part of that work, they examined how minimal changes to the text of a movie review could result in the model’s flipping its classification of the review from positive to negative.

That research resulted in the publication of a co-authored paper, accepted by the 2021 Conference on Empirical Methods in Natural Language Processing. The experience also served to diminish Pope’s apprehensions about AI.

That’s not to say he has no concerns at all. Over-centralization of AI resources is one area where Pope says public concern ought to be greater than it is.

Pope’s greatest trepidations revolve around how AI could affect social dynamics, pointing out that “AI is ultimately a lot easier to control than people are.” He raises the possibility that AI could be used to impose more effective censorship, for example, and he warns of its potential to accelerate a “values lock-in,” in which social progress comes to a halt.

“I do think there are reasons to be concerned about the implications of AI,” Pope said. “But those are less in terms of a malicious ‘ghost in the machine’ emerging spontaneously and more in terms of some very ordinary questions: What will people do with this incredibly powerful technology? And how does this change social dynamics? Who has access, and who will benefit?”

Read Quintin Pope’s essay

Feb. 14, 2024