It wasn’t your typical panel discussion.

On a day when other Washington think tanks were convening more conventional conversations about environmental regulations or public opinion research, the Information Technology and Innovation Foundation put this provocative topic to a group of experts on Tuesday morning: “Are Super Intelligent Computers Really A Threat to Humanity?”

It’s a subject that science fiction writers and futurists have pondered for years, and Hollywood seems increasingly worried about it, as evidenced by a slew of recent techno-skeptical releases such as Avengers: Age of Ultron, Ex Machina, Chappie and, most recently, Terminator Genisys. These films imagine worst-case scenarios involving artificial intelligence, scenarios that often include deadly robot uprisings. Yet most of Tuesday’s panelists voiced optimism about a future in which humans share the world with super-smart machines.

Robert Atkinson, the Information Technology and Innovation Foundation’s president, was the panel’s most emphatic optimist, stressing a variety of ways he believes AI will benefit humanity.

“It’s going to alleviate poverty,” he said. “It’s going to cure diseases. It’s going to do all of these great things for us. We should be doubling AI research. We should be tripling AI research. But when we talk about it in these apocalyptic terms, we’re going in the exact opposite direction.”

None of Atkinson’s fellow panelists advocated against AI research or disputed its potential benefits, but several talked about the need to study possible negative outcomes of radical technological developments.

Stuart Russell, an electrical engineering and computer sciences professor at the University of California, Berkeley, argued that robots with AI must be “value-aligned” with humans. He suggested that powerful machines with the ability to think and make decisions for themselves could otherwise be dangerous, especially if they figure out how to avoid being turned off.

“Whether or not AI is going to be a threat to the human race depends on whether we make it a threat to the human race,” he said. “At the moment, there’s not nearly enough work on making sure it isn’t a threat.”

Atkinson said he supports funding for safety measures but argued that widespread talk of doomsday scenarios damages the cause of technological progress. He then admonished the rest of the panel — Nate Soares of the Machine Intelligence Research Institute, Ronald Arkin of Georgia Tech and Manuela Veloso of Carnegie Mellon — for previous statements that, in his view, “suggest to policymakers and suggest to the media that we could all die” at the hands of robots.

Soares, the Machine Intelligence Research Institute’s executive director, countered that “if we did think there was, like, a five percent chance of [AI] wiping out the human race, that would be a huge frickin’ deal.” At the same time, he acknowledged that media coverage of artificial intelligence is often sensational.

“It is really hard for me to appear in a news article without two pictures of the Terminator,” he said.

For his part, Arkin, who serves as Associate Dean for Research and Space Planning at Georgia Tech, argued that these discussions must include voices from outside the fields of science and technology.

“You cannot leave this up to the AI researchers. You cannot leave this up to the roboticists,” he said. “Examining social implications of technology and having doubt about what it is that we are actually creating for the future is not part of our general regime.”
