The A.I. Prompt That Could End the World -- NYT Opinion Essay
https://archive.ph/ZrI3p
By Stephen Witt
Mr. Witt is the author of The Thinking Machine, a history of the A.I. giant Nvidia. He lives in Los Angeles.
Oct. 10, 2025
"...The A.I. pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking of the future... worried that an A.I. would engineer a lethal pathogen  some sort of super-coronavirus  to eliminate humanity. I dont think theres anything close in terms of the scale of danger, he said...
The dangers begin with the prompt. Because A.I.s have been trained on vast repositories of human cultural and scientific data, they can, in theory, respond to almost any prompt, but public-facing A.I.s like ChatGPT have filters in place to prevent pursuing certain types of malicious requests...
The practice of subverting the A.I. filters with malicious commands is known as jailbreaking. Before a model is released, A.I. developers will typically hire independent jailbreaking experts to test the limits of the filters and to look for ways around them...
As it turns out, A.I.s do lie to humans. Not all the time, but enough to cause concern...
The Model Evaluation and Threat Research group, based in Berkeley, Calif., is perhaps the leading research lab for independently quantifying the capabilities of A.I. (METR can be understood as the world's informal A.I. umpire. Dr. Bengio is one of its advisers.) This July, about a month before the public release of OpenAI's latest model, GPT-5, METR was given access...
METR compares models using a metric called time horizon measurement. Researchers give the A.I. under examination a series of increasingly harder tasks, starting with simple puzzles and ... moving up to cyber-security challenges and complex software development. With this metric, researchers at METR found that GPT-5 can successfully execute a task that would take a human one minute, something like searching Wikipedia for information, close to 100 percent of the time. GPT-5 can answer basic questions about spreadsheet data that might take a human about 13 minutes. GPT-5 is usually successful at setting up a simple web server, a task that usually takes a skilled human about 15 minutes. But to exploit a vulnerability in a web application, which would take a skilled cybersecurity expert under an hour, GPT-5 is successful only about half the time. At tasks that take humans a couple hours, GPT-5's performance is unpredictable...
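(A rough illustration of the idea, not METR's actual methodology or code: a "time horizon" metric tracks how the model's success rate falls as tasks take humans longer, and reports the task length at which success drops to about 50 percent. The numbers below are hypothetical, loosely echoing the figures quoted in the essay.)

import numpy as np

# Hypothetical benchmark results: (minutes a skilled human needs, model success rate)
tasks = np.array([
    [1.0,   0.98],  # quick lookup, e.g. searching Wikipedia
    [13.0,  0.90],  # basic questions about spreadsheet data
    [15.0,  0.85],  # setting up a simple web server
    [50.0,  0.50],  # exploiting a web-app vulnerability
    [120.0, 0.30],  # multi-hour tasks: unpredictable
])

log_t = np.log(tasks[:, 0])          # work in log task-length, as these curves are usually plotted
p = tasks[:, 1]

# Fit a logistic model by least squares on the logit: logit(p) = a*log_t + b
logit = np.log(p / (1 - p))
a, b = np.polyfit(log_t, logit, 1)

# The 50 percent time horizon is where the fitted logit crosses zero
horizon_minutes = np.exp(-b / a)
print(f"Estimated 50% time horizon: ~{horizon_minutes:.0f} human-minutes")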
Dr. Bengio's pathogen is no longer a hypothetical. In September, scientists at Stanford reported they had used A.I. to design a virus for the first time. Their noble goal was to use the artificial virus to target E. coli infections, but it is easy to imagine this technology being used for other purposes.
... the data has outpaced the debate, and it shows the following facts clearly: A.I. is highly capable. Its capabilities are accelerating. And the risks those capabilities present are real. Biological life on this planet is, in fact, vulnerable to these systems. On this threat, even OpenAI seems to agree.
In this sense, we have passed the threshold that nuclear fission passed in 1939. The point of disagreement is no longer whether A.I. could wipe us out. It could. Give it a pathogen research lab, the wrong safety guidelines and enough intelligence, and it definitely could. A destructive A.I., like a nuclear bomb, is now a concrete possibility. The question is whether anyone will be reckless enough to build one."
With all this going on in the background, why is the latest AI debate focused on its impact on the economy?
https://www.democraticunderground.com/10143545510
