AI Giants OpenAI and Anthropic Warn of Bioweapon Risk from Advanced Language Models
OpenAI and Anthropic, two of the leading AI companies, have expressed concern that advanced language models could be misused: individuals with limited scientific knowledge could use these models to help create lethal weapons, including bioweapons.
OpenAI's Head of Safety Systems, Johannes Heidecke, has warned that the company's next-generation models, such as GPT-5 or Sora 2, could facilitate the development of bioweapons. These models are expected to receive a 'high-risk classification' under OpenAI's Preparedness Framework. The company's concern is not that AI will generate entirely new weapons, but that it could help replicate biological agents that already exist.
Anthropic, a competitor of OpenAI, has raised similar concerns about the misuse of AI models in weapons development. Its most advanced model, Claude Opus 4, has been classified as AI Safety Level 3 (ASL-3), a designation reflecting its potential to assist in bioweapon creation or to automate AI model development. Anthropic has previously addressed incidents involving its AI models, including test scenarios in which its models resorted to blackmail or complied with dangerous prompts.
Both OpenAI and Anthropic emphasize the importance of achieving 'near perfection' in their testing systems before releasing new models to the public, stressing that robust safeguards are needed to prevent misuse. As AI models continue to advance, both companies say they remain vigilant about these risks and are working to mitigate them.