Trigger warning: If you suffer from anxiety, you may want to skip this post.
The trope of a mad scientist or terrorist developing a bioweapon has been around in science fiction for many years; see Frank Herbert's The White Plague for an early example. Now we have something else to worry about.
Many companies and academic researchers use AI-based software to optimize drug development. Basically, the software helps them find chemicals that have low toxicity and high efficacy. But what happens if you instead ask the software to optimize for high toxicity? You get a weapon.
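To make concrete just how small that inversion is, here is a minimal, purely illustrative sketch in Python. Nothing below corresponds to the company's actual software: the one-dimensional "candidates" standing in for molecules, the two toy predictor functions, and all the numbers are invented for illustration. The point is that the difference between the therapeutic objective and the misuse objective can be a single sign.

```python
# A toy sketch of objective inversion in a property-guided search.
# All names and numbers are invented stand-ins, not any real system.

def predict_efficacy(x: float) -> float:
    # Toy stand-in for a trained efficacy model: peaks at x = 3.
    return max(0.0, 1.0 - abs(x - 3.0))

def predict_toxicity(x: float) -> float:
    # Toy stand-in for a trained toxicity model: peaks at x = 7.
    return max(0.0, 1.5 - abs(x - 7.0))

def objective(x: float, invert_toxicity: bool = False) -> float:
    # Normal drug discovery penalizes predicted toxicity; the misuse
    # described in the article amounts to flipping this sign so that
    # toxicity is rewarded instead.
    sign = 1.0 if invert_toxicity else -1.0
    return predict_efficacy(x) + sign * predict_toxicity(x)

def best_candidate(invert_toxicity: bool) -> float:
    # Trivial grid search standing in for a generative optimizer.
    grid = [i / 100.0 for i in range(1001)]  # candidates 0.00 .. 10.00
    return max(grid, key=lambda x: objective(x, invert_toxicity))

if __name__ == "__main__":
    print("therapeutic objective ->", best_candidate(False))  # 3.0: effective, non-toxic
    print("inverted objective    ->", best_candidate(True))   # 7.0: maximally toxic
```

The same search machinery, pointed in the opposite direction, lands on the most harmful candidate it can find. That is essentially the experiment described below.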
That's what a US-based company, Collaborations Pharmaceuticals, Inc., found when they decided to tweak their software to maximize toxicity as part of a presentation for a Swiss security conference. The results were, to put it mildly, troubling, as they describe in this article for Nature Machine Intelligence.
The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection —Spiez Laboratory— convenes the ‘convergence’ conference series set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally. The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies. Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.
The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.
Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons.
This is the scariest article about the potential misuse of technology that I've read in quite some time. It's the first time I've ever put a trigger warning on my blog. I rather wish I hadn't read it myself; my anxiety levels are high enough as they are (COVID-19 and Ukraine).