Research into AI is experiencing a boom, so we have rounded up the best of the news from the past month to help you keep up to date
By Matthew Sparkes
29 June 2023
A US Air Force Reaper drone
APFootage/Alamy
Reports of AI drone “killing” its operator amounted to nothing
This month we heard about a fascinating AI experiment from a US Air Force colonel. An AI-controlled drone trained to autonomously carry out bombing missions had turned on its human operator when told not to attack targets; its programming prioritised mission success, so it treated human intervention as an obstacle in its way and decided to take its operator out by force.
The only problem with the story was that it was nonsense. Firstly, as the colonel told it, the test was a simulation. Secondly, a US Air Force statement was hastily issued to clarify that the colonel, speaking at a UK conference, had “mis-spoke” and that no such tests had been carried out.
New Scientist asked why people are so quick to believe AI horror stories, with one expert saying it was partly down to our innate attraction to “horror stories that we like to whisper around the campfire”.
The problem with this kind of misconstrued story is that it is so compelling. The “news” was published around the world before any facts could be checked, and few of those publications had any interest in later setting the record straight. AI presents genuine dangers to society in many ways, and we need informed debate to explore and prevent them, not sensationalism.
AI can optimise computer code
DeepMind
DeepMind AI speeds up algorithm that could have global impact on computer power
AI has brought surprise after surprise in recent years, showing itself capable of spitting out an essay on any given topic, creating photorealistic images from scratch and even writing functional source code. So you would be forgiven for not getting too excited about news of a DeepMind AI slightly improving a sorting algorithm.
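To give a sense of scale, the routines in question are tiny: fixed-length sorts of just a handful of items, of the kind that run billions of times a day inside larger programs. The sketch below is purely illustrative (it is not DeepMind's code, which was optimised at the level of individual assembly instructions); it shows a three-element sorting network, the sort of small, fixed sequence of compare-and-swap steps the AI worked on.

```python
def sort3(a, b, c):
    """Sort three values with a fixed compare-and-swap network.

    Illustrative only: DeepMind's improvements were found at the
    assembly-instruction level, not in Python.
    """
    # Three compare-and-swap steps suffice to sort any three inputs,
    # whatever order they arrive in.
    if a > b:
        a, b = b, a
    if b > c:
        b, c = c, b
    if a > b:
        a, b = b, a
    return a, b, c
```

Because the comparison sequence is fixed rather than data-dependent, even saving a single instruction in such a routine compounds across the enormous number of times it is executed.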