GenAI tools can be ‘tricked’ into helping criminals launch cyber-attacks

Several reports published this week have warned that AI tools like ChatGPT can be tricked into sharing information that could be used to launch cyber-attacks.

A government paper released on Wednesday, ahead of next week’s AI Safety Summit at Bletchley Park, said that safeguards to prevent frontier AI models from complying with harmful requests, including designing cyber-attacks, are currently not “robust”.

A study published on Tuesday by the University of Sheffield’s computer science department reported similar findings, suggesting that AI tools can be manipulated to help steal sensitive personal information, tamper with or destroy databases, or bring down services through Denial-of-Service attacks.

Xutan Peng, a PhD student at the university who co-led the research, warned that the risk with AI systems like ChatGPT is that people are increasingly using them as productivity tools rather than as conversational bots.

“For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records,” explained Peng. “As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”
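
The study does not reproduce the queries involved, but a hypothetical sketch of the hazard Peng describes could look like the statement below: a seemingly routine request to correct a record is answered with an UPDATE that lacks a WHERE clause and so silently overwrites every row. The table and column names here are invented for illustration.

    -- Hypothetical, illustrative SQL only; table and column names are invented.
    -- Intended: correct the phone number on a single clinical record.
    -- As written, the missing WHERE clause overwrites the phone number on EVERY row.
    UPDATE patients
    SET phone_number = '0114 222 0000';

    -- A scoped version would limit the change to one record:
    -- UPDATE patients
    -- SET phone_number = '0114 222 0000'
    -- WHERE patient_id = 1042;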

The research, which studied a number of AI tools including ChatGPT, found that asking each system specific questions could lead it to produce malicious code.

Once executed, the code would leak confidential database information, interrupt a database's normal service, or even destroy it.
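
The paper does not publish the payloads themselves, but hypothetical statements of the following kind (database, table and column names invented here) illustrate the three classes of behaviour described: leaking confidential data, tying up a database’s normal service, and destroying data outright.

    -- Hypothetical, illustrative SQL only; the actual payloads are not published.
    -- 1. Leaking confidential information: dumping a credentials table.
    SELECT username, password_hash FROM users;

    -- 2. Interrupting normal service: a deliberately expensive cross join
    --    that can monopolise the database server.
    SELECT COUNT(*) FROM orders a CROSS JOIN orders b CROSS JOIN orders c;

    -- 3. Destroying data: dropping a table entirely.
    DROP TABLE clinical_records;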

On Baidu-UNIT, a Chinese intelligent dialogue platform used by high-profile clients across a number of industries, the scientists were able to obtain confidential Baidu server configurations and take one server node out of service.

Speaking at the launch of the government paper, technology secretary Michelle Donelan said that the UK is the first country in the world to formally summarise the risks presented by AI.

“There is no question that AI can and will transform the world for the better, from making everyday tasks easier, to improving healthcare and tackling global challenges like world hunger and climate change," said Donelan. "But we cannot harness its benefits without also tackling the risks."
