AI to make scam emails look genuine, UK cybersecurity agency warns

Artificial intelligence will make it harder to tell whether emails are genuine or sent by scammers and malicious actors – including messages asking computer users to reset their passwords – the UK’s cybersecurity agency has warned, Report informs, referring to The Guardian.

The National Cyber Security Centre (NCSC) said people would struggle to identify phishing messages – where users are tricked into handing over passwords or personal details – due to the sophistication of AI tools.

Generative AI, the term for technology that can produce convincing text, voice and images from simple hand-typed prompts, has become widely available to the public through chatbots such as ChatGPT and free-to-use versions known as open source models.
 
The NCSC, part of the GCHQ spy agency, said in its latest assessment of AI’s impact on the cyber threats facing the UK that AI would “almost certainly” increase the volume of cyber-attacks and heighten their impact over the next two years.

It said generative AI and large language models – the technology that underpins chatbots – will complicate efforts to identify different types of attack, such as spoof messages and social engineering, the term for manipulating people into handing over confidential material.

“To 2025, generative AI and large language models will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts,” the assessment said.

Ransomware attacks, which have hit institutions such as the British Library and Royal Mail over the past year, were also expected to increase, the NCSC said.

It warned that the sophistication of AI “lowers the barrier” for amateur cybercriminals and hackers to access systems and gather information on targets, enabling them to paralyse a victim’s computer systems, extract sensitive data and demand a cryptocurrency ransom.

The NCSC said generative AI tools already helped make approaches to potential victims more convincing by creating fake “lure documents” that did not contain the translation, spelling or grammatical errors that tended to give away phishing attacks – their contents having been crafted or corrected by chatbots.
However, it said generative AI – which emerged as a competent coding tool – would not enhance the effectiveness of ransomware code but would help sift through and identify targets.
