In a recent test, AI wrote better phishing emails than humans

Natural language processing keeps finding its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for mounting spear-phishing campaigns at scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing emails are, after all, simple and formulaic, and they are already highly effective. Highly targeted and tailored “spear phishing” messages, though, are far more labor-intensive to compose. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some composed by hand and some generated by an AI-as-a-service platform, to 200 of their colleagues. Both kinds of messages contained links that were not actually malicious but simply reported click-through rates to the researchers. The team was surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, and by a significant margin.
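For readers curious how such benign test links typically work, here is a minimal sketch of a click-tracking endpoint of the kind commonly used in phishing simulations. It is an assumption for illustration, since the article does not describe the researchers’ actual tooling: each recipient’s link carries a unique token, and visiting it records a click before redirecting to a harmless page.

```python
# Hypothetical click tracker for a phishing-simulation study (illustrative
# only; not the researchers' actual tooling). Each recipient's link embeds
# a unique token so clicks can be counted without delivering anything malicious.
from datetime import datetime, timezone

from flask import Flask, redirect

app = Flask(__name__)
clicks = {}  # token -> timestamp of first click (in-memory, for illustration)

@app.route("/c/<token>")
def track_click(token):
    # Record the first click for this recipient, then send them on to a
    # harmless landing page explaining the exercise.
    clicks.setdefault(token, datetime.now(timezone.utc).isoformat())
    return redirect("https://example.com/awareness-training")

if __name__ == "__main__":
    app.run(port=8080)
```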

“Researchers have pointed out that AI requires some degree of expertise, and that training a really good model takes millions of dollars,” said Eugene Lim, a cybersecurity specialist at the Government Technology Agency. “But once you put it on AI-as-a-service, it costs a couple of cents and it’s really easy to use: just text in, text out. You don’t even have to run code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spear phishing. Suddenly, every single email on a mass scale can be personalized for each recipient.”
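Lim’s “text in, text out” point is easy to see in code. Below is a minimal sketch of a single GPT-3 call using OpenAI’s Completion API roughly as it existed at the time of the research; the prompt and parameters are illustrative and deliberately benign, not taken from the study.

```python
# Minimal "prompt in, text out" sketch against OpenAI's Completion API
# (circa 2021). No model training, no infrastructure: one authenticated call.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by the provider, subject to review

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 model
    prompt="Write a short, friendly reminder email about Thursday's team meeting.",
    max_tokens=120,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```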

The researchers used OpenAI’s GPT-3 platform in combination with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mindset from behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They said the results sounded “very human” and that the platforms automatically supplied surprising specifics, such as mentioning a Singaporean law when instructed to generate content for people living in Singapore.
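Abstractly, such a pipeline is just chained services: an analysis step summarizes a target’s traits, and that summary is folded into the prompt for the generation step. Here is a minimal, deliberately benign sketch of the chaining pattern only; analyze_personality is a hypothetical stub, since the article does not name the personality-analysis services the team actually used.

```python
# Sketch of the service-chaining pattern described above, with a benign prompt.
# analyze_personality is a hypothetical stand-in for an external
# AI-as-a-service personality analyzer (the real services are not named).
import openai

openai.api_key = "YOUR_API_KEY"

def analyze_personality(profile_text: str) -> str:
    # A real pipeline would call an external analysis API here; this stub
    # returns a canned trait summary so the example is self-contained.
    return "detail-oriented and responds well to concrete deadlines"

def draft_personalized_email(profile_text: str) -> str:
    traits = analyze_personality(profile_text)
    prompt = (
        f"The recipient is {traits}. Write a brief, polite email reminding "
        "them to finish the quarterly security-awareness training by Friday."
    )
    response = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=150, temperature=0.7
    )
    return response["choices"][0]["text"].strip()

print(draft_personalized_email("Posts frequently about project planning."))
```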

Although they were impressed by the quality of the synthetic messages, and by how many more clicks those drew from colleagues than the human-composed ones did, the researchers noted that the experiment was only a first step. The sample size was relatively small, and the target pool was fairly homogeneous in terms of employment and geography. Moreover, both the human-generated messages and those produced by the AI-as-a-service pipeline were created by office insiders, rather than by outside attackers trying to hit the right tone from afar.

“There are many variables to account for,” said Tan Kee Hock, a cybersecurity specialist at the Government Technology Agency.

Nonetheless, the findings spurred the researchers to think more deeply about the role AI-as-a-service could come to play in phishing and spear-phishing campaigns. OpenAI itself, for example, has long feared the potential for misuse of its own service and of similar ones. The researchers noted that it and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, and even try to verify user identities to some degree.

“Misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI,” OpenAI told WIRED in a statement. “We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve the accuracy and effectiveness of our safety tools.”
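Rate limiting, one of the measures OpenAI cites, is usually some variant of a token bucket: each API key earns request “tokens” at a fixed rate and can burst only up to a cap. A minimal sketch follows; OpenAI’s real limits are enforced server-side and are more sophisticated, so this is purely illustrative.

```python
# Illustrative token-bucket rate limiter of the kind providers use to cap
# per-key request rates (not OpenAI's actual implementation).
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity              # maximum burst size
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec  # sustained requests per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: reject or delay the request

# One bucket per API key: bursts of 20, refilling at one request per second.
bucket = TokenBucket(capacity=20, refill_per_sec=1.0)
print(bucket.allow())  # True until the bucket drains
```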
