Experts are sounding the alarm over the rise of hacking attacks powered by artificial intelligence (AI). According to an Ernst & Young survey, 85% of company representatives expressed concern that AI is helping to plan and execute more sophisticated attacks on the IT infrastructure of companies and organizations.
Perhaps even more than statistics, the seriousness of the problem is demonstrated by specific high-profile hacks and leaks. In August 2022, the crypto exchange Binance was targeted: using generative AI and deepfake technology, attackers created a virtual copy of a public relations specialist, who held Zoom conferences on behalf of the exchange and persuaded crypto wallet holders to transfer money to accounts controlled by the organizers of the attack.
How can you protect your company’s IT infrastructure? What should we prepare for in the future? Umal Nanumura agreed to talk to us about this and more. Umal is a lead developer at Sri Lanka’s FONIX Software Solutions, where he works on a major EdTech platform and is responsible for implementing and integrating cybersecurity tools and measures.
Umal, what can AI scammers do today that they couldn’t before?
First of all, it matters that, with the help of AI, quite serious hacking attacks can now be carried out by more or less random people with little training (we cybersecurity specialists call them unsophisticated actors). This makes the cybersecurity situation less manageable for companies and organizations than it used to be. The emergence of self-learning systems (ML, machine learning) and neural networks of every kind, which need only new input data to adapt to a new task, has automated what hackers do. That automation has been the primary consequence of hackers mastering “artificial intelligence”, not the fact that AI can, given enough time, brute-force the password to any account. Although that is also true.
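To put the brute-force remark in perspective, here is a rough back-of-the-envelope calculation; the guess rate below is an assumed figure for illustration, not a measurement:

```python
# Back-of-the-envelope: worst-case time to exhaust a password keyspace.
# The guess rate is an assumed figure, not a benchmark of any real rig.
GUESSES_PER_SECOND = 1e10  # assumed offline cracking rate

for length in (8, 10, 12):
    keyspace = 62 ** length  # upper- and lowercase letters plus digits
    years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{length} characters: about {years:,.4f} years worst case")
```

Under these assumptions an 8-character password falls within hours, while each added character multiplies the attacker’s bill: “given enough time” is literal, but length pushes that time past practicality.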
The automation of malicious activity gave birth to that very “Bender” from Futurama, a robot corrupted by humans whose anthropomorphic brain was at first neither good nor evil (laughs). The Kali Linux OS (built for penetration testing), HackedGPT, WormGPT: these are the names of IT systems that are used for hacking and malicious activity. It takes only a few simple tweaks to execute an attack that is complex by cybersecurity standards.
Another consequence of AI adoption has been an increase in the effectiveness of traditional attacks. If we rank hacker attacks by type, the most common is certainly phishing: malicious links that, when clicked, run a scenario dangerous to company and organization networks. Phishing links are spread through browsers and by email. AI has made it possible to compose extremely personalized messages automatically. A neural network can analyze large amounts of information about a particular individual and organization and send exactly the kind of message that has the maximum chance that a particular employee will take it at face value and accidentally run a malicious script. There is evidence that the emergence of large language models (LLMs), neural networks that generate and understand natural language, has cut the cost of phishing for hackers by 95% at the same effectiveness.
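On the defensive side, even crude heuristics catch some of this. The sketch below is a minimal, hypothetical illustration of a lookalike-domain check; the trusted-domain list and the distance threshold are assumptions, and real anti-phishing filters rely on far richer signals:

```python
# Naive lookalike-domain check: flag domains within a small edit distance
# of a trusted domain. Illustration only; not a real anti-phishing filter.
from urllib.parse import urlparse

TRUSTED = {"binance.com", "google.com", "microsoft.com"}  # assumed allowlist

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_like_phishing(url: str) -> bool:
    domain = urlparse(url).netloc.lower().split(":")[0]
    if domain in TRUSTED:
        return False
    # Close to a trusted name but not equal, e.g. "binannce.com": suspicious.
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(looks_like_phishing("https://binannce.com/login"))  # True
print(looks_like_phishing("https://binance.com/login"))   # False
```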
SQL injection; infiltration of local corporate networks and cloud storage through vulnerabilities in old plugins and packages; DoS and DDoS, where you overload and “take down” a competitor’s server; keystroke listening, where AI “listens” to what you type on your keyboard and guesses passwords. All of this has been simplified. AI-equipped scammers can automatically probe a company’s server or cloud security system and learn how to break into that particular network. Needless to say, they can process and take into account more information than any human hacker could ever dream of.
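Of the attacks listed above, SQL injection is the easiest to illustrate in a few lines. The sketch below, with hypothetical table and column names, contrasts a string-built query with the standard parameterized defense:

```python
# SQL injection in one picture: a string-built query vs. a parameterized one.
# Table and column names are hypothetical; sqlite3 keeps it self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload rewrites the query and dumps every row.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-built query leaked:", rows)

# SAFE: the driver treats the payload as an inert string value.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # [] - nothing leaks
```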
There is a lot of talk now about deepfake attacks. How dangerous is generative artificial intelligence?
First we need to define the concepts. What is generative AI? These are neural networks capable of creating and understanding content: images, text, audio recordings, and so on. This type of AI is now actively used to defeat biometric authorization systems and to mislead individuals. Systems that imitate human behavior on the Internet, able to click, surf, and open pages “like a human”, help with this. Such systems have quite peaceful uses, such as automated testing of user interfaces and web analytics, but everything invented for good will be tried by attackers for their own purposes. Have you ever had to pass a captcha that uses pictures, riddles, or some other trick to make sure you’re a human and not a bot? There is already software that automates cracking such captchas. And it’s based on AI!
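The “peaceful” use Umal mentions is easy to picture. Here is a minimal sketch of automated UI testing with Selenium; the URL and element IDs are hypothetical, and a locally installed Chrome driver is assumed:

```python
# Legitimate browser automation: a smoke test that clicks "like a human".
# URL and element IDs are hypothetical; assumes Chrome and its driver exist.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page
    driver.find_element(By.ID, "username").send_keys("qa-bot")
    driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
    driver.find_element(By.ID, "submit").click()
    # A real test framework would assert on the result for its report.
    assert "Dashboard" in driver.title, "login flow failed"
finally:
    driver.quit()
```

Exactly this kind of tooling, pointed at someone else’s site and paired with captcha-solving models, becomes the “human-like bot” he warns about.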
What are the consequences of having a vulnerability in corporate networks or corporate information systems?
The consequences can be boundless! In common legal practice worldwide, liability for the loss of personal data of users, customers, and partners falls on the company that lost the data, even when the company is itself the victim of the hack. In 2017, hackers gained access to the databases of Equifax, one of the largest credit bureaus in the United States. As a result, the personal data of 143 million people leaked online. Everything was there: social security numbers, driver’s license numbers, dates of birth, addresses. Equifax ended up paying a $700 million fine. How many medium and small firms could pay even $16 million? And what if, as is common in the U.S., annual profits are 3-4 million? Then the company faces not only bankruptcy but also personal debts for its managers and owners. The story of Youbit, a small South Korean crypto exchange that was hacked twice in 2017, with 17% of the bitcoins in circulation on the exchange stolen, is very telling: after that, Youbit simply shut down. The technical details are unknown; the South Korean government blamed North Korean intelligence services for the incident.
Can we predict how cyberattacks will change due to AI in the future? What should the corporate world prepare for?
Yes, there are such studies. For example, the UK’s National Cyber Security Centre (NCSC) recently published such a forecast. In this regard I would emphasize the ability of generative AI to write code. If you describe a program in enough detail to the same ChatGPT today, it will write it for you in any language. Hacker software is not yet able to harness code generation after machine learning to make AI even more flexible and unconventional during a hack, but that work is underway, and if it succeeds, cybersecurity is in for a rough ride. Imagine a tool that analyzes what work needs to be done and, if the job calls for a hammer, “reprograms itself” into a hammer.
I also see the danger of hackers using AI in social engineering. In e-commerce, neural networks have already been trained quite well to monitor users’ actions and find an individual approach to each of them. Attackers are now doing the same thing, feeding data from social networks and personal accounts into language models. By processing this personal information, they may organize unprecedentedly sophisticated attacks in the future. Remember the Binance story? The crypto market presupposes a powerful cybersecurity department, and yet the attackers were not afraid to choose that particular target.
Readers of our interview will, of course, want to know: how can companies effectively defend against the new threats that AI brings with it?
First, there are cybersecurity experts, and I count myself among them. The systems of policies and tools I have developed for FONIX Software Solutions and the ExamHUB training platform have never been overcome by attackers. I’m willing to collaborate with other companies as well; I like the idea that I’m defending order against anarchy.

Second, strangely enough, AI itself can help against AI. In parallel with AI-based fraudulent software, “artificial intelligence” software that protects corporate information systems is also developing. We have already talked about how devastating the consequences of hacker attacks can be. The good thing about neural networks in cybersecurity is that they are quite good at predicting threats and can take protective measures on their own: data from actual use in the field suggests they can mitigate up to 80% of threats. Most of these defenses monitor abnormalities in real time, in the broadest sense of the word, and immediately close off access to the affected data. Suppose some employee in your work environment accidentally created a file with a permission that allows the entire Internet to access it. The security neural network would react immediately.

What other recommendations can I give besides implementing such systems? Automate protection as much as possible. I realize that some technical managers will feel a loss of control and discomfort, but there is no other way. First, it makes defense cheaper. Second, what matters is not just reacting to vulnerabilities but reacting instantly: human cybersecurity experts will not have time to take action once they detect a hack, and the data will already have leaked to the attackers. Another tip: adopt internationally tested information security “best practices”, embodied in international standards. They are updated every year and kept current as new types of threats emerge. You need to be a hard target so hackers won’t pick you, because they also calculate the economics of an attack and don’t want an expensive operation with an unclear outcome.
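Umal’s world-readable-file example translates almost directly into code. A minimal sketch for a POSIX file system follows; the scanned directory is hypothetical, and a real security product would alert and log rather than silently fix:

```python
# Minimal "abnormality" check in the spirit of the example above:
# find files readable by everyone and revoke that access. POSIX-only sketch;
# real tooling would also raise alerts and check cloud ACLs, not just chmod.
import os
import stat

SCAN_ROOT = "/srv/shared"  # hypothetical directory to audit

for dirpath, _dirnames, filenames in os.walk(SCAN_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        mode = os.stat(path).st_mode
        if mode & stat.S_IROTH:  # world-readable bit is set
            os.chmod(path, mode & ~(stat.S_IROTH | stat.S_IWOTH))
            print(f"revoked world access: {path}")
```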
By “best practices”, do you mean the correct actions of cybersecurity professionals?
Not only, and not so much. Hacks are often caused by the lack of correct procedures for working with information and the network. The correct approaches, for example to the hierarchy of access rights, may be fixed in the enterprise documentation and yet not be observed in practice. Here’s a simple example: cybersecurity experts can do a good job of protecting against the hacking of neural networks and AI that were centrally deployed and trained on behalf of management, and attackers will instead find the key to the system in some AI-based open-source program installed by a non-IT employee, say, for image processing. Yes, we should talk not only about the neural networks used by fraudsters but also about the vulnerability of the AI that companies themselves use. You see, serious neural networks involve machine learning: if they are trained on incorrect or one-sided data, attackers can take control of them and force them to give away corporate information.
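One lightweight control against the unapproved software Umal describes is a regular software-inventory check. The sketch below compares installed Python packages against an approved list; the allowlist is hypothetical, and real asset-management tools cover far more than Python packages:

```python
# Software-inventory check: report installed Python packages that are not
# on an approved list. The allowlist is hypothetical; real asset management
# also covers OS packages, browser extensions, and desktop applications.
from importlib.metadata import distributions

APPROVED = {"pip", "setuptools", "requests"}  # hypothetical allowlist

installed = {dist.metadata["Name"].lower() for dist in distributions()}
for name in sorted(installed - APPROVED):
    print(f"not on the approved list: {name}")
```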
What do you think about the SaaS model of enterprise systems, where companies keep all their information in the cloud?
I would say that, from a security perspective, it’s not a bad strategy. If you put your resources in the cloud with big vendors like Amazon, your data is more secure than in your own data center. Security is their specialty, so they have the latest IT technology working on it. On top of that, they share responsibility for keeping your data safe, so if you do get hacked, your risks are somewhat reduced.
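Big cloud vendors also expose APIs for verifying such settings yourself. Here is a minimal sketch that checks whether an S3 bucket blocks public access, using boto3; the bucket name is hypothetical, and configured AWS credentials are assumed:

```python
# Check that an S3 bucket has "block public access" fully enabled.
# Bucket name is hypothetical; assumes AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    config = s3.get_public_access_block(Bucket="example-corp-data")
    settings = config["PublicAccessBlockConfiguration"]
    if all(settings.values()):
        print("public access fully blocked")
    else:
        print("warning, weak settings:", settings)
except ClientError:
    # No public-access-block configuration at all is itself a red flag.
    print("warning: no public-access block configured")
```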
Recent surveys of cybersecurity decision makers show that only 52% are confident that, with the security systems they have in place, they would be able to recognize a deepfake copy of their CEO, and 48% said they are not sure they have all the systems needed to repel AI-based hacking attacks. How many of those 52% of managers who are willing to talk to an AI in the guise of their boss are actually prepared for it? From a risk management perspective, a good risk is one that is never realized.