Technology

AI and the Evolution of Data Loss Prevention

AI and the Evolution of Data Loss Prevention

In recent months, AI chatbots have been receiving a great deal of attention in the press, on social media, and by word of mouth. The launch of OpenAI’s ChatGPT, and subsequent updates and expansion of its abilities, has brought new and old questions about machine learning, artificial intelligence, intellectual property, data security, and other considerations to the forefront of public discourse. While there are many potentially beneficial ways to use AI chatbots like ChatGPT, there are also security concerns that come with it, whether you’re making use of it as an individual user or on behalf of an organization. Laypeople and cybersecurity professionals alike should be aware of the risks as well as the perks.

Explaining Generative AI Chatbots

Generative AI chatbots, such as ChatGPT, are trained on large language models (LLMs) using “machine learning algorithms that allow for computers to understand text, speech, and images.”  GPT stands for generative pre-trained transformer; a transformer is the algorithm that takes inputs and transforms them into tokens that the technology can understand and respond to. ChatGPT is only one generative AI chatbot out of many that operate on the same basic premises and models. The specific disparities in knowledge or performance between different GPT chatbots comes down to training and the amount and the type of data that goes into it.

The most popular and prolific generative AI chatbots are developed and owned by industry leaders and built on models that are larger than others. They are trained on massive amounts of data from a wide range of sources, and they require huge clusters of hardware to run so they can process all of the information. Using the data that it is trained on in tandem with inputs provided by users, these bots can hold shockingly normal conversations, compose emails, essays, and cover letters, and even help to organize your schedule. However, it is important to take their information with a large grain of salt, as well as watch out for security risks.

New and Evolving Security Risks

Many of the security issues that accompany chatbots like ChatGPT can be attributed to the fact that the technology is still relatively new and frequently updated, so bugs can cause problems for users. A ChatGPT bug in March 2023 exposed some users to other users’ chat history rather than their own, and may have also leaked certain payment-related data of ChatGPT-Plus subscribers. While the bug that caused this incident was resolved quickly, potentially sensitive data was still exposed, and similar bugs are liable to arise in the future and cause accidental leaks and data breaches.

In addition to the dangers of accidental breaches, there are also cybercriminal attacks to consider. Incidents so far have mostly been the work of security professionals attempting to expose vulnerabilities that attackers can take advantage of, but they have shown that the GPT model is highly susceptible to indirect prompt injection attacks from any source. Hackers can add invisible text to a webpage to send a prompt that causes a bot, like Bing’s AI chatbot, to ask users for sensitive information that can then be used to compromise their accounts, leak their personal data, or take control of various assets.

Security Recommendations and Best Practices

As with all potentially dangerous interactions, online or otherwise, the bulk of staying safe is to be cautious and aware of the risks. Emails and webpages already often warn users never to share their passwords or other sensitive data, and these same principles apply when using chatbots as well. While users may feel comfortable sharing personal information with something like ChatGPT because it is a technology and not a person, the fact is that any information you put into it is liable to end up in front of another person, whether by accident due to a bug or on purpose if a cybercriminal targets you.

Organizations that oversee people using AI chatbots will have to grapple with the level of control they want to have over those users and their interactions. There are solutions available that can monitor the data that gets sent to domains for chatbot services and block sensitive information from being shared, no matter where the endpoint is. When using plugins, it is important to only run the plugins that are necessary for a given task to avoid inadvertently sharing excessive information. It is also recommended that organizations observe general best practices like using robust passwords and the principle of least privilege.

Conclusion

Generative AI chatbots like ChatGPT are neither inherently beneficial nor inherently harmful; as with all technology, it comes down to how it is used. While you can’t control how cybercriminals use their expertise to manipulate the technology, you can take steps to mitigate the risks. Cybersecurity professionals should stay informed on the latest updates in technology and any new dangers arising over time, and all users should be wary of what information they provide when talking with a chatbot. With the right security posture and awareness, it is possible to minimize, though not entirely eliminate, the dangers associated with AI.

PJ Bradley

PJ Bradley

PJ Bradley is a writer on a wide variety of topics, passionate about learning and helping people above all else. Holding a bachelor’s degree from Oakland University, PJ enjoys using a lifelong desire to understand how things work to write about subjects that inspire interest. Most of PJ’s free time is spent reading and writing. PJ is also a regular writer at Bora

Comments
To Top

Pin It on Pinterest

Share This