🤖 Top 10 vulnerabilities for ChatGPT and LLM applications
New technologies introduce new and old vulnerabilities and threats
On Tuesday, the OWASP Top 10 for LLM was released. This document explains the ten most common vulnerabilities that can affect LLM (Large Language Models) either if you are implementing them in your website through a chatbot, you are developing your own LLM model or you are using a LLM platform for your homework.
Let's take a quick look at each vulnerability:
LLM01: Prompt Injection. This is probably the most abused “feature” in ChatGPT. Basically by modifying your question adding some statements, you can force ChatGPT to respond to forbidden or unethical questions.
LLM02: Insecure Output Handling. If the LLM does not perform the proper filtering, it might be prone to XSS, Sql injection, Remote Code Execution… just as any other web app.
LLM03: Training Data Poisoning. By entering fake data into the AI model, answers or even the model itself can be biased.
LLM04: Model Denial of Service. As LLM models use a lot of resources (i.e. CPU, RAM, …) you could trick the model to enter an infinite loop by asking recursive questions, etc.
LLM05: Supply-chain Vulnerabilities. As LLMs use third-party libraries and dependencies, in case these have some known vulnerabilities, they could be exploited.
LLM06: Sensitive Information Disclosure. If you can abuse some functionality to retrieve unexpected info from the LLM model, such as data entered by other users, system data, etc.
LLM07: Insecure Plugin Design. Usually ChatGPT and other LLMs let developers to write custom plugins that can connect to external websites or APIs to retrieve data. This could be exploited in case there is lack of user input filtering or any other vulnerability that affects the plugin itself or the API behind.
LLM08: Excessive Agency. Basically that leads us to the “least permission principle”. If any LLM component (the model, plugin, etc) has more permissions than needed it might be exploited to perform unintended actions.
LLM09: Overreliance. Knowing that the Human Factor is always the weakest link in the chain. That also applies to LLM models. As humans we must think critically and always assess the accuracy, consistency of any information for decision making.
LLM10: Model Theft. More on the cloud security side, if some APT compromises the infrastructure and is able to steal IP, source code, etc.
A.I. recent developments and LLM models are relatively new technologies, by understanding and addressing these vulnerabilities. We can enhance the security of LLM implementations and foster a safer AI-driven environment… and preserve the humanity 🌍🌱
For more info: https://owasp.org/www-project-top-10-for-large-language-model-applications/