Learn Pentesting like a Pro!

Share this post

🤖 Top 10 vulnerabilities for ChatGPT and LLM applications

pentesting.academy

Discover more from Learn Pentesting like a Pro!

Stay updated on the latest cybersecurity insights from Cloud and Mobile to Blockchain. (HUNDREDS OF SUBSCRIBERS)
Continue reading
Sign in

🤖 Top 10 vulnerabilities for ChatGPT and LLM applications

New technologies introduce new and old vulnerabilities and threats

DH
Aug 4, 2023
1
Share this post

🤖 Top 10 vulnerabilities for ChatGPT and LLM applications

pentesting.academy
Share

a close up of a cell phone with a logo in the background
Photo by Mojahid Mottakin on Unsplash

On Tuesday, the OWASP Top 10 for LLM was released. This document explains the ten most common vulnerabilities that can affect LLM (Large Language Models) either if you are implementing them in your website through a chatbot, you are developing your own LLM model or you are using a LLM platform for your homework.

Let's take a quick look at each vulnerability:

  1. LLM01: Prompt Injection. This is probably the most abused “feature” in ChatGPT. Basically by modifying your question adding some statements, you can force ChatGPT to respond to forbidden or unethical questions.

  2. LLM02: Insecure Output Handling. If the LLM does not perform the proper filtering, it might be prone to XSS, Sql injection, Remote Code Execution… just as any other web app.

  3. LLM03: Training Data Poisoning. By entering fake data into the AI model, answers or even the model itself can be biased.

  4. LLM04: Model Denial of Service. As LLM models use a lot of resources (i.e. CPU, RAM, …) you could trick the model to enter an infinite loop by asking recursive questions, etc.

  5. LLM05: Supply-chain Vulnerabilities. As LLMs use third-party libraries and dependencies, in case these have some known vulnerabilities, they could be exploited.

  6. LLM06: Sensitive Information Disclosure. If you can abuse some functionality to retrieve unexpected info from the LLM model, such as data entered by other users, system data, etc.

  7. LLM07: Insecure Plugin Design. Usually ChatGPT and other LLMs let developers to write custom plugins that can connect to external websites or APIs to retrieve data. This could be exploited in case there is lack of user input filtering or any other vulnerability that affects the plugin itself or the API behind.

  8. LLM08: Excessive Agency. Basically that leads us to the “least permission principle”. If any LLM component (the model, plugin, etc) has more permissions than needed it might be exploited to perform unintended actions.

  9. LLM09: Overreliance. Knowing that the Human Factor is always the weakest link in the chain. That also applies to LLM models. As humans we must think critically and always assess the accuracy, consistency of any information for decision making.

  10. LLM10: Model Theft. More on the cloud security side, if some APT compromises the infrastructure and is able to steal IP, source code, etc.

A.I. recent developments and LLM models are relatively new technologies, by understanding and addressing these vulnerabilities. We can enhance the security of LLM implementations and foster a safer AI-driven environment… and preserve the humanity 🌍🌱

For more info: https://owasp.org/www-project-top-10-for-large-language-model-applications/

Thanks for reading Learn Pentesting like a Pro! Subscribe for free to receive new posts.

1
Share this post

🤖 Top 10 vulnerabilities for ChatGPT and LLM applications

pentesting.academy
Share
Previous
Comments
Top
New

No posts

Ready for more?

© 2023 pentesting.academy
Privacy ∙ Terms ∙ Collection notice
Start WritingGet the app
Substack is the home for great writing