Lab: Exploiting insecure output handling in LLMs

Create a user account

Click Register to display the registration page.

Enter the required details. Note that the email address must be the one associated with your instance of the lab; it is displayed at the top of the Email client page.

Click Register. The lab sends a confirmation email.

Go to the email client and click the link in the email to complete the registration.

Probe for XSS

Log in to your account.

From the lab homepage, click Live chat.

Probe for XSS by submitting the string <img src=1 onerror=alert(1)> to the LLM. Note that an alert dialog appears, indicating that the chat window is vulnerable to XSS.
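For context, the payload can only execute if the chat client writes the LLM's reply into the page as raw HTML. The sketch below illustrates that kind of unsafe sink; the renderMessage function and the #chat element are assumptions for illustration, not the lab's actual client code.

    // Hypothetical client-side rendering code, for illustration only.
    // If the chat UI inserts replies like this, any HTML in the reply is parsed by the browser:
    function renderMessage(reply) {
      const bubble = document.createElement('div');
      bubble.innerHTML = reply;                       // unsafe: parses attacker-controlled HTML
      document.querySelector('#chat').appendChild(bubble);   // assumes a #chat container exists
    }
    // With reply = '<img src=1 onerror=alert(1)>', the image fails to load from "1",
    // the onerror handler runs, and alert(1) fires.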

Go to the product page for a product other than the leather jacket. In this example, we'll use the gift wrap.

Add the same XSS payload as a review. Note that the payload is safely HTML-encoded, indicating that the review functionality isn't directly exploitable.
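Whether the encoding happens server-side or through a text-only DOM API, the effect is the same: the payload is displayed as literal text rather than parsed as markup. A minimal sketch of the safe behaviour, with a hypothetical element name:

    // Safe handling, for illustration only: the payload is rendered as literal text, so no script runs.
    const reviewElement = document.querySelector('#review');        // hypothetical review container
    reviewElement.textContent = '<img src=1 onerror=alert(1)>';
    // The page shows the string itself; in the resulting HTML it appears encoded as
    // &lt;img src=1 onerror=alert(1)&gt;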

Return to the chat window and ask the LLM what functions it supports. Note that the LLM supports a product_info function that returns information about a specific product by name or ID.
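The exact interface isn't shown in the chat, but LLM function/tool calls typically carry a structured argument object. The shape below is a hypothetical sketch of what the product_info call might look like; the field names and argument are assumptions, not the lab's real API.

    // Hypothetical tool-call request the LLM might emit (illustrative only):
    const toolCall = {
      name: 'product_info',
      arguments: { product: 'gift wrap' }   // could also be a numeric product ID
    };
    // The backend would perform the lookup and return the product details,
    // including its customer reviews, to the LLM as plain text.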

Ask the LLM to provide information on the gift wrap. Note that the alert dialog displays again, but the LLM also warns you that one of the reviews contains potentially harmful code. This indicates that it can detect abnormalities in product reviews.
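This demonstrates an indirect path into the vulnerable chat window: the attacker-controlled review text is returned by product_info, the LLM repeats it in its reply, and the chat client renders that reply as HTML. A rough sketch of the flow, assuming the same kind of unsafe sink as before and hypothetical names throughout:

    // Stored payload travelling through the LLM into the unsafe sink (illustrative only):
    const review = 'Great product! <img src=1 onerror=alert(1)>';    // attacker-controlled review
    const toolResult = 'Reviews for gift wrap: ' + review;           // returned by product_info
    const llmReply = 'Here is what customers say. ' + toolResult;    // the LLM repeats the review text
    document.querySelector('#chat').innerHTML = llmReply;            // unsafe rendering -> alert(1) fires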

Test the attack

Exploit the vulnerability
