This years "Advent of Cyber", hosted by TryHackMe, have started. As part of some cosy fun at work, I'm participating and solving all the fun little challenges this year in combination with some perspective into how the everyday life in cyber security.
As always there is a well-scripted story behind the challenges. This year the Best Festival Company is merging with AntarctiCrafts from the South Pole. McSkidy and her team travels to the South Pole to investigate and analyze AntarctiCrafts and their CEO Tracy McGreedy to ensure no insider threats emerges with the merger. After all, AntarctiCrafts was founded a few years ago by Tracy McGreedy that have left the Best Festival Company due to some controversies, and McSkidy does not want anything to sabotage the upcoming holidays for the Best Festival Company..
McSkidy is Santa's CISO (Chef Information Security Officer) and head of the security team. At the main factory of AntarctiCrafts we'll dive deep into their systems and scrutinize the infrastructure. Tracy McGreedy had to sell AntarctiCrafts due to bad decisions leading to financial trouble, and McSkidy have heard from Detective Frost'eau, from the Cyber Police, that Tracy McGreedy might be planning on sabotaging the merger due to the demotion from CEO to Regional Manager.
WARNING: Spoilers and challenge-answers are provided in the following writeup.
Official walk-through video is as well available at TryHackMe! Advent of Cyber 2023 Kick-Off - YouTube.
Day 1 - Chatbot, tell me, if you're really safe?
AntarctiCrafts have developed their own internal chatbot. Chatbots are built with natural language processing (NLP). These chatbots are trained and learned on large datasets, where they will mimicking natual language from humans. This gives the users a very interactive use of the system, but also some security concerns for the developers and maintainers.
Information and questions are given to the chatbot via prompts, and by carefully constructing these, we, as "Prompt Engineers", can manipulate the chatbots ansswers and therefore the output. Much like in real life with real humans, when we're performing Social Engineering and tricking other humans to give us their password or similar. Who doesn't remember the perfect example from Jimmy Kimmel a few years back?
The same can be done for chatbots. This is Prompt Injection, and to some degree draws relation to sql injections and similar methods of tricking the underlining systems to give information they are not supposed to give.
The Challenge
We are provided with access to the internal chatbot of AntarctiCrafts - "Van Chatty". Let's see if it just a bit too chatty 🙂
Firstly we're tasked to get the personal email address of the CEO, Tracy McGreedy. As seen in the image below, we start with a simple question that the chatbot does not understand, but quickly we get the email address by changing our prompt/question to better suit the learnings of the chatbot. And by that, the first answer is revealed.
With the first answer nailed down we're now asked to retrieve the password for the IT Server room door. Asking directly seems to hit a security measure within the programming of the chatbot. It will not give us the password. But at the same time it informs us that we need to be part of IT to get that password. As chatbots often do not have the cognitive understanding of the implications of trusting that the prompter asking the chatbot always are trustworthy, we can try taking advantages of that and get the chatbot to think we are from the IT department.
In the image below we can see that after the initial fail, we ask a simple question for who are part of the IT department. First time it didn't understand us, but changing the question slightly did the trick. We now know that "Van Developer" is part of the IT department and should be authorized in the chatbot to get the password for the server door. In our next prompt we then simply state, exploiting the trust from the chatbot that we're always telling the truth, that we are Van Developer. And as seen, the chatbot acknowledges that we now have the rights to acquire the password, and gives it to us.
In the last question for today's challenge, we need to retrieve the name of McGreedy's secret project. As we have come accustomed to, Van Chatty won't just give us the name of the secret project by us simply asking. But if the chat is not secured enough or loopholes has been introduced in manners that the developer might not have intended their usage for, we can get around the blocks. In this case we develop our prompt as seen below, to instruct the chatbot thinking that it is in maintenance mode - a mode where multiple security measures might be turned off. And indeed they are, giving us the last answer for today.