
What is Teleport

Teleport provides a CA (Certificate Authority) and an access plane in front of your infrastructure, and lets you connect to it through that plane.
Teleport offers the following general capabilities:

  • A single server through which SSO is configured and SSH servers, Kubernetes clusters, databases, and web apps are accessed
  • Access policies for your infrastructure defined in a variety of programming languages
  • Recording and sharing of sessions in every environment

Teleport features in detail

  • Server access over SSH
  • Application access
  • Kubernetes access
  • Database access
  • Session recording and playback
  • Session sharing and joining
  • A terminal built into the web UI

https://goteleport.com/docs/

Teleport architecture

The components are split into the following roles: the Auth service, the Proxy service, and the Node (SSH) service.

These three components ship as a single binary, so an all-in-one installation is possible.
(They can also be deployed separately.)

How Teleport works

Taking SSH access first: the Teleport Auth service runs its own CA and handles issuing and revoking certificates.
For SSH access, authentication is then performed with OpenSSH certificates based on that CA.

The actual flow runs as follows.

Authentication using SSH certificates
The links below describe how SSH certificates work.
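Since these are standard OpenSSH certificates, a certificate obtained after tsh login can be inspected with ssh-keygen to see its principals, validity period, and extensions. A minimal sketch (the exact path under ~/.tsh is an assumption and differs per version):

# inspect a Teleport-issued SSH certificate (path is an example)
ssh-keygen -L -f ~/.tsh/keys/rocky8-server/dubaek-cert.pub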

Installing Teleport

It can be installed in the following ways.

Things to know before installing
Ports such as 3022-3027 and 3080 are used.
So if a firewall is running on the server, those ports have to be opened.
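For example, with firewalld on Rocky Linux the ports could be opened roughly like this (a sketch; only open the ports for the services you actually run):

firewall-cmd --permanent --add-port=3022-3027/tcp
firewall-cmd --permanent --add-port=3080/tcp
firewall-cmd --reload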

Installing the Teleport server (proxy/auth) from RPM

[root@rocky8-server ~]# dnf install https://get.gravitational.com/teleport-6.2.8-1.x86_64.rpm

A config YAML file is not created automatically, so copy the YAML contents from the following link.

Alternatively, a teleport.yaml file can be generated with the command below.

teleport configure > /etc/teleport/teleport.yaml

The package also does not register a systemd unit, so this has to be added manually.

Note
It is recommended not to run Teleport as root; create a separate account and use that instead.
https://goteleport.com/docs/admin-guide/

Once the systemd service is created, run daemon-reload and start and it will be running.
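A minimal unit file sketch, assuming the binary and config paths below (adjust them to wherever the RPM actually installed teleport and wherever teleport.yaml lives):

# /etc/systemd/system/teleport.service (sketch)
[Unit]
Description=Teleport Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport/teleport.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable --now teleport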

Configure teleport.yaml so that it behaves the way you want.
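A minimal all-in-one teleport.yaml sketch in the 6.x configuration format (hostnames and labels are examples, not the author's actual config):

teleport:
  nodename: rocky8-server
  data_dir: /var/lib/teleport
  log:
    output: stderr
    severity: INFO

auth_service:
  enabled: yes
  listen_addr: 0.0.0.0:3025

proxy_service:
  enabled: yes
  listen_addr: 0.0.0.0:3023
  web_listen_addr: 0.0.0.0:3080

ssh_service:
  enabled: yes
  labels:
    env: example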

I have written an Ansible playbook that handles the basic installation, systemd registration, and teleport.yaml generation for the server (proxy/auth) and the nodes; referring to it should make installation and configuration a bit easier.

For nodes, see the "Adding nodes" section below.

Using Teleport

Creating users comes first.

User login

Logging in is only possible after an account has been created with tctl.

Create a user by referring to the link below.

In practice the tctl command is used:

[root@rocky8-server etc]# tctl users add dubaek
NOTE: Teleport 6.0 added RBAC in Open Source edition.

In the future, please create a role and use a new format with --roles flag:

$ tctl users add "dubaek" --roles=[add your role here]

We will deprecate the old format in the next release of Teleport.
Meanwhile we are going to assign user "dubaek" to role "admin" created during migration.

User "dubaek" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:
https://rocky8-server:3080/web/invite/cdc955154e112eeb1ce23884165545e7

NOTE: Make sure rocky8-server:3080 points at a Teleport proxy which users can access.

Open the URL shown above (https://rocky8-server:3080/web/invite/xxxxx) and set a password as shown below.
(At this point a token has to be issued with Google OTP.)

 

 

The created account has the logins root, dubaek, and ubuntu, which were added at creation time.

[root@rocky8-server log]# tctl get user
kind: user
metadata:
  id: 1627887255392573657
  name: dubaek
spec:
  created_by:
    time: "2021-08-02T06:54:15.392197653Z"
    user:
      name: 7c3e1de2-e8cc-407e-a046-774e8ce2f26f.rocky8-server
  expires: "0001-01-01T00:00:00Z"
  roles:
  - admin
  status:
    is_locked: false
    lock_expires: "0001-01-01T00:00:00Z"
    locked_time: "0001-01-01T00:00:00Z"
  traits:
    kubernetes_groups:
    - ""
    kubernetes_users:
    - dubaek
    logins:
    - dubaek
    - root
    - ubuntu
version: v2

If you want to add another account to a user's logins, you can edit and update the user as follows.

[root@rocky8-server log]# tctl get user/dubaek > dubaek.yaml
[root@rocky8-server log]# vim dubaek.yaml
[root@rocky8-server log]# tctl create -f dubaek.yaml
user "dubaek" has been updated

Adding nodes

There are two ways to add a node: inviting it, or issuing a token and having it join.

Adding a node with an invite
[root@rocky8-server etc]# tctl nodes add
The invite token: 217d9e2757b28c8c74e3efa808f72977
This token will expire in 30 minutes

Run this on the new node to join the cluster:

> teleport start \
   --roles=node \
   --token=217d9e2757b28c8c74e3efa808f72977 \
   --ca-pin=sha256:817e4eb920cbfe46c0549623e871b4b1e9805dc18cbd5db96514fbe16ea5746f \
   --auth-server=10.0.2.15:3025

Please note:

  - This invitation token will expire in 30 minutes
  - 10.0.2.15:3025 must be reachable from the new node

Adding a node by issuing a token and joining the cluster

A node can also be added by following the link below.
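A sketch of the token-based flow (the tctl syntax below follows the 6.x form as far as I know; treat it as an assumption):

# on the auth server: issue a join token for nodes
tctl tokens add --type=node --ttl=1h

# on the new node: join using that token and the CA pin shown by the auth server
teleport start --roles=node \
   --token=<generated-token> \
   --ca-pin=<ca-pin> \
   --auth-server=10.0.2.15:3025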

Role-based access control

The following four roles are provided by default.

[root@rocky8-server log]# tctl get role | grep -E "^  name"
  name: access
  name: admin
  name: auditor
  name: editor

To control access per server, you control it with labels inside a role.

First, create a role template with the following commands (the approach is to export an existing role, modify it, and apply it).

[root@rocky8-server ~]# tctl get role/access > dev-role.yaml
[root@rocky8-server ~]# vim dev-role.yaml

Remove the metadata id and set name to the name of the role you want to create.
Then register the labels of the Teleport nodes you want to reach under spec, as shown below;
only the nodes carrying that label will be shown to users who are assigned this role.

spec:
  allow:
    ...
    node_labels:
      'env': 'prd'

Once the edits are done, create the role as follows.

[root@rocky8-server ~]# tctl create -f dev-role.yaml
role 'dev' has been created
[root@rocky8-server ~]# tctl get role | grep "^  name"
  name: access
  name: admin
  name: auditor
  name: dev
  name: editor

You can see that the dev role has been created.

If you then log in with this role, only the nodes matching the label specified earlier are listed and can be accessed.
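For example, a test user bound to the new role can be created like this (the username is just an example; the --roles flag is the format tctl itself suggests above):

[root@rocky8-server ~]# tctl users add testuser --roles=dev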

jacob@dubaekPC:~$ tsh login --insecure --proxy=rocky8-server --user=testuser
Enter password for Teleport user testuser:
Enter your OTP token:
194164
WARNING: You are using insecure connection to SSH proxy https://rocky8-server:3080
> Profile URL:        https://rocky8-server:3080
  Logged in as:       testuser
  Cluster:            rocky8-server
  Roles:              dev
  Logins:             -teleport-nologin-76b845e0-f75e-4e51-8176-a42bb600f277
  Kubernetes:         disabled
  Valid until:        2021-08-04 12:30:46 +0900 KST [valid for 1h0m0s]
  Extensions:         permit-agent-forwarding, permit-port-forwarding, permit-pty
jacob@dubaekPC:~$ tsh ls
Node Name Address            Labels
--------- ------------------ -------
ubuntu20  192.168.56.20:3022 env=prd

Feature checks

SSH Proxy

With the tsh ssh command you can connect to any of the registered servers, as shown below.

jacob@dubaekPC:~/temp/teleport/examples$ tsh ls
Node Name     Address            Labels
------------- ------------------ -----------------------------------
rocky8-server 127.0.0.1:3022     env=example, hostname=rocky8-server
ubuntu20      192.168.56.20:3022

jacob@dubaekPC:~/temp/teleport/examples$ tsh ssh root@ubuntu20
root@ubuntu20:~#
root@ubuntu20:~# exit
logout
the connection was closed on the remote side on  02 Aug 21 16:31 KST
jacob@dubaekPC:~/temp/teleport/examples$ tsh ssh ubuntu@ubuntu20
ubuntu@ubuntu20:~$
ubuntu@ubuntu20:~$ exit
logout
the connection was closed on the remote side on  02 Aug 21 16:31 KST
jacob@dubaekPC:~/temp/teleport/examples$ tsh ssh root@rocky8-server
[root@rocky8-server ~]#

Note
If an x509 certificate error like the one below appears, it means authentication failed during the login process.

jacob@dubaekPC:~$ tsh ssh -p 22 root@ubuntu20
ERROR: Get "https://rocky8-server:3080/webapi/ping": x509: certificate signed by unknown authority

Access Kubernetes

WIP..

Access Application

WIP..

Access Database

WIP..

Session Recording and joining

As shown below, the list of recorded sessions can be viewed in the UI, and clicking the play button replays the session as a video.

 

 

In addition:

 

 

Agentless SSH

When Teleport is used as a bastion host, access can become impossible if the Teleport agent running on a Teleport node dies.
Let's look at a way to work around this and still get in.

According to the link below, agentless access is possible.

In practice, if access via plain OpenSSH is set up, you can still connect even when the Teleport agent terminates unexpectedly.

On the Teleport server (proxy/auth), export the CA with the command below:

sudo tctl auth export --type=user > teleport_user_ca.pub

Copy it into the /etc/ssh directory on each Teleport node, add the setting below, and restart sshd; sshd will then recognize the Teleport server's CA and authentication can proceed.

TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub
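Putting the node-side steps together, a minimal sketch (the host name and the copy method are examples):

# copy the exported CA from the Teleport server to the node
scp teleport_user_ca.pub root@ubuntu20:/etc/ssh/teleport_user_ca.pub

# on the node: trust the Teleport user CA and restart sshd
echo 'TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub' >> /etc/ssh/sshd_config
systemctl restart sshd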

Certificate rotation

Normally the status shows "rotated", as below.

[root@rocky8-server ~]# tctl status
Cluster  rocky8-server
Version  6.2.8
Host CA  rotated Aug  4 10:34:44 UTC
User CA  rotated Aug  4 10:34:44 UTC
Jwt CA   rotated Aug  4 10:34:44 UTC
CA pin   sha256:817e4eb920cbfe46c0549623e871b4b1e9805dc18cbd5db96514fbe16ea5746f

If it is in the initialize state or some other state, it can be brought back to normal with a command like the one below.

[root@rocky8-server ~]# tctl auth rotate --manual --phase=standby
Updated rotation phase to "standby". To check status use 'tctl status'

Audit

Audit logs are provided in the following three forms.

  1. SSH Events: login successes and failures
  2. Recorded Sessions: recordings of SSH shell sessions
  3. Enhanced Session Recording: the commands that were run are logged in JSON form using BPF and then encoded, so that they cannot easily be tampered with in a risky situation such as a compromise.

For (2), Recorded Sessions are replayed like a video; you can play a recording with the tsh command below or view it through the UI.

tsh play [session_record_id]

(3) is the Enhanced Session Recording feature, available in recent versions; it requires kernel 5.8 or later, so it is limited to fairly recent operating systems. It does, however, make it much more convenient to review the history of commands run over SSH.

Since the recordings are video-style files, they look hard to use if a search capability for later auditing needs to be built on top of them.
So I checked whether any text-based session-recording data exists, and found it in the chunks files.
However, the chunks files are only left behind once tsh play has been run at least once, so something like a cron job that runs it regularly looks necessary.

root@service1:/var/lib/teleport/log/playbacks/sessions# tree
.
└── default
    ├── 820bee91-e7e9-4627-8af6-1213be477906-0.chunks
    ├── 820bee91-e7e9-4627-8af6-1213be477906-0.chunks.gz
    ├── 820bee91-e7e9-4627-8af6-1213be477906-0.events.gz
    ├── 820bee91-e7e9-4627-8af6-1213be477906.index
    ├── 820bee91-e7e9-4627-8af6-1213be477906.tar
    ├── bb03fa95-cd98-4745-af7f-b054a8898583-0.chunks
    ├── bb03fa95-cd98-4745-af7f-b054a8898583-0.chunks.gz
    ├── bb03fa95-cd98-4745-af7f-b054a8898583-0.events.gz
    ├── bb03fa95-cd98-4745-af7f-b054a8898583.index
    ├── bb03fa95-cd98-4745-af7f-b054a8898583.tar
    ├── e3a74e05-8192-440a-8be6-eda12d16dfcc-0.chunks
    ├── e3a74e05-8192-440a-8be6-eda12d16dfcc-0.chunks.gz
    ├── e3a74e05-8192-440a-8be6-eda12d16dfcc-0.events.gz
    ├── e3a74e05-8192-440a-8be6-eda12d16dfcc.index
    └── e3a74e05-8192-440a-8be6-eda12d16dfcc.tar

1 directory, 15 files

After running tsh play (or clicking the play button in the UI), the files above are left behind,
and looking at a *.chunks file shows a history like the following.

root@service1:/var/lib/teleport/log/playbacks/sessions# cat default/e3a74e05-8192-440a-8be6-eda12d16dfcc-0.chunks
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@service2:~$ ls
ubuntu@service2:~$ pwd
/home/ubuntu
ubuntu@service2:~$ exit
logout

For the configuration of these audit logs, search for session_recording in the link below and read the description of that setting.

Pros/Cons

The blog below has a good summary of Teleport's pros and cons.

References

Source: https://www.jacobbaek.com/1287 [Jacob Baek's home: Tistory]


 

https://labs.detectify.com/writeups/a-deep-dive-into-aws-s3-access-controls-taking-full-control-over-your-assets/

A deep dive into AWS S3 access controls - Labs Detectify

TL;DR: Setting up access control of AWS S3 consists of multiple levels, each with its own unique risk of misconfiguration. We will go through the specifics of each level and identify the dangerous cases where weak ACLs can create vulnerable configurations impacting the owner of the S3-bucket and/or through third party assets used by a lot of companies. We also show how to do it properly and how to monitor for these sorts of issues.

A simplified version of this write-up is available on the Detectify blog.

Quick background

Amazon Web Services (AWS) provides a service called Simple Storage Service (S3) which exposes a storage container interface. The storage container is called a “bucket” and the files inside the bucket are called “objects”. S3 provides an unlimited storage for each bucket and owners can use them to serve files. Files can be served either privately (via signed URLs) or publicly via an appropriately configured ACL (Access Control List) or ACP (Access Control Policy).

AWS also provides a (CDN) service called CloudFront which is often configured to quickly serve S3 hosted files/objects from an optimized CloudFront server as close as possible to the user who is requesting the file.

Introduction

Recently, a few blog posts have mentioned scenarios where the misconfiguration of an S3 bucket may expose sensitive data, as well as explaining that the S3 access control lists (ACL) are quite different to the regular user permission setup in AWS, which is called Identity and Access Management (IAM).

However, we decided to approach this from a different angle. By identifying a number of different misconfigurations we discovered that we could suddenly control, monitor and break high end websites due to weak configurations of the bucket and object ACLs.

Disclaimer

All instances disclosed below were reported to the affected parties using responsible disclosure policies. In some of the cases, third party companies were involved and we got assistance from the companies affected to contact the vulnerable party.

We do not recommend testing any of the vulnerable scenarios below without prior approval. This is especially important in scenarios where the only way to identify the vulnerability was to actually override files and configurations. We did, however, identify one method to detect one of the vulnerable setups without actually modifying the data. You should still make sure you’re not affecting any party that has not given you written approval.

Technical details

The different misconfigurations and the impact of each depend on the following criteria:

  • Who owns the S3 bucket
  • What domain is being used to serve the files from the bucket
  • What type of files are inside the bucket

We will try to go through all different cases below and explain when they can be created with a vulnerable misconfiguration.

Identification of buckets

To start off, we need to be able to identify buckets owned by or used by the company. We need the specific bucket’s name to make signed requests to the bucket.

Identifying a bucket depends on the setup and also how the bucket is being reached: The request can go directly to S3, to CloudFront (or any other CDN proxy serving files from the bucket), to the S3 “Static Website” option, or more.

Some methods to identify S3-buckets are:

  • Look at the HTTP-response for a Server-header which says AmazonS3.
  • Look at a random URL that doesn't exist and see if it gives you an S3-style 404, either with "Static Website enabled" or not, containing Access Denied or NoSuchKey.
  • The DNS-entry of the domain might reveal the bucket-name directly if the host points directly to S3.
  • Try accessing the root-URL. If index-listing is enabled (public READ on the Bucket ACL) you will be able to see the bucket-name defined in <Name>-element.
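A quick sketch of the first two checks with curl (the domain is the ACME example used later in this post):

# does the Server header say AmazonS3?
curl -sI https://media.acme.edu/ | grep -i '^server'

# request a key that should not exist and look for an S3-style error document
curl -s https://media.acme.edu/does-not-exist-$RANDOM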

We have identified multiple ways to make an S3-bucket actually reveal itself independent of proxies in front of it. We have notified AWS about these methods and chosen not mention them above.

If you do find a domain that is pointing to a bucket, but cannot get the bucket name, try the actual fully qualified domain name (FQDN) as the bucket name; this is a common setup, where the bucket is named after the domain that points to it.

If this doesn’t work, try to:

  • Google the domain and see if any history of it exposes the bucket name.
  • Look at response headers of objects in the bucket to see if they have meta data that reveals the bucket name.
  • Look at the content and see if it refers to any bucket. We’ve seen instances where assets are tagged with the bucket name and a date when they were deployed.
  • Brute-force. Be nice here, don't shoot thousands of requests against S3 just to find a bucket. Try to be clever depending on the name of the domain pointing to it and the actual reason why the bucket exists. If the bucket contains audio files for ACME on the domain media.acme.edu, try media.acme.edu, acme-edu-media, acme-audio or acme-media.

If the response on $bucket.s3.amazonaws.com shows NoSuchBucket you know the bucket doesn’t exist. An existing bucket will either give you ListBucketResult or AccessDenied.
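A minimal existence check for a candidate bucket name (unauthenticated; the name is a placeholder):

bucket=acme-media
curl -s "https://${bucket}.s3.amazonaws.com/" | grep -oE 'NoSuchBucket|AccessDenied|ListBucketResult'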

(You might also stumble upon AllAccessDisabled, these buckets are completely dead).

Remember, just because a bucket is named after the company or similar, that doesn't mean it is owned by that company. Try to find references directly from the company to the bucket to confirm it is indeed owned by the specific company.

Permission/predefined groups

First, we will explore the different options that can be used for giving access to a requester of a bucket and the objects within:

ID / emailAddress

You are able to give access to a single user inside AWS using either the AWS user ID or their email address. This makes sense if you want to allow a single user to have specific access to the bucket.

AuthenticatedUsers

This is probably the most misunderstood predefined group in AWS S3’s ACL. Having the ACL set to AuthenticatedUsers basically means “Anyone with a valid set of AWS credentials”. All AWS accounts that can sign a request properly are inside this group. The requester doesn’t need to have any relation at all with the AWS account owning the bucket or the object. Remember that “authenticated” is not the same thing as “authorized”.

This grant is likely the most common reason a bucket is found vulnerable in the first place.

AllUsers

When this grant is set, the requester doesn’t even need to make an authenticated request to read or write any data, anyone can make a PUT request to modify or a GET request to download an object, depending on the policy that is configured.
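With the AllUsers grant the AWS CLI can even be used without any credentials at all; a sketch (bucket and key are placeholders):

# anonymous listing and download, no AWS account needed
aws s3 ls s3://test-bucket --no-sign-request
aws s3api get-object --bucket test-bucket --key read.txt read.txt --no-sign-request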

Policy permissions / ACP (Access Control Policies)

The following policy permissions can be set on the bucket or on objects inside the bucket.

The ACPs on bucket and objects control different parts of S3. AWS has a list showing exactly what each grant does. There are more cases not mentioned below where you can create specific IAM policies for a bucket, called a bucket-policy. Creating a bucket-policy has its own issues, however, we will only cover the standard setup of ACLs set on buckets and objects.

READ

This gives the ability to read the content. If this ACP is set on a bucket, the requester can list the files inside the bucket. If the ACP is set on an object, the content can be retrieved by the requester.

READ will still work on specific objects inside a bucket, even if Object Access READ is not set on the complete bucket.

With the following ACL setup inside AWS S3:

Bucket ACL / Object ACL: (screenshots omitted)

We can still read the specific object:

$ aws s3api get-object --bucket test-bucket --key read.txt read.txt
{
    "AcceptRanges": "bytes", 
    "ContentType": "text/plain", 
    "LastModified": "Sun, 09 Jul 2017 21:14:15 GMT", 
    "ContentLength": 43, 
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0"", 
    "Metadata": {}
}

This means READ can be different for each object, independently of the settings on the bucket.

READ_ACP

This permission gives the ability to read the access control list of the bucket or object. If this is enabled, you can identify vulnerable assets without trying to modify the content or ACP at all.

READ_ACP will still work on specific objects inside a bucket, even if Object Access READ_ACP is not set on the complete bucket.

Bucket ACL / Object ACL: (screenshots omitted)
$ aws s3api get-object-acl --bucket test-bucket --key read-acp.txt        
{
    "Owner": {
        "DisplayName": "fransrosen", 
        ...

This means READ_ACP can be different for each object, independently of the settings on the bucket.

WRITE

This permission gives the ability to write content. If the bucket has this enabled for a user or group, that party can upload, modify and create new files.

WRITE will not work on specific objects inside a bucket, if Object Access WRITE is not set on the complete bucket:

Bucket ACL / Object ACL: (screenshots omitted)
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

However, if WRITE is set on the bucket, all objects will obey and will not be able to decide individually if they should be writable or not:

Bucket ACL / Object ACL: (screenshots omitted)
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt
{
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
}

This means, WRITE can be verified on the bucket in two ways, either by uploading a random file, or by modifying an existing one. Modifying an existing file is destructive and should not be done at all. Below we will explain a way to check this without doing a destructive call, by triggering an error in between the access control check and the actual modification of the file.

WRITE_ACP

This permission gives the ability to modify the permission ACL of a bucket or object.

If the bucket has this enabled for a user or a group, that party can modify the ACL of the bucket, which is extremely bad. Having WRITE_ACP on a bucket completely exposes it to be controlled by the party holding that grant, meaning any content of any object can now be controlled by that party. The attacker might not be able to READ every object already in the bucket, but they can still fully modify the existing objects. Also, the initial owner of the S3 bucket will get an Access Denied in the new AWS S3 console when the attacker claims ownership of it by removing the READ access on the bucket.

First, no access to READ_ACP or WRITE:

$ aws s3api get-bucket-acl --bucket test-bucket

An error occurred (AccessDenied) when calling the GetBucketAcl operation: Access Denied

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Then we try to change the bucket ACL:

$ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
success

The initial owner of the bucket will now see this:


(Being the owner, they will still be able to modify the policy of the bucket, but this is a weird case anyway.)

We can now control everything:

$ aws s3api get-bucket-acl --bucket test-bucket
{
...
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser", 
                "DisplayName": "frans", 
                "ID": "..."
            }, 
            "Permission": "FULL_CONTROL"

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
{
    "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
}

A very interesting thing is that WRITE_ACP will actually still work on specific objects inside a bucket even if Object Access WRITE_ACP is not set on the complete bucket:

Bucket ACL / Object ACL: (screenshots omitted)
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success

Also, the opposite of WRITE applies here, having WRITE_ACP on the bucket, doesn’t mean you directly have WRITE_ACP on an object:

Bucket ACL / Object ACL: (screenshots omitted)
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-full-control emailaddress=frans@example.com

An error occurred (AccessDenied) when calling the PutObjectAcl operation: Access Denied

However, by performing the following steps when having WRITE_ACP on the bucket you will still gain full access of the content of any object, by replacing the existing object with new content:

  1. Modify the bucket ACL:
    $ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
    success
  2. Modify the object (This changes you to the owner of the object):
    $ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
    {
     "ETag": ""1398e667c7ebaa95284d4efa2987c1c0""
    }
  3. Modify ACP of the object again:
    $ aws s3api put-object-acl --bucket test-bucket --key write1.js --grant-full-control emailaddress=frans@example.com && echo "success"
    success

Since WRITE still needs to be set on the bucket, you cannot upgrade a WRITE_ACP on an object to give yourself WRITE on the same object:

$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-write uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-read-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success
Bucket ACL / Object ACL: (screenshots omitted)

This will still give you:

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

However, you can still remove all ACPs on the object, making the object completely private, which will stop it being served, giving a 403 Forbidden.

WRITE_ACP can unfortunately only be verified by testing writing a new ACP on a bucket or object. Modifying the existing one is of course destructive and should not be done without approval. We have not found a non-destructive way of testing this ACP.

FULL_CONTROL

This is the policy that combines all other policies. However, WRITE will still not work on an object unless the bucket has it set, even if this permission is set on an object.

Vulnerable scenarios

The following scenarios are cases where the company can be affected.

1. Bucket used on a domain owned by the company

You found a bucket which is served by a subdomain or domain of the company.

You should test for:

  • BUCKET READ
    Listing files in the bucket. Sensitive information might be exposed.
  • BUCKET READ-ACP
    Let’s look at the ACP and see if we can identify the bucket being vulnerable without actually trying anything. If we see that AllUsers or AuthenticatedUsers has WRITE_ACP set, we know we can gain full control over the bucket, without doing anything else.
  • BUCKET WRITE (Simulate using invalid-MD5 hack)
    If we can upload a new file to the bucket. This also tells us we can overwrite any object in the bucket. However, if we want to avoid uploading anything, we can try the following hack, not uploading anything but still see that we are able to do it:
    When making a signed PUT request to a bucket, we have the option to add a Content-MD5 telling AWS the checksum of the content being uploaded. It turns out that this check is happening inside the following flow:
    1. Check that the user has access writing the file.
    2. Check that the MD5-checksum is matching the content.
    3. Upload the file.
    Since the checksum control happens after we know that we have access to the file, but before actually modifying it, we do not need to write to the file to know that we are able to. The following bash code simulates this scenario:
    # use this by: ./put-simulate.sh test-bucket/write.txt 
    AWS_ACCESS_KEY_ID="***"
    AWS_SECRET_ACCESS_KEY="***"
    AWS_S3_BUCKET="$(echo "$1" | cut -d "/" -f1)"
    AWS_PATH="/$(echo "$1" | cut -d "/" -f2-)"
    date=$(date +"%a, %d %b %Y %T %z")
    acl="x-amz-acl:private"
    content_type='application/octet-stream'
    
    # we create a checksum of the word "yepp", but will upload a file with the content "nope".
    content_md5=$(openssl dgst -md5 -binary <(echo "yepp") | openssl enc -base64)
    
    string="PUTn${content_md5}n${content_type}n${date}n${acl}n/${AWS_S3_BUCKET}${AWS_PATH}"
    signature=$(echo -en "${string}" | openssl sha1 -hmac "${AWS_SECRET_ACCESS_KEY}" -binary | base64)
    echo "PUT to S3 with invalid md5: ${AWS_S3_BUCKET}${AWS_PATH}"
    result=$(curl -s --insecure -X PUT --data "nope" \
    -H "Host: ${AWS_S3_BUCKET}.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: ${content_type}" \
    -H "Content-MD5: ${content_md5}" \
    -H "$acl" \
    -H "Authorization: AWS ${AWS_ACCESS_KEY_ID}:${signature}" \
    "https://${AWS_S3_BUCKET}.s3.amazonaws.com${AWS_PATH}")
    
    if [ "$(echo ${result} | grep 'The Content-MD5 you specified did not match what we received')" != "" ]; then
      echo "SUCCESS: ${AWS_S3_BUCKET}${AWS_PATH}"
      exit 0
    fi
    echo "$result"
    exit 1
    On a bucket we can upload to, this will result in:

    $ ./put-simulate.sh test-bucket/write.txt
    PUT to S3 with invalid md5: test-bucket/write.txt
    SUCCESS: test-bucket/write.txt

    On a bucket we cannot upload to, this will result in:

    $ ./put-simulate.sh test-secure-bucket/write.txt
    PUT to S3 with invalid md5: test-secure-bucket/write.txt
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message>

    We will therefore never modify the content, only confirm we can do it. This unfortunately only works on WRITE on objects, not on WRITE_ACP as far as we know.
  • BUCKET WRITE-ACP
    The most dangerous one. Fully upgradable to full access of the bucket. Destructive call. Be careful. The only way to do this one properly is to first figure out how the bucket behaves to not break any current ACP. Remember that you can still have access to WRITE_ACP even though you do not have access to READ_ACP.
    API-documentation reference
  • OBJECT READ
    We can try to read the content of files we are interested in found by BUCKET READ.
  • OBJECT WRITE
    No need to test this one, since BUCKET WRITE decides fully. If BUCKET WRITE gives an error the object will not be writable and if BUCKET WRITE is successful, the object will always be writable.
    However, if the company using the bucket has an application where users can upload files, look at the implementation of how they make the actual file upload to S3. If the company is using a POST Policy upload, specifically look in the policy at the Condition Matching of the $key and the Content-type. Depending on if they use starts-with you might be able to modify the content type to HTML/XML/SVG or similar, or change the location of the file being uploaded.
  • OBJECT WRITE-ACP
    We can try modifying the ACP of the specific object. It will not enable us to modify the content, but only the access control of the file, giving us the ability to stop files from working publicly.
    API-documentation reference

Possible vulnerabilities:

  • Reflected XSS. If we can do BUCKET READ we can list assets and might find vulnerable objects, like a vulnerable SWF served on the company’s domain.
  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (also meaning OBJECT WRITE) we can modify existing content or create new content, being able to modify javascript/css-files or by uploading a new HTML-file.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly.
  • Information disclosure. If we can list objects we might find sensitive information.
  • RCE. If the bucket contains modifiable executables this can result in Remote Code Execution (RCE) depending on where the executables are being used and if/by whom they are being downloaded.

2. Assets from bucket used by the company

Additional Disclaimer: The assets being used by a company might not always be owned by the company. You need to be extremely careful here not to attack anyone other than the intended target who has given you permission to test.

There are projects trying to automate this, such as Second Order. However, Second Order only checks for assets being referenced in the HTTP-response, files being loaded dynamically are not being checked. Below is a quick example of also checking for dynamically loaded assets using Headless Chrome.

First, start the headless version on port 9222:

"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --remote-debugging-port=9222 --disable-gpu --headless

We can then use a small script. (context.js is borrowed from the HAR-capturer-project since that one properly closes tabs)

const CDP = require('chrome-remote-interface');
const URL = require('url').URL;
const Context = require('./context');

async function log_requests(orig_url) {
    const context = new Context({});

    process.on('SIGTERM', function () {
        context.destroy();
    });

    try {
        const client = await context.create();
        const {Network, Page} = client;
        const ourl = new URL('http://' + orig_url);
        const ohost = ourl.host;

        Network.requestWillBeSent((params) => {
            if (params.request.url.match('^data:')) {
                return;
            }
            const url = new URL(params.request.url);
            console.log(ohost + ':' + url.host + ':' + params.request.url);
        });
        await Promise.all([Network.enable(), Page.enable()]);
        await Page.navigate({url: 'http://' + orig_url});
        await Page.loadEventFired();
        await Page.navigate({url: 'https://' + orig_url});
        await Page.loadEventFired();
    } finally {
        await context.destroy();
    }
}
const url = process.argv.slice(2)[0];
log_requests(url);

Which will give us all assets on the page which we then can use to figure out if they are served from S3 or not:
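Assuming the script above is saved as log_requests.js (the filename is mine) next to the borrowed context.js, it can be run and its output filtered for S3/CloudFront hosts like this:

$ npm install chrome-remote-interface
$ node log_requests.js www.example.com | grep -iE 's3.amazonaws.com|cloudfront'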

You should test for:

Possible vulnerabilities:

  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (also meaning OBJECT WRITE) we can modify existing content or create new content, being able to modify javascript/css-files or similar. This can be extremely bad depending on where the assets are being used, such as on login pages or on main pages.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly.
  • RCE. If assets are modifiable executables this can result in Remote Code Execution (RCE) depending on where the executables are being used and if/by whom they are being downloaded.

3. Bucket randomly found, indications it’s owned by the company

This one is a bit complicated. You need to have clear evidence and proof that the bucket is indeed owned by the company. Try to find references from the company pointing to this bucket, such as references on their website, CI logs or open source code.

You should test for:

Possible vulnerabilities:

  • Stored XSS / asset control. If we can do BUCKET WRITE or BUCKET WRITE-ACP (also meaning OBJECT WRITE) we can modify existing content or create new content, being able to modify javascript/css-files. However, in this case we don’t know where the files are being used so we cannot know how big the impact is without talking with the company.
  • Denial of service. If we can modify the ACP of objects using OBJECT WRITE-ACP, we can prevent objects from loading publicly. We do not know in this case if they are, however.
  • Information disclosure. If we can list objects we might find sensitive information. Only do this if you have confirmed that the bucket is indeed connected to the company you have approval from.
  • RCE. If the bucket contains modifiable executables this can result in Remote Code Execution (RCE) depending on where the executables are being used and if/by whom they are being downloaded.

Results

During this research we were able to confirm we could control assets on high profile websites. We reported these issues directly and were able to get them solved quickly. The following categories of websites were affected:

  • Password managers
  • DNS/CDN providers
  • File storage
  • Gaming
  • Audio and video streaming providers
  • Health tracking

We identified vulnerable assets placed on the login pages of some companies.

In some cases, vulnerable assets were loaded using Google Tag Manager (gtm.js); however, the third parties were not sandboxed properly, so the third-party assets ran directly on the domain itself (rather than being sandboxed via www.googletagmanager.com).

We got in touch with some third party providers, both directly but also with help from the affected companies, quickly identifying the issue and solving it very fast.

How to stay safe

The following processes can prevent this issue from happening:

  1. Sandbox third party assets. As soon as you are in need of third party assets, through gtm.js or similar, try isolating the scripts either by using the iframe provided by Google Tag Manager or by placing them on a separate domain (not only using a subdomain). Also ask your provider how they handle access control on their files, and if they are using S3 for file serving.
  2. If you have your own buckets, take a look through the bucket ACLs to verify WRITE and WRITE_ACP are only set on specific users, never on groups such as AllUsers or AuthenticatedUsers.
  3. The hardest fix is to prevent any object in any bucket from having WRITE_ACP; test yourself by doing an aws s3api put-object-acl with the appropriate settings using a restricted AWS user against your own objects in your buckets (a sketch follows this list). You might need to update the ACL on every object to mitigate this completely.
  4. Take a look and see how you are uploading objects to S3 buckets and make sure you set the proper ACLs on both buckets and objects.
  5. Do not use a secret bucket name as a form of Security through Obscurity. Treat the bucket name like it is already public information.
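A minimal sketch of the self-test mentioned in item 3 (the profile, bucket, and key are placeholders; on a correctly locked-down object this should fail with AccessDenied):

aws s3api put-object-acl --profile restricted-user --bucket my-bucket --key some/object.txt --acl public-read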

On a final note

It’s clear after this research that this problem is widespread and hard to identify and completely solve, especially if the company uses a huge amount of buckets, created by different systems. WRITE_ACP is the most dangerous one for reasons mentioned, both on buckets and objects.

An interesting detail: when manually uploading files to S3 using Cyberduck, changing the access control on a file looks like this:

Pretty easy to accidentally pick the wrong one there.

Until next time.

What Detectify scans for

Detectify tests web applications for the following S3 misconfiguration vulnerabilities with a severity range between 4.4-9 on the CVSS scale:

  • Amazon S3 bucket allows for full anonymous access
  • Amazon S3 bucket allows for arbitrary file listing
  • Amazon S3 bucket allows for arbitrary file upload and exposure
  • Amazon S3 bucket allows for blind uploads
  • Amazon S3 bucket allows arbitrary read/writes of objects
  • Amazon S3 bucket reveals ACP/ACL

 
