null@nothing $

Exploring kernel exploitation and reverse engineering.

15 April 2026

Building AOSP & Setting Up a Native Debugging Environment

by 0xnull007

If you’ve ever wanted to step through the Android Runtime with a debugger and set breakpoints inside libart.so, you need to build AOSP from source. There’s no shortcut. The prebuilt SDK images that ship with Android Studio are stripped of debug symbols, and no corresponding symbol files are distributed. So even if you can push modified libraries to the emulator, you can’t get meaningful debugger output without building from source.

This post documents the complete process I followed: from cloning the AOSP tree to booting a custom emulator and attaching GDB to zygote64. It’s the first part of a series on Android internals, where we’ll eventually dig into the ART runtime. But none of that is possible without the environment, so let’s build it.

01) Prerequisites: Hardware and Expectations

Before you commit to this, here’s what you need to know about resources. A full AOSP checkout is roughly 100GB of source code. A single build of the emulator target adds another 150–200GB of intermediate and output files.

Resource     Minimum            Recommended
CPU cores    8                  16+
RAM          32 GB              64 GB
Storage      400 GB NVMe SSD    500+ GB NVMe

I ran this on a laptop with 20 cores, 32GB of RAM, and a 1TB NVMe. The 32GB turned out to be tight; more on that shortly.
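Before committing disk space and hours, it’s worth a quick sanity check against the table above. A small sketch using standard Linux tools (it assumes the AOSP tree will live under your home directory):

```shell
# Quick resource check before starting an AOSP build
echo "CPU cores: $(nproc)"

# Total RAM in GB
free -g | awk '/^Mem:/ {print "RAM (GB): " $2}'

# Free space on the filesystem that will hold the source tree
df -h "$HOME" | awk 'NR==2 {print "Free on HOME: " $4}'
```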

A note on swap

If you have 32GB of RAM, you will hit OOM kills during the link stage. The LLVM linker (ld.lld) can consume several gigabytes per instance, and with multiple link jobs running in parallel, memory usage spikes well past 32GB. The solution is swap, but not just any swap.

My first attempt was plugging in a 128GB USB drive and using the entire thing as a swap partition. This was a mistake. Even USB 3.0 tops out at 100–400 MB/s for random reads, compared to 3000+ MB/s on NVMe. When the build hit swap, it didn’t just slow down; it effectively froze, thrashing for minutes on end.

Solution: Create swap on your NVMe instead. 16GB is plenty; it’s a safety net for the link spikes, not something the build should live in. If you’re on ZFS (as I was), use a zvol rather than a swap file, since ZFS has caveats with swap files:

# ZFS swap
sudo zfs create -V 16G rpool/swap
sudo mkswap /dev/zvol/rpool/swap
sudo swapon /dev/zvol/rpool/swap

# Standard ext4/btrfs swap
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
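Whichever route you take, verify the swap is actually live before kicking off a build. It’s also worth checking vm.swappiness, a standard kernel knob; lowering it (e.g. to 10 via sysctl) keeps the build out of swap until there’s real memory pressure:

```shell
# Confirm the swap device is active and sized as expected
cat /proc/swaps
free -h

# How eagerly the kernel swaps (lower = prefer RAM)
cat /proc/sys/vm/swappiness
```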

02) Cloning the AOSP Source Tree

First, install the dependencies. On Ubuntu 22.04 or 24.04:

sudo apt install -y git-core gnupg flex bison build-essential \
  zip curl zlib1g-dev libc6-dev-i386 lib32z1-dev \
  x11proto-core-dev libx11-dev libgl1-mesa-dev \
  libxml2-utils xsltproc unzip fontconfig python3 lldb

Install the repo tool, which manages the hundreds of individual Git repositories that make up AOSP:

mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH

Now initialize and sync. I chose android-14.0.0_r1, the first Android 14 release tag, because I needed a pre-patch version of a specific ART vulnerability. You can list all available Android 14 tags with git tag -l "android-14*" inside .repo/manifests.

mkdir ~/aosp && cd ~/aosp
repo init -u https://android.googlesource.com/platform/manifest -b android-14.0.0_r1
repo sync -c -j$(nproc) --no-tags --no-clone-bundle

The -c flag syncs only the current branch, which is significantly faster than a full mirror. This still downloads around 100GB and can take anywhere from 30 minutes to several hours depending on your connection.

Switching branches later: Since you already have the full repo, switching to a different tag is incremental: repo init -b android-14.0.0_r50 && repo sync -c -j$(nproc). Only the changed files are re-fetched.
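As mentioned above, the release tags live in the manifest repository. A one-liner to browse them in proper version order (run from the top of the checkout; sort -V handles the _rN suffixes better than plain sort):

```shell
# List Android 14 release tags, oldest to newest
git -C .repo/manifests tag -l "android-14*" | sort -V
```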

Sync issues: don’t panic

The initial sync can be flaky. During my first clone I hit network timeouts and fetch failures, and at one point the sync kept dying repeatedly. The fix turned out to be reducing the number of parallel jobs: the default -j$(nproc) opens too many concurrent connections, and some of them get throttled or dropped by Google’s servers:

# If repo sync keeps failing, dial back the parallelism
repo sync -c -j4

The other thing worth knowing: if a sync fails partway through, just run repo sync again. It resumes from where it left off; it doesn’t re-download the repositories it already fetched. I had to run repo sync three or four times before the full tree was pulled down cleanly. Each run picked up where the last one died. Don’t delete the directory and start over.
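If you’d rather not re-run the command by hand, the retry can be scripted. This is just a sketch around the behavior described above (repo sync exits non-zero on failure), with a cap so it can’t loop forever:

```shell
# Retry repo sync until it succeeds, giving up after 10 attempts.
# Call sync_with_retry from the top of the AOSP checkout.
sync_with_retry() {
  attempt=0
  until repo sync -c -j4 --no-tags --no-clone-bundle; do
    attempt=$((attempt + 1))
    [ "$attempt" -ge 10 ] && return 1
    echo "sync failed, retrying in 30s..."
    sleep 30
  done
}
```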

03) Understanding the lunch Menu

After syncing, set up the build environment and choose your target:

source build/envsetup.sh
lunch

This prints a numbered menu of “common” build targets and prompts you to pick one. When I first saw this, the prompt at the bottom, “Pick from common choices above or specify your own”, was confusing. Here’s what’s actually going on.

The lunch command sets two things: a product (what you’re building for) and a variant (how it’s built). The format is <product>-<variant>.

Variants

Variant     Root   Debug     Use case
user        No     No        Production release builds
userdebug   Yes    Partial   Development, near-production behavior
eng         Yes    Full      Maximum debug logging and assertions

Products: what not to pick

The menu lists dozens of targets. The named ones like aosp_oriole, aosp_raven, and aosp_barbet are real Pixel hardware targets (Pixel 6, Pixel 6 Pro, and Pixel 5a respectively). Building these produces images meant for physical devices and requires proprietary vendor blobs you likely don’t have.

The aosp_cf_* targets are Cuttlefish, Google’s cloud-oriented virtual device that uses crosvm (Chrome OS’s VMM). It works but requires additional host-side tooling (launch_cvd, a specific Debian host package) and is overkill for our purposes.

What to actually pick

For emulator-based development, you want one of these:

Target                        Description
aosp_x86_64-userdebug         Minimal AOSP emulator image
sdk_phone_x86_64-userdebug    SDK emulator target (better stability)

The critical insight: you can type any valid target directly; you don’t have to pick from the numbered list. The menu only shows “common” combos. So even though aosp_x86_64-userdebug might not appear in the list, you can type it and it will be accepted if the product definition exists in the tree.

I initially went with aosp_x86_64-userdebug, but after fighting emulator graphics crashes (details below), I switched to sdk_phone_x86_64-userdebug. The sdk_phone target includes the full emulator GPU stack (SwiftShader, Goldfish OpenGL drivers) and is what Google uses to build the system images that ship with Android Studio. It’s heavier but significantly more stable in the emulator.

Why x86_64? Use x86_64 unless you specifically need ARM behavior. It runs natively on your host CPU with hardware virtualization (KVM), making it dramatically faster than emulating ARM64. ART’s core logic (JNI, GC, class linking) is architecture-independent.

lunch sdk_phone_x86_64-userdebug
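lunch works by exporting a handful of environment variables that every later m invocation reads; echoing them is a quick way to confirm you’re building what you think you are (the variable names below are the standard AOSP ones):

```shell
# Show the build configuration lunch just exported
echo "Product: $TARGET_PRODUCT"
echo "Variant: $TARGET_BUILD_VARIANT"
echo "Output:  $ANDROID_PRODUCT_OUT"
```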

04) Building

With the target selected, build:

m -j$(nproc)

If you have 32GB of RAM, don’t do this. I started with -j18 (I have 20 cores) and my terminal got killed; the OOM killer struck during the linking phase. Even after adding swap, I had to dial it back:

RAM                  Safe -j value
64 GB                -j$(nproc)
32 GB + NVMe swap    -j6 to -j10
32 GB, no swap       -j4

With 32GB and NVMe swap, I settled on:

m -j10

The first full build takes 1–4 hours depending on hardware. Close browsers and anything heavy. Chrome alone can eat 4–8GB.
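The table can be collapsed into a rough rule of thumb. The 4GB-per-job budget below is my own heuristic for the link-stage spikes, not an official number, so tune it to your machine:

```shell
# Suggest a -j value: ~4 GB of RAM per job, capped at the core count
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
jobs=$(( ram_gb / 4 ))
[ "$jobs" -gt "$(nproc)" ] && jobs=$(nproc)
[ "$jobs" -lt 1 ] && jobs=1
echo "Suggested: m -j$jobs"
```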

Incremental builds

After the first build, you almost never need to rebuild everything. If you’re modifying the ART runtime, rebuild just ART:

# Rebuild just the ART runtime (~2 minutes)
m libart -j$(nproc)

# Rebuild the debug variant with extra assertions
m libartd -j$(nproc)

The iterative loop becomes: edit source, rebuild module, push to emulator, restart runtime, attach debugger — about 30–60 seconds per cycle.

05) Build Output: Where Everything Lives

Understanding the output directory structure is important, especially for debugging. Each lunch target gets its own output directory, so different targets never overwrite each other.

out/target/product/emulator_x86_64/
├── system.img              ← system partition (stripped binaries)
├── vendor.img
├── userdata.img
├── ramdisk.img
├── kernel
└── symbols/                ← unstripped libraries (for debugger)
    └── apex/com.android.art/lib64/
        ├── libart.so       ← full debug symbols
        └── libartd.so      ← debug variant + runtime assertions

The key distinction: system.img contains stripped binaries (no symbols, smaller size). The symbols/ directory contains the same binaries unstripped. When you attach a debugger from your host, you point it at symbols/ and it maps the symbol information onto the running stripped binary on the device. You never need to push unstripped binaries to the emulator.

libart.so vs libartd.so

Both use the same core logic. The difference is what happens at runtime:

                      libart.so                libartd.so
Assertions (DCHECK)   Compiled out (no-ops)    Live (every assumption is checked)
Heap verification     Disabled                 Can be enabled (-Xgc:preverify)
Performance           Optimized                Slower, more verbose
Crash behavior        Raw SIGSEGV              Detailed diagnostic before crash

For vulnerability research, this distinction matters. With libartd.so and heap verification enabled, triggering a use-after-free will give you a clean stack trace showing exactly where ART detected the bad state, instead of just a raw crash in the copying collector.

06) Launching the Emulator (and Everything That Went Wrong)

In theory, booting the emulator is one command:

source build/envsetup.sh
lunch sdk_phone_x86_64-userdebug
emulator -writable-system -no-snapshot-load

In practice, I hit four distinct issues before getting a successful boot.

Issue 1: Missing userdata image

Error: qemu-system-x86_64: Could not open '.../userdata-qemu.img': No such file or directory

This was the first error I hit when launching the emulator, back when I had built aosp_x86_64-userdebug. The build didn’t generate a userdata-qemu.img file, and the -wipe-data flag, which is supposed to create a fresh one, didn’t work either. The fix was creating it manually:

fallocate -l 10G out/target/product/generic_x86_64/userdata-qemu.img
mkfs.ext4 -L userdata out/target/product/generic_x86_64/userdata-qemu.img

10GB matches the size the emulator tries to resize to anyway. The sdk_phone_x86_64-userdebug target doesn’t have this issue.

Issue 2: Segfault on launch (Vulkan/GPU)

Error: Segmentation fault (core dumped) immediately after the gRPC server starts.

The aosp_x86_64 target’s graphics stack wasn’t playing nicely with the emulator’s Vulkan integration. The fix was forcing software rendering:

emulator -writable-system -no-snapshot-load -gpu swiftshader_indirect

This makes the emulator use SwiftShader (CPU-based Vulkan/OpenGL implementation) instead of trying to pass through to the host GPU. Slower rendering, but it works. Ultimately, switching to the sdk_phone_x86_64 target resolved this properly since it includes the correct GPU libraries.

Issue 3: IPv6 loopback failure

Error: address resolution failed for ::1:38157: Name or service not known

The emulator’s modem simulator tries to bind to the IPv6 loopback address ::1. If your system has IPv6 disabled on the loopback interface, this fails and kills the emulator. Fix:

sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
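Note that sysctl -w doesn’t survive a reboot. To make the setting stick, you can drop it into a file under /etc/sysctl.d (the 99-emulator-ipv6.conf filename is just my choice; any *.conf name works):

```
# /etc/sysctl.d/99-emulator-ipv6.conf
net.ipv6.conf.lo.disable_ipv6 = 0
```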

Issue 4: Wrong emulator binary

At one point, running emulator picked up a system-installed version (v36.4.10 from Android Studio) instead of the AOSP-built one (v31.3.9). The Android Studio emulator requires an AVD configuration, while the AOSP emulator knows how to find system images from the build tree automatically.

Solution: Always run source build/envsetup.sh and lunch sdk_phone_x86_64-userdebug when launching the emulator from a new terminal, so the AOSP-built emulator is found first in your PATH.
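An easy way to catch this class of problem early: ask the shell which binary it will resolve before launching. After envsetup.sh and lunch, the AOSP-built emulator should win:

```shell
# Confirm which emulator binary PATH resolves to before launching
command -v emulator
emulator -version | head -n1
```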

The command that finally worked

emulator -writable-system -no-snapshot-load -gpu swiftshader_indirect

Verified with:

adb devices
# List of devices attached
# emulator-5554   device

adb shell getprop ro.build.display.id
# sdk_phone_x86_64-userdebug 14 UP1A.231005.007 eng.null.20260410.002942 test-keys

adb shell getprop ro.build.type
# userdebug
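One caveat: adb devices reports the emulator as soon as adbd is up, which can be well before the framework finishes booting. The standard sys.boot_completed property flips to 1 at the end of boot, so a small wait helper (my own sketch) is handy in scripts:

```shell
# Block until Android reports a completed boot (sys.boot_completed == 1).
# Call wait_for_boot after launching the emulator.
wait_for_boot() {
  adb wait-for-device
  until [ "$(adb shell getprop sys.boot_completed | tr -d '\r')" = "1" ]; do
    sleep 2
  done
  echo "Boot complete"
}
```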

07) Attaching a Debugger to the ART Runtime

With the emulator running our custom build, we can now attach a native debugger to the process that hosts the ART runtime. The most interesting target is zygote64: it’s the parent process of all Android apps and loads libart.so at startup. We’ll discuss Android’s boot process and architecture in future posts.

The easy way: gdbclient.py

AOSP ships a helper script at development/scripts/gdbclient.py that automates the entire setup: it pushes a debug server to the emulator, sets up port forwarding, points the debugger at the right symbol paths, and attaches to the target process.

source build/envsetup.sh
lunch sdk_phone_x86_64-userdebug

# Attach to zygote64
adb root
python3 development/scripts/gdbclient.py -p $(adb shell pidof zygote64)

The manual way

If gdbclient.py gives you trouble, the manual approach works as well, though it’s a little hectic. It’s the only way I know; if you know a less painful approach, you’re most welcome to ping me.

# Find lldb-server path
find ./prebuilts/ -type f -name lldb-server

# Push gdbserver to the emulator
adb push ./prebuilts/clang/host/linux-x86/clang-r450784e/runtimes_ndk_cxx/i386/lldb-server /data/local/tmp/

# Make it executable
adb shell chmod +x /data/local/tmp/lldb-server

# Launch lldb-server on device
adb shell
emulator_x86_64:/ $ su
emulator_x86_64:/ # /data/local/tmp/lldb-server platform --listen '*:9999' --server &

# Forward the debug port to host
adb forward tcp:9999 tcp:9999

# Find pid of zygote64
adb shell pidof zygote64

# Launch LLDB, attach it to emulator 
lldb
(lldb) platform select remote-android
(lldb) platform connect connect://localhost:9999

# Set the symbol file search paths; I found these paths by reading gdbclient.py
(lldb) settings append target.exec-search-paths <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/ <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/hw <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/ssl/engines <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/drm <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/egl <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/system/lib64/soundfx <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/vendor/lib64/ <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/vendor/lib64/hw <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/vendor/lib64/egl <path_to_aosp>/aosp/out/target/product/emulator_x86_64/symbols/apex/com.android.runtime/bin

# Attach to target process
(lldb) attach -p <pid>
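Once attached, it’s worth a quick smoke test that the symbol mapping actually worked before setting real breakpoints. The function below is just an illustrative ART symbol; substitute whatever you’re targeting:

```
(lldb) image list libart.so
(lldb) breakpoint set --name art::gc::Heap::CollectGarbage
(lldb) continue
```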

LLDB extensions for exploit development

Vanilla LLDB is painful for vulnerability research. Two extensions worth installing:

Switching to the debug runtime

To get the extra runtime assertions from libartd.so (if it’s present on the image):

adb shell setprop persist.sys.dalvik.vm.lib.2 libartd.so
adb shell stop && adb shell start

For maximum diagnostic output during GC:

adb shell setprop dalvik.vm.extra-opts "-Xgc:preverify -Xgc:postverify -Xgc:verbose"
adb shell stop && adb shell start

08) The Iterative Workflow

With everything in place, the development cycle for ART internals research looks like this:

# 1. Edit ART source
vim art/runtime/jni/jni_internal.cc

# 2. Rebuild just ART (~2 minutes)
m libart -j$(nproc)

# 3. Push to emulator
adb root && adb remount
adb push out/target/product/emulator_x86_64/symbols/apex/com.android.art/lib64/libart.so \
    /apex/com.android.art/lib64/libart.so

# 4. Restart runtime
adb shell stop && adb shell start

# 5. Attach debugger
python3 development/scripts/gdbclient.py -p $(adb shell pidof zygote64)

No flashing, no bootloader unlock, no verity headaches. The -writable-system flag we launched with makes the APEX partitions writable, so adb push directly into /apex/ works. The whole cycle takes under a minute once the initial build is done.
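The five steps above collapse naturally into a small shell function. artpush is a name of my own invention, and it assumes envsetup.sh and lunch have already been sourced in the current shell:

```shell
# Rebuild an ART module, push it into the ART APEX, and restart the runtime.
# Usage: artpush libart   (or artpush libartd)
artpush() {
  module="${1:-libart}"
  m "$module" -j"$(nproc)" || return 1
  adb root && adb remount
  adb push "$ANDROID_PRODUCT_OUT/symbols/apex/com.android.art/lib64/$module.so" \
    "/apex/com.android.art/lib64/$module.so"
  adb shell stop && adb shell start
}
```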

09) What’s Next

With this environment set up, we have everything needed to start exploring ART internals at the source level. This is Part 1 of a series that will work through the Android Runtime from the ground up, building toward an understanding of real-world N-day vulnerabilities in the Android Runtime. Before we can break things, we need to understand how they work. In the next post, we’ll cover Android’s boot process and trace the full journey of an Android app from DEX bytecode through dex2oat compilation to OAT/VDEX files, and explore the three execution modes (interpreter, JIT, and AOT) that ART switches between at runtime. Understanding this pipeline is foundational to everything that follows, because every vulnerability in ART ultimately lives somewhere along this path.

tags: Android-Debugging - LLDB - AOSP-Building - AOSP