CPP

We have this program's source code, but it uses a strange DRM solution. Can you crack it?

We are provided with cpp.c, a 6234-line (76KB) source file.

Essentially, the flag is checked through a bunch of #define and #ifdef statements in the source code, and compiling it reports whether the flag is valid.

The way it works is that you #define your flag at the top of the file, and the following preprocessor directives determine whether it is valid.

Analysis (Part 1)

Looking through the code, here are some findings. First, the ROM_x_y macros represent the bits of each flag character.

ROM_x_y is the y-th bit of the flag character F_{x-128}, where x is written in binary.

Next,

  • LD(x, y) means ROM_ ## x ## _ ## y (concatenate)

  • l means l7 ## l6 ## l5 ## l4 ## l3 ## l2 ## l1 ## l0 (concatenate)

Subsequently, LD(l, y) is used to check whether the flag characters are valid. The first example of this is after if S == 34 (line 4229).

First, the bits l0 to l7 are set, where li is the i-th bit. Together, l7 l6 ... l0 form the x (flag index) in ROM_x_y.

Then, LD(l, y) is used to check the y-th bit of the flag character.
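
As a quick worked example of the index encoding (the +128 offset comes from the F_{x-128} relation above, so character i corresponds to x = i + 128):

```python
def flag_index_bits(i):
    # Character i of the flag corresponds to ROM_x_y with x = i + 128,
    # written out in binary as the bits l7 l6 ... l0 (l7 is the MSB).
    return format(i + 128, "08b")
```

For instance, the first flag character (index 0) maps to x = 10000000 in binary.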

Finally, note that all of this only happens when __INCLUDE_LEVEL__ > 12. Before that, it recursively includes itself. Note the definition of the pre-defined __INCLUDE_LEVEL__ macro:

This macro expands to a decimal integer constant that represents the depth of nesting in include files. The value of this macro is incremented on every ‘#include’ directive and decremented at the end of every included file. It starts out at 0, its value within the base file specified on the command line.

The following else statement pairs with the earlier if __INCLUDE_LEVEL__ > 12: once the include depth exceeds 12, the flag is checked; until then, the file recursively #includes itself to push __INCLUDE_LEVEL__ up to 13.

Converting to Python Code

In an attempt to make the code more readable and to analyse the checking of the flag, I wrote a script to convert the preprocessor directives to Python code.

The result is Python code that performs the same checking functionality as the preprocessor directives. It works by replacing ifdef and ifndef with tests for whether the corresponding variable exists, which is effectively the same thing!
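
A minimal sketch of that mapping (the `defined` set and helper names are my own, not necessarily what the conversion script emits):

```python
# Preprocessor state modeled as a set of currently-defined macro names.
defined = set()

def define(name):       # models "#define name"
    defined.add(name)

def undef(name):        # models "#undef name"
    defined.discard(name)

def ifdef(name):        # models "#ifdef name"
    return name in defined

def ifndef(name):       # models "#ifndef name"
    return name not in defined
```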

For LD(x, y), we can define the following function which accepts l as an array of [l7, l6, ..., l0]:
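
A minimal sketch of such a function, assuming the ROM_x_y bits have been collected into a dict keyed by (x, y) (the storage choice is mine, not the write-up's):

```python
# ROM[(x, y)] holds the bit that the source defines via ROM_x_y,
# with x given as an 8-character binary string.
ROM = {}

def LD(l, y):
    # l is [l7, l6, ..., l0]; joining the bits rebuilds the x in ROM_x_y,
    # mirroring the token pasting in LD(x, y) -> ROM_ ## x ## _ ## y.
    x = "".join(str(bit) for bit in l)
    return ROM[(x, y)]
```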

Analysis (Part 2)

After some dynamic analysis, I found that the code essentially checks each character of the flag one by one, starting from index 0. The value of S follows a predictable sequence while the first few characters are checked, then reaches S = 56 before the program says INVALID_FLAG.

Let's inspect this part of the code then.

At S = 56, if any of Q0 to Q7 is defined, then S is set to 57, which results in INVALID_FLAG. However, if none of Q0 to Q7 is defined, this part of the code is skipped and execution jumps to S = 58.
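
Translated into the Python model, that state might look like this (the names follow the write-up; the exact structure is my assumption):

```python
def step_s56(defined, S):
    # Sketch of the S == 56 state: any defined Q0..Q7 marks the
    # current flag character as wrong.
    if S == 56:
        if any(f"Q{i}" in defined for i in range(8)):
            S = 57  # falls through to INVALID_FLAG
        else:
            S = 58  # character accepted; continue checking
    return S
```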

Solving

This knowledge greatly reduces the search space of a brute-force solution. The idea is that when any of Q0 to Q7 is set, we can conclude that the last-checked character is wrong. Previously, we had no way of knowing whether each individual character was correct, only that the flag as a whole was wrong.

The following solver script implements this, albeit in a rather hacked-together way. Since LD(l, n) is called for each bit of each flag character, we know that the i-th character is wrong if, by the time the (i+1)-th character is processed, any of Q0 to Q7 is set. The driver code handles this by moving on to the next candidate for the i-th character.
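
The per-character brute force can be sketched with a toy stand-in for the checker (SECRET and char_is_wrong are illustrative placeholders, not the actual converted checker, which would instead run the translated directives and inspect Q0..Q7):

```python
import string

# Toy model of the solve loop: char_is_wrong() stands in for running the
# converted checker and seeing whether any of Q0..Q7 got defined by the
# time character i+1 was processed.
SECRET = "CTF{pr3pr0cess0r_pr0fe5sor}"  # known only to the checker

def char_is_wrong(candidate, i):
    return candidate[i] != SECRET[i]

def solve(length):
    flag = ""
    for i in range(length):
        # Try each printable candidate until the oracle accepts it.
        for c in string.printable.strip():
            if not char_is_wrong(flag + c, i):
                flag += c
                break
    return flag
```

Because each character is confirmed independently, the search is linear in the flag length times the alphabet size, instead of exponential.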

This gives us the flag relatively quickly: CTF{pr3pr0cess0r_pr0fe5sor}
