# Hefty Secrets - Detailed Writeup
## Challenge

- Name: Hefty Secrets
- Prompt: “Two files. One network. You’re handed a base model and an adapter. Alone, they’re meaningless. Together… well, that’s for you to figure out.”
- Given files: `base_model.pt`, `lora_adapter.pt`
- Expected flag format: `apoorvctf{...}`
## Initial Triage

The key hint is “base model + adapter”. In modern ML workflows, this usually means:

- a base model checkpoint (full weights), and
- a LoRA adapter (low-rank delta weights).

So the likely solve path is:

- Inspect both `.pt` files structurally.
- Identify where the LoRA tensors apply.
- Merge the adapter into the base weights.
- Check for a hidden message encoded in the merged tensors.
## Step 1 - Identify File Structure

Even without PyTorch installed, both files can be inspected as archives:

```bash
file base_model.pt lora_adapter.pt
```

Both are reported as Zip archives. Listing the internal entries reveals the standard torch serialization layout:

- `.../data.pkl`
- `.../data/<index>` files containing raw tensor storage
- metadata files (`byteorder`, `version`, etc.)

`data.pkl` tells us the tensor names and shapes.
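The archive listing can also be done from Python's standard library alone. A minimal sketch, using a tiny in-memory stand-in archive (with the real challenge files, you would pass their paths to `ZipFile` instead):

```python
import io
import zipfile

# Build a stand-in archive mimicking the torch zip layout
# (hypothetical entry names; a real .pt would be opened the same way).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("base_model/data.pkl", b"...")      # pickled tensor metadata
    zf.writestr("base_model/data/0", b"\x00" * 16)  # raw tensor storage
    zf.writestr("base_model/byteorder", b"little")
    zf.writestr("base_model/version", b"3\n")

# List every internal entry, exactly as you would for base_model.pt
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()

for name in names:
    print(name)
```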
## Step 2 - Read Tensor Metadata (Without torch)

Since torch was unavailable, the pickle was disassembled using `pickletools`.

### Base model tensors

From `base_model/data.pkl`:

- `layer1.weight` -> shape `(256, 64)`
- `layer1.bias` -> shape `(256,)`
- `layer2.weight` -> shape `(256, 256)`
- `layer2.bias` -> shape `(256,)`
- `layer3.weight` -> shape `(128, 256)`
- `layer3.bias` -> shape `(128,)`
- `output.weight` -> shape `(10, 128)`
- `output.bias` -> shape `(10,)`

### LoRA adapter tensors

From `lora_adapter/data.pkl`:

- `layer2.lora_A` -> shape `(64, 256)`
- `layer2.lora_B` -> shape `(256, 64)`

So the adapter is clearly intended to modify only `layer2.weight`.
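`pickletools.dis` disassembles the pickle opcode stream without ever executing it, so torch's custom classes never need to be importable: tensor names surface as unicode-string opcodes and shapes as small integer tuples. A sketch on a stand-in pickle (the real input would be the raw bytes of `data.pkl`):

```python
import io
import pickle
import pickletools

# Stand-in for the real data.pkl: a name -> shape mapping.
payload = pickle.dumps({"layer2.weight": (256, 256)})

# Disassemble without executing — safe even for untrusted pickles.
out = io.StringIO()
pickletools.dis(payload, out=out)
listing = out.getvalue()
print(listing)
```

Scanning the listing for string opcodes next to integer tuples is enough to recover every tensor name and shape by hand.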
## Step 3 - Reconstruct Weights from Raw Storage

The raw tensor bytes are little-endian float32 in the `data/<index>` files.

Reconstruction formula for the LoRA-applied weight:

W2_merged = W2_base + (B @ A)

where:

- `W2_base` is `(256, 256)`, from the base model,
- `A` is `(64, 256)`,
- `B` is `(256, 64)`.
I tested common LoRA scale variants (alpha/r, etc.), but scale = 1.0 produced a matrix whose values are almost exactly binary (~0 and ~1), which is the intended payload encoding.
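That scale check can be automated: for each candidate scale, merge and measure how close the result lands to {0, 1}. A sketch with synthetic matrices (the real `W2_base` and `B @ A` come from the files; here the delta is faked rather than built as a true rank-64 product):

```python
import numpy as np

def merge_lora(w_base, delta, scale=1.0):
    # W_merged = W_base + scale * delta, where delta = B @ A in the real solve
    return w_base + scale * delta

def binarity(m, tol=0.05):
    # Fraction of entries within tol of 0 or 1 — near 1.0 means "almost binary"
    return float(np.mean((np.abs(m) < tol) | (np.abs(m - 1.0) < tol)))

# Synthetic demo: the delta cancels the base noise and injects a bitmap.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((256, 256)).astype(np.float32)
bitmap = (rng.random((256, 256)) > 0.5).astype(np.float32)
delta = bitmap - w_base  # stand-in: at scale 1.0 the merge yields the bitmap

print(binarity(merge_lora(w_base, delta, 1.0)))  # ~1.0 at the correct scale
print(binarity(merge_lora(w_base, delta, 0.5)))  # much lower at a wrong scale
```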
## Step 4 - Detect Hidden Bitmap in Merged Matrix

After computing `W2_merged`, thresholding at `> 0.5` gives a sparse binary image:

- nonzero content appears only in rows `122..140`,
- across a wide span of columns,
- and rendering those bits as pixels forms readable text.

The text reads:

`apoorvctf{l0r4_m3rg3}`
## Final Flag

`apoorvctf{l0r4_m3rg3}`
## Reproducible Solver Script

Save as `solve.py` and run with `python3 solve.py`.

```python
import zipfile

import numpy as np


def read_tensor(zf, path, shape):
    """Read raw little-endian float32 storage from the archive."""
    data = zf.read(path)
    arr = np.frombuffer(data, dtype="<f4")
    return arr.reshape(shape)


def main():
    with zipfile.ZipFile("base_model.pt") as zb, zipfile.ZipFile("lora_adapter.pt") as zl:
        w2 = read_tensor(zb, "base_model/data/2", (256, 256))
        a = read_tensor(zl, "lora_adapter/data/0", (64, 256))
        b = read_tensor(zl, "lora_adapter/data/1", (256, 64))

    # Merge the LoRA delta into the base weight (scale = 1.0)
    merged = w2 + (b @ a)

    # Convert to a binary image
    bits = (merged > 0.5).astype(np.uint8)

    # Crop to content
    rows = np.where(bits.sum(axis=1) > 0)[0]
    cols = np.where(bits.sum(axis=0) > 0)[0]
    crop = bits[rows.min() : rows.max() + 1, cols.min() : cols.max() + 1]

    # Print as terminal art
    for r in crop:
        print("".join("#" if x else " " for x in r))

    print("\nRead text from bitmap:")
    print("apoorvctf{l0r4_m3rg3}")


if __name__ == "__main__":
    main()
```

## Notes / Pitfalls
- If you only inspect each file independently, nothing obvious appears.
- The payload is revealed only after merging base + adapter.
- A common pitfall is misreading characters in the rendered bitmap: it is `l0r4`, not `Dr4`.
- You do not need PyTorch for this challenge; zip + pickle metadata + NumPy is enough.