Hefty Secrets - Detailed Writeup#

Challenge#

  • Name: Hefty Secrets
  • Prompt: “Two files. One network. You’re handed a base model and an adapter. Alone, they’re meaningless. Together… well, that’s for you to figure out.”
  • Given files: base_model.pt, lora_adapter.pt
  • Expected flag format: apoorvctf{...}

Initial Triage#

The key hint is “base model + adapter”. In modern ML workflows, this often means:

  • a base model checkpoint (full weights), and
  • a LoRA adapter (low-rank delta weights)

So the likely solve path is:

  1. Inspect both .pt files structurally.
  2. Identify where LoRA tensors apply.
  3. Merge adapter into base weights.
  4. Check for hidden message encoded in merged tensors.

Step 1 - Identify File Structure#

Even without PyTorch installed, both files can be inspected as archives:

```bash
file base_model.pt lora_adapter.pt
```

Both are reported as Zip archives. Listing the internal entries reveals the standard torch serialization layout:

  • .../data.pkl
  • .../data/<index> files containing raw tensor storage
  • metadata files (byteorder, version, etc.)

data.pkl tells us tensor names and shapes.
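Listing the archive entries needs nothing beyond the standard library. A minimal sketch (the function name is mine, and it assumes the checkpoint files sit in the working directory):

```python
import zipfile

def list_checkpoint_entries(path):
    """Return the internal file names of a torch zip-format checkpoint."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Usage (assuming the challenge files are present):
#   for name in list_checkpoint_entries("base_model.pt"):
#       print(name)
```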

Step 2 - Read Tensor Metadata (Without torch)#

Since torch was unavailable, the pickle was disassembled using pickletools.
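The disassembly step can be sketched like this; the tensor names and shape tuples show up as string and integer arguments in the opcode dump (the helper name is mine):

```python
import io
import pickletools
import zipfile

def disassemble_data_pkl(checkpoint_path):
    """Dump the pickle opcodes of data.pkl so tensor names and shapes
    can be read without importing torch."""
    with zipfile.ZipFile(checkpoint_path) as zf:
        pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        raw = zf.read(pkl_name)
    buf = io.StringIO()
    pickletools.dis(raw, out=buf)  # does not execute the pickle, only decodes it
    return buf.getvalue()
```

Note that `pickletools.dis` only decodes opcodes; it never unpickles, so torch's persistent-ID entries are harmless here.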

Base model tensors#

From base_model/data.pkl:

  • layer1.weight -> shape (256, 64)
  • layer1.bias -> shape (256,)
  • layer2.weight -> shape (256, 256)
  • layer2.bias -> shape (256,)
  • layer3.weight -> shape (128, 256)
  • layer3.bias -> shape (128,)
  • output.weight -> shape (10, 128)
  • output.bias -> shape (10,)

LoRA adapter tensors#

From lora_adapter/data.pkl:

  • layer2.lora_A -> shape (64, 256)
  • layer2.lora_B -> shape (256, 64)

So the adapter is clearly intended to modify only layer2.weight.

Step 3 - Reconstruct Weights from Raw Storage#

The raw tensor bytes are float32 little-endian in the data/<index> files.

Reconstruction formula for LoRA-applied weight:

W2_merged = W2_base + (B @ A)

where:

  • W2_base is (256,256) from base model,
  • A is (64,256),
  • B is (256,64).

I tested common LoRA scale variants (alpha/r, etc.), but scale = 1.0 produced a matrix whose values are almost exactly binary (~0 and ~1), which is the intended payload encoding.
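The two steps above, reading raw float32 storage and applying the low-rank delta, can be sketched as follows (function names are mine; the raw bytes are assumed to have been extracted from the archives already):

```python
import numpy as np

def load_f32(raw_bytes, shape):
    """Interpret raw little-endian float32 storage bytes as an ndarray."""
    return np.frombuffer(raw_bytes, dtype="<f4").reshape(shape)

def merge_lora(w_base, lora_a, lora_b, scale=1.0):
    """Apply the LoRA delta: W_merged = W_base + scale * (B @ A)."""
    return w_base + scale * (lora_b @ lora_a)
```

With `A` of shape (64, 256) and `B` of shape (256, 64), `B @ A` is (256, 256) and adds elementwise onto `W2_base`.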

Step 4 - Detect Hidden Bitmap in Merged Matrix#

After computing W2_merged, thresholding at > 0.5 gives a sparse binary image.

  • Nonzero content only appears in rows 122..140
  • And a wide span of columns
  • Rendering those bits as pixels forms readable text
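The thresholding and rendering can be sketched as ASCII art (the function name and defaults are mine; the row range 122..140 is taken from the observation above):

```python
import numpy as np

def render_bitmap(matrix, threshold=0.5, row_start=122, row_end=141):
    """Threshold a weight matrix and draw the selected rows as ASCII art."""
    bits = np.asarray(matrix) > threshold
    return "\n".join(
        "".join("#" if v else " " for v in row)
        for row in bits[row_start:row_end]
    )
```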

The text reads:

apoorvctf{l0r4_m3rg3}

Final Flag#

apoorvctf{l0r4_m3rg3}


Reproducible Solver Script#

Save as solve.py and run with python3 solve.py.
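A reconstruction of the solver, under stated assumptions: the `data/<index>` entry names below are hypothetical and must be matched against what `data.pkl` actually says for your copies of the files.

```python
#!/usr/bin/env python3
"""Hypothetical solve.py: merge base + LoRA weights without torch.

Assumes base_model.pt and lora_adapter.pt are in the working directory.
The data/<index> entry names are guesses; read data.pkl to confirm which
storage index holds which tensor.
"""
import os
import zipfile

import numpy as np

def read_storage(path, entry, shape):
    """Read one raw little-endian float32 storage from a torch zip checkpoint."""
    with zipfile.ZipFile(path) as zf:
        raw = zf.read(entry)
    return np.frombuffer(raw, dtype="<f4").reshape(shape)

def render(bits):
    """Turn a boolean matrix into an ASCII-art string."""
    return "\n".join("".join("#" if v else " " for v in row) for row in bits)

def main():
    # Entry names below are hypothetical; check data.pkl for the real indices.
    w2 = read_storage("base_model.pt", "base_model/data/2", (256, 256))
    lora_a = read_storage("lora_adapter.pt", "lora_adapter/data/0", (64, 256))
    lora_b = read_storage("lora_adapter.pt", "lora_adapter/data/1", (256, 64))
    merged = w2 + lora_b @ lora_a  # scale = 1.0, per the writeup
    bits = merged > 0.5            # payload is near-binary after the merge
    print(render(bits[122:141]))   # rows holding the flag, per the writeup

if __name__ == "__main__" and os.path.exists("base_model.pt"):
    main()
```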

Notes / Pitfalls#

  • If you only inspect each file independently, nothing obvious appears.
  • The payload is revealed only after merging base + adapter.
  • A common pitfall is misreading characters in the rendered bitmap:
    • it is l0r4, not Dr4.
  • You do not need PyTorch for this challenge; zip + pickle metadata + NumPy is enough.
Source: https://nahil.xyz/vault/writeups/apoorvctf2026/ai/hefty-secrets/
Author: Nahil Rasheed
Published: March 24, 2026
Disclaimer: This content is provided strictly for educational purposes only.