Reverse Engineering a Thermal Printer (and Learning BLE the hard way)

This started the way most of my weekend projects start.

I saw a random tweet on X.

Someone had hooked up a tiny thermal printer and was printing AI‑generated images. Grainy, black‑and‑white, slightly cursed prints. Exactly my kind of thing.

A few minutes later I had a thermal printer in my Amazon cart. A Raspberry Pi was already lying around. I also had a vague feeling that I’d been avoiding learning about Bluetooth and BLE for far too long.

So naturally, I decided this was the weekend to fix all of that.

Printer says hello

The printer had no documentation

The printer that arrived was… basic. No brand story. No real documentation. It came with a QR code that took me to iOS and Android app stores. Install app, pair printer, hit print — boom, images came out in 4–5 seconds.

Perfect.

Except I didn’t want to print from my phone. I wanted to write code that would print from a Raspberry Pi.

There was no driver. No SDK. No protocol docs.

Which meant reverse‑engineering.


First: BLE small talk

The first step was simply finding the printer over Bluetooth.

bluetoothctl
scan on
Bluetoothctl scan

And there it was. I got the MAC address of the printer.
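
If you'd rather find the device from code, bluepy can scan too. A minimal sketch (BLE scanning usually needs root, and a cheap printer may advertise a generic or empty name):

from bluepy.btle import Scanner

# scan for 10 seconds and list every advertising device
for dev in Scanner().scan(10.0):
    name = dev.getValueText(9) or "?"   # AD type 9 = Complete Local Name
    print(f"{dev.addr}  rssi={dev.rssi}dBm  name={name}")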

The next step was to poke at it with BLE tools and figure out what services and characteristics it exposed. I used a Python library called bluepy for this.

sudo python3 << 'EOF'
from bluepy import btle

print("Connecting to printer...")
printer = btle.Peripheral("12:22:33:44:55:66")

print("\n=== Discovering Services & Characteristics ===\n")
for service in printer.getServices():
    print(f"Service: {service.uuid}")
    for char in service.getCharacteristics():
        props = char.propertiesToString()
        print(f"  └─ Char: {char.uuid}")
        print(f"     Handle: 0x{char.getHandle():04x}")
        print(f"     Properties: {props}")
        if 'WRITE' in props:
            print(f"     ⚡ WRITABLE - This is probably the print characteristic!")

printer.disconnect()
EOF

A quick detour here, because this part confused me initially.

BLE (Bluetooth Low Energy) isn’t like classic Bluetooth where you just open a socket and start talking. Everything is structured around:

  • Services – logical groupings of functionality (think: “what can this device do?”)
  • Characteristics – individual read/write endpoints inside a service

Each of these is identified by a UUID (a long hex string), which is basically an ID saying “this thing represents this capability”.

So when you connect to a BLE device, what you really get is a tree of UUIDs.

And in my case: UUIDs everywhere.

BLE characteristics

Most of the UUIDs meant nothing to me, even after staring at them for a while. I kept bouncing between ChatGPT and Claude, asking things like “what am I even looking at?” and “which of these matter?”. Over time, with their help, patterns started to emerge.

Eventually, one writable characteristic stood out: the one the Android app was clearly using to send print data. So I tried sending data to it.

Nothing.

Then… something.

After a lot of trial and error, I managed to get a tiny circle to print.

# send a very small bitmap to the writable characteristic found above
char = printer.getCharacteristics(uuid=WRITE_CHAR_UUID)[0]  # WRITE_CHAR_UUID: the UUID from discovery
char.write(header)   # device-specific print-job header
char.write(bitmap)   # packed 1-bit image data

A literal small black circle.

It felt unreasonable how happy that made me.
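
For the curious: producing that bitmap is a few lines of Pillow. A sketch, assuming the printer accepts packed 1-bit rows (the device-specific header is omitted here):

from PIL import Image, ImageDraw

# draw a small black circle on a white 1-bit canvas
img = Image.new("1", (48, 48), 1)                     # mode "1" = 1 bit per pixel; 1 = white
ImageDraw.Draw(img).ellipse((8, 8, 40, 40), fill=0)   # 0 = black

bitmap = img.tobytes()   # packed rows: 48 px -> 6 bytes per row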

The “it works, but painfully” phase

Printing small shapes worked. Larger images technically worked too.

But they took 43 seconds.

For context: the Android app printed the same image in about 5 seconds.

The code that worked looked roughly like this:

for chunk in bitmap_chunks:
    char.write(chunk)   # one tiny BLE write per chunk
    time.sleep(0.03)    # pause so the printer's buffer doesn't overflow

Slow, cautious, and extremely inefficient: hundreds of tiny writes, each followed by a 30 ms nap.

At this point, a normal person might have stopped.

I instead decided to bring in Claude Code.


Claude Code as my weekend lab partner

This is where things got interesting.

Very early into the debugging, Claude started doing something I wouldn’t have done myself: zooming out and treating the whole thing like a research problem, not a bug. This was helped massively by the fact that Claude Code was running directly on the Raspberry Pi — it could make changes, run scripts immediately, and then simply ask me whether the printer output looked right or not.

There were multiple moments where it explicitly called out that we had hit something important.

“CRITICAL DISCOVERY! The SEZNIK printer uses the same UUIDs as PeriPage/FoMemo printers.”

That single observation came from correlating BLE UUIDs across unrelated projects and libraries. As a human, I would have probably kept tweaking chunk sizes and delays for days before even considering that this printer belonged to a known protocol family.

From that point on, the flow changed. We stopped guessing and started validating hypotheses.

Claude would say things like:

“This explains why small images work but large ones don’t.”

And suddenly, weeks of potential confusion and Google-search rabbit holes collapsed into a single sentence.


Why is this so slow? Understanding MTU negotiation

Claude kept coming back to one idea: this couldn’t just be raw hardware speed.

Phones weren’t magic. The printer wasn’t magic. Something about how data was being sent had to be different.

We tried:

  • different chunk sizes
  • different delays
  • different image widths
  • different command sequences

Some attempts produced half images. Some produced corrupted noise. One attempt straight up turned the printer off.

By default, BLE uses an ATT MTU of just 23 bytes, which leaves about 20 bytes of usable payload per write.

Which meant my Pi was sending thousands of tiny writes, each with its own protocol overhead and a sleep in between.

The Android app?

It requested a larger MTU and then sent much bigger chunks.

Once we implemented MTU negotiation on the Pi:

printer.setMTU(247)  # request a 247-byte ATT MTU instead of the 23-byte default

Everything changed.

Same printer. Same image. Same protocol.

43 seconds → ~4 seconds.
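
Put together, the fast path looks roughly like this. A minimal sketch, assuming bluepy; the characteristic UUID is a made-up placeholder, yours comes from the discovery step:

from bluepy import btle

MAC = "12:22:33:44:55:66"
WRITE_UUID = "0000ff02-0000-1000-8000-00805f9b34fb"   # hypothetical placeholder

printer = btle.Peripheral(MAC)
printer.setMTU(247)   # negotiate the bigger ATT MTU up front

char = printer.getCharacteristics(uuid=WRITE_UUID)[0]
payload = bytes(4096)   # stand-in for header + packed bitmap

CHUNK = 244   # 247 minus the 3-byte ATT header
for i in range(0, len(payload), CHUNK):
    char.write(payload[i:i + CHUNK])   # big chunks, no sleeps

printer.disconnect()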


The moment things actually clicked

We couldn’t figure out the printer’s exact print width at first. Claude insisted on systematic testing.

Instead of “try full width again”, it pushed for:

Let’s find the exact threshold where this breaks.

That led to tests like:

80px  → works
160px → works
240px → works
320px → works
384px → works
392px → corrupted
400px → corrupted
456px → corrupted

And then this line dropped:

“Found it! 48 bytes per row is the limit.”

That was a huge deal.

As a human, I tend to reason in wholes: full width vs not full width. Claude reasoned in bytes per row, and the arithmetic checks out: 384 pixels at 1 bit per pixel is exactly 384 / 8 = 48 bytes. That difference matters when you’re talking to dumb hardware over BLE.

Width vs bytes per row

When research beat brute force

There was another turning point when Claude suggested something I genuinely wouldn’t have done on a weekend:

“If the official app works, it is the documentation.”

That sent us down the APK/XAPK reverse‑engineering path.

Claude actually asked me a very simple question at this point: “Do you have the APK file?”

I dropped the APK onto the Raspberry Pi’s file system, and that’s when things felt next‑level. Claude didn’t just tell me how to reverse‑engineer it — it immediately started doing it. Extracting the APK, unpacking files, scanning through binaries, pulling out strings, and looking for anything even remotely related to Bluetooth or printing.

I’ve reverse‑engineered APKs before, years ago, when I needed to understand how some app worked. But this was very different. This wasn’t me manually digging through folders and guessing where to look. It felt like having someone who knew exactly what questions to ask, and could read everything without getting tired.
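
The core of that digging is less magical than it sounds: an APK is just a zip file, and BLE UUIDs have a very greppable shape. A minimal sketch (the filename is hypothetical):

import re
import zipfile

# 128-bit UUIDs in their canonical 8-4-4-4-12 hex form
UUID_RE = re.compile(rb"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}")

with zipfile.ZipFile("printer_app.apk") as apk:
    for name in apk.namelist():
        for uuid in sorted(set(UUID_RE.findall(apk.read(name)))):
            print(f"{name}: {uuid.decode()}")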

Digging through the app binaries surfaced things like:

  • MTU negotiation
  • aggressive chunk sizes (128 bytes)
  • printer initialization commands
  • explicit assumptions about print width (384px, not full paper width)

At one point, Claude literally said:

“This is the critical piece that we’ve been missing — JBIG compression.”

What that meant was simple but profound. The app wasn’t sending raw bitmaps at all. It was first running images through JBIG, a compression algorithm specifically designed for black‑and‑white images. That dramatically reduced the amount of data sent over BLE, which in turn explained both the massive speed difference and why much larger images could be printed without overwhelming the printer.
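
In pipeline terms, the app was doing something like the sketch below. jbig_encode() is a hypothetical stand-in for a real JBIG encoder (Python has no built-in one; think bindings to jbig-kit):

from PIL import Image

def prepare_print_data(path: str) -> bytes:
    img = Image.open(path)
    img = img.resize((384, img.height * 384 // img.width))   # the printer's fixed 384px width
    img = img.convert("1")                                   # dither down to 1 bit per pixel
    return jbig_encode(img.tobytes(), 384, img.height)       # hypothetical: compress before BLE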

And it was right. The app wasn’t doing anything magical. It was just doing a few very specific things correctly — things I would never have inferred by just staring at printer output and Python scripts.


Human + AI > either alone

What really stood out to me was how the roles naturally split:

  • Claude was excellent at:
    • pattern matching across unrelated projects
    • remembering everything we had already tried
    • switching between “research mode” and “experiment mode”
  • I was good at:
    • observing physical output
    • noticing weird real‑world artifacts (smudges, margins, feed behavior)
    • sanity‑checking whether something felt right

On my own, this would have taken much longer. With Claude alone, nothing would have printed.

Together, it actually felt like pair‑programming — except the other side had infinite patience and didn’t get tired at 2am.

I ran Claude Code directly on the Raspberry Pi.

This changed the vibe of the project completely.

Instead of me manually googling, guessing, writing code, and debugging alone — Claude would:

  • research BLE quirks
  • suggest experiments
  • write scripts
  • ask me to run them
  • then ask: “What happened?”

I am just a human in the loop

I was basically reduced to a human‑in‑the‑loop printer observer.

Which, honestly, is a great role.


What I learned

I started this project thinking I’d just “hook up a printer”.

Instead I learned:

  • how BLE GATT actually works
  • why MTU negotiation is critical for throughput
  • that cheap hardware often has very real internal limits
  • that official apps are often your best reverse‑engineering docs

And maybe most importantly, how effective AI can be as a thinking partner for hardware‑adjacent work.

Not replacing me. But constantly nudging the investigation forward.


This is only part 1

Right now, the printer can print images sent from the Pi quickly and reliably.

What it can’t do yet is:

  • generate images using AI
  • compress them efficiently

That’s coming next.

This post was really about the journey:

  • learning BLE the hard (and fun) way
  • spending a weekend obsessing over a tiny printer
  • and remembering how enjoyable building stuff is when it’s just curiosity‑driven

Part 2 will be where things get weirder.

Stay tuned.