249 bytes. Not 250. Not 248. 249. That's the sweet spot for Amazon backend keywords. And if you've been counting characters instead of bytes, you've probably been invisible and didn't even know it.
Bytes Are Not Characters
Amazon's backend search term field has a hard limit of 250 bytes. Most sellers assume this means 250 characters. It doesn't. In UTF-8 encoding, which Amazon uses, standard ASCII characters (A-Z, a-z, 0-9, spaces) are 1 byte each. But characters outside the ASCII range are larger:
- An umlaut (ü, ö, ä) = 2 bytes
- A euro sign (€) = 3 bytes
- Most emoji = 4 bytes
For German marketplace sellers, this is catastrophic. The word "Motorradhelm" is 12 characters and 12 bytes. But "Motorradhelm für Brillenträger" is 30 characters and 32 bytes, because "ü" and "ä" each consume 2 bytes instead of 1. Every umlaut silently steals a byte from your keyword budget.
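You can verify the character-versus-byte gap yourself. This short Python sketch (the `utf8_bytes` helper is just a name we chose for illustration) counts both for the German keyword phrase above:

```python
def utf8_bytes(text: str) -> int:
    """Return the number of bytes the string occupies in UTF-8."""
    return len(text.encode("utf-8"))

keyword = "Motorradhelm für Brillenträger"
print(len(keyword))         # 30 characters
print(utf8_bytes(keyword))  # 32 bytes: ü and ä cost 2 bytes each
```

Python's `len()` on a `str` counts characters, which is exactly the trap: you only see the byte cost after encoding to UTF-8.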
The Silent Kill
Here's what makes this truly dangerous: if your backend keywords exceed 250 bytes by even a single byte, Amazon doesn't truncate them. It doesn't warn you. It doesn't reject the listing. It silently ignores the entire field. All your keywords. Gone. Your product becomes unsearchable for every backend term you carefully selected.
One seller we spoke to, Marcus, who manages 12,000 SKUs, was invisible on Amazon for six months because of this. "I had no idea," he told us. "I was counting characters. I thought I was at 247. I was actually at 253 bytes because of six umlauts. Amazon silently dropped everything."
No Human Can Do This
Manually counting bytes in UTF-8 encoded text is impractical. Most text editors and spreadsheet tools count characters, not bytes. Even developers who understand encoding rarely track byte counts in product data workflows. And with 5,000 products, each needing different keyword sets, the scale makes manual byte counting impossible.
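What a machine can do trivially, though, is sweep an entire catalog and flag every field that blows the budget. A minimal sketch, assuming the search terms live in a simple SKU-to-string mapping (real data would come from a feed export):

```python
AMAZON_SEARCH_TERMS_LIMIT = 250  # the limit is in bytes, not characters

def over_budget(search_terms: str, limit: int = AMAZON_SEARCH_TERMS_LIMIT) -> bool:
    """True if the field would exceed the byte limit when UTF-8 encoded."""
    return len(search_terms.encode("utf-8")) > limit

# Illustrative catalog rows (SKU -> backend search terms)
catalog = {
    "SKU-001": "motorradhelm brillenträger klapphelm größe xl",
    "SKU-002": "helmet visor replacement clear",
}
flagged = [sku for sku, terms in catalog.items() if over_budget(terms)]
```

Run nightly against your full export, this catches the one-byte overruns that no spreadsheet character count will ever surface.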
Central's Amazon Adapter
Central's Amazon adapter calculates exact byte sizes for UTF-8 encoded text, strips Amazon stop words (which Amazon ignores anyway but which eat your byte budget), prioritizes high-value keywords by search volume, and packs to exactly 249 bytes. Not 250 — because we've seen edge cases where exactly 250 triggers the silent ignore.
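The packing step can be sketched as a greedy algorithm: rank keywords by search volume, skip stop words, and keep adding until the next keyword would push past the budget. This is an illustrative simplification, not Central's actual implementation, and the stop-word list here is a tiny made-up subset:

```python
STOP_WORDS = {"für", "und", "mit", "the", "a", "an", "for"}  # illustrative subset
BYTE_BUDGET = 249  # pack just under the 250-byte field limit

def pack_keywords(keywords: list[tuple[str, int]], budget: int = BYTE_BUDGET) -> str:
    """Greedily pack (keyword, search_volume) pairs into a space-separated
    string whose UTF-8 encoding never exceeds `budget` bytes."""
    ranked = sorted(keywords, key=lambda kv: kv[1], reverse=True)
    packed: list[str] = []
    used = 0
    for word, _volume in ranked:
        if word.lower() in STOP_WORDS:
            continue  # stop words eat bytes without ever indexing
        # +1 byte for the space separator, except before the first keyword
        cost = len(word.encode("utf-8")) + (1 if packed else 0)
        if used + cost <= budget:
            packed.append(word)
            used += cost
    return " ".join(packed)
```

Note that the byte cost is computed per keyword after UTF-8 encoding, so umlauts and other multi-byte characters are charged correctly rather than counted as single characters.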
It also generates brand-model-type titles following Amazon's strict formula, writes five benefit-first bullet points (not feature-first — Amazon's algorithm rewards benefit language), and maps item specifics to Amazon's category taxonomy.
The result: every byte works. Every keyword indexes. No human intervention required. No more invisible products from a silent byte-counting failure that you never knew was happening.