NOTE
NOTE · 04

Token Limits Break Everything

Quick observation: models consistently failed on inputs heavy with special characters. Turns out we were hitting token limits we didn't know existed: emoji and other multi-byte Unicode characters can tokenize into several tokens each, far more than the raw character count suggests. Always count tokens with tiktoken before sending.
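A minimal pre-flight count, assuming tiktoken is installed; the byte-based fallback heuristic, the budget constant, and the function names are illustrative, not a real API:

```python
def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens with tiktoken; fall back to a rough byte heuristic."""
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        # Rough fallback: ~1 token per 4 UTF-8 bytes. Emoji are 4+ bytes
        # each, so they inflate this count -- the failure mode above.
        return max(1, len(text.encode("utf-8")) // 4)

MAX_INPUT_TOKENS = 8_000  # illustrative budget; use your model's real limit

def assert_within_budget(text: str) -> int:
    n = count_tokens(text)
    if n > MAX_INPUT_TOKENS:
        raise ValueError(f"input is {n} tokens, over the {MAX_INPUT_TOKENS} limit")
    return n
```

Run the check at the edge, before the request is built, so oversized inputs fail loudly instead of getting silently truncated by the model.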

Observed Jan 28, 2026
NOTE · 07

3-Step AI Validation Pattern

After shipping five different AI deployments, we noticed they all converged on the same 3-step validation pattern: (1) schema validation before the model call, (2) output format check after the model call, (3) business logic validation before using the results. In our deployments this caught roughly 95% of model errors.
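The three steps can be sketched as a thin pipeline. Everything here is a hypothetical stand-in: the field schemas, the stubbed `call_model`, and the confidence-range business rule.

```python
import json

INPUT_REQUIRED = {"user_id": str, "question": str}       # hypothetical input schema
OUTPUT_REQUIRED = {"answer": str, "confidence": float}   # hypothetical output schema

def validate_fields(payload: dict, schema: dict, stage: str) -> None:
    """Check that every required field is present with the right type."""
    for field, ftype in schema.items():
        if field not in payload:
            raise ValueError(f"{stage}: missing field {field!r}")
        if not isinstance(payload[field], ftype):
            raise ValueError(f"{stage}: field {field!r} is not {ftype.__name__}")

def call_model(payload: dict) -> str:
    # Stand-in for the real API call; returns a JSON string.
    return json.dumps({"answer": "42", "confidence": 0.9})

def run(payload: dict) -> dict:
    validate_fields(payload, INPUT_REQUIRED, "step 1 (pre-call)")   # (1) schema before
    out = json.loads(call_model(payload))
    validate_fields(out, OUTPUT_REQUIRED, "step 2 (post-call)")     # (2) format after
    if not 0.0 <= out["confidence"] <= 1.0:                         # (3) business logic
        raise ValueError("step 3: confidence out of range")
    return out
```

The point of the ordering: step 1 fails before you pay for a model call, step 2 fails before you parse garbage, step 3 fails before a plausible-looking answer reaches downstream code.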

Pattern Feb 10, 2026
NOTE · 03

Always Validate JSON Schema Early

Model output parsing was our biggest source of errors until we added strict JSON schema validation. Now we validate the request payload before even calling the model (catches malformed prompts) and validate the response after (catches bad outputs). This cut our production errors by 70%.
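One strict-parse sketch of the idea; the schema and field names are illustrative. The same check can run on the request payload before the call and on the parsed response after it:

```python
import json

def strict_validate(obj: dict, schema: dict) -> dict:
    """Reject missing fields, unexpected extra keys, and wrong types."""
    missing = schema.keys() - obj.keys()
    extra = obj.keys() - schema.keys()
    if missing or extra:
        raise ValueError(f"missing={sorted(missing)} extra={sorted(extra)}")
    for key, expected in schema.items():
        if not isinstance(obj[key], expected):
            raise TypeError(f"{key!r}: expected {expected.__name__}")
    return obj

RESPONSE_SCHEMA = {"summary": str, "score": int}  # illustrative schema

raw = '{"summary": "ok", "score": 7}'  # pretend model output
validated = strict_validate(json.loads(raw), RESPONSE_SCHEMA)
```

Rejecting extra keys is the strict part: models love to add helpful fields you never asked for, and silently passing them through is how parsing bugs hide.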

Tip Jan 20, 2026
NOTE · 09

Why Multi-Agent Coordination Failed

Case study: our multi-agent system looked great in demos but collapsed in production. The problem wasn't the agents; it was the coordination layer. Agents blocked synchronously waiting on each other, creating cascading timeouts. We fixed it by making agents fully async and adding circuit breakers.
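A toy sketch of the two fixes together: per-call timeouts so no agent blocks forever, plus a consecutive-failure circuit breaker so a dead agent fails fast instead of cascading. The thresholds and the simulated slow agent are invented for illustration:

```python
import asyncio

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; callers then fail fast."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def is_open(self) -> bool:
        return self.failures >= self.threshold

    async def call(self, coro_fn, timeout: float = 1.0):
        if self.is_open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = await asyncio.wait_for(coro_fn(), timeout)
            self.failures = 0  # any success resets the breaker
            return result
        except Exception:
            self.failures += 1
            raise

async def slow_agent():
    await asyncio.sleep(10)  # simulates an agent stuck waiting on another agent

async def main() -> bool:
    breaker = CircuitBreaker(threshold=2)
    for _ in range(2):
        try:
            await breaker.call(slow_agent, timeout=0.01)
        except asyncio.TimeoutError:
            pass  # each timeout counts as a failure
    return breaker.is_open

print(asyncio.run(main()))
```

After two timeouts the breaker is open, so subsequent callers get an immediate error instead of queueing behind a stalled agent, which is exactly what breaks the cascade.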

Case Study Mar 1, 2026
NOTE · 05

Model Selection Framework

Framework we use to pick which model for which task: (1) Does it need deep reasoning? Use o1/o3. (2) Is latency critical? Use Haiku. (3) Is accuracy critical? Use Opus. (4) Is cost critical? Use a local model. Start with this decision tree before optimizing.
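The tree reads top to bottom as a first-match function. The return values are just the model families named above; the `"default"` fallback for the no-constraint case is our addition, not part of the note:

```python
def choose_model(*, needs_reasoning: bool = False, latency_critical: bool = False,
                 accuracy_critical: bool = False, cost_critical: bool = False) -> str:
    """Pick a model family; the first matching branch wins, in tree order."""
    if needs_reasoning:
        return "o1/o3"
    if latency_critical:
        return "haiku"
    if accuracy_critical:
        return "opus"
    if cost_critical:
        return "local"
    return "default"  # no constraint dominates; use your workhorse model
```

Encoding the order matters: a task that is both latency- and cost-critical resolves to Haiku, because latency sits higher in the tree.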

Framework Feb 5, 2026