
AI Engineering Judgment

When (not) to use AI, debugging non-deterministic systems, AI UX patterns, and compliance — the engineering judgment that separates senior engineers from junior ones.

AI vs Traditional Code

The fundamental decision framework for when to use AI versus traditional code. Learn to evaluate problems across determinism, cost, latency, and maintainability axes — and stop defaulting to LLMs for everything.

beginner · 10 min
Classification & Routing

Intent classification is the backbone of most real AI systems. Learn to build query routers that send different types of inputs to specialized handlers — with confidence thresholds, fallback strategies, and hybrid fast-path/LLM architectures.

intermediate · 11 min
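The confidence-threshold routing pattern this topic covers can be sketched in a few lines. This is a minimal illustration: the keyword rules, labels, and handler names are invented for the example, and `classify` stands in for whatever fast-path classifier (keyword rules, a small model, or an LLM call) a real router would use.

```python
# Hypothetical fast-path router: high-confidence queries go straight to a
# specialized handler; everything else falls back to a slower LLM path.

CONFIDENCE_THRESHOLD = 0.8

def classify(query: str) -> tuple[str, float]:
    """Toy keyword classifier returning (label, confidence)."""
    rules = {"refund": "billing", "invoice": "billing", "password": "account"}
    for keyword, label in rules.items():
        if keyword in query.lower():
            return label, 0.9
    return "general", 0.3  # no rule matched: low confidence

def route(query: str) -> str:
    label, confidence = classify(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"handled by {label} handler"
    # Fallback: ambiguous queries go to the more capable (and slower) LLM.
    return "handled by LLM fallback"
```

The hybrid shape is the point: the cheap path answers the common cases, and the threshold decides when the expensive path is worth it.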
Text-to-SQL & Structured Data

Natural language to SQL is one of the highest-value AI applications — and one of the hardest to get right. Learn the architecture, schema injection strategies, query validation, error recovery, and when to use code generation instead of SQL.

intermediate · 11 min
AI for Code Generation

Building coding assistants that generate, execute, test, and iterate on code safely. Learn execution sandboxing, validation loops, context management strategies, and where code generation helps versus where it introduces dangerous complexity.

advanced · 12 min
Debugging Non-Deterministic Systems

AI systems break differently than traditional software. The same input can produce different outputs, bugs are probabilistic, and stack traces do not exist for reasoning failures. Learn the systematic approach to debugging non-deterministic AI systems.

intermediate · 11 min
Why Your Agent Loops

Infinite loops are the most common failure mode in AI agents. Learn the four root causes — ambiguous tools, context pollution, missing stop conditions, and reasoning spirals — with concrete detection and prevention strategies.

intermediate · 10 min
Why Your RAG Returns Garbage

RAG failures are either retrieval problems (wrong chunks retrieved) or generation problems (good context but bad synthesis). Learn to diagnose which stage is broken, fix common issues like chunk boundaries and embedding drift, and build a RAG debugging pipeline.

intermediate · 10 min
Latency Profiling

Most AI system latency comes from LLM calls and tool execution — not your code. Learn to profile every stage of an agent run, understand TTFT vs TPS, optimize streaming pipelines, and find the biggest wins by reducing LLM calls rather than optimizing code.

advanced · 10 min
Cost Forensics

Token accounting reveals where your AI budget actually goes. Learn to track costs per user, per feature, and per conversation — then optimize with context trimming, model tiering, and caching to cut costs by 50-80% without sacrificing quality.

advanced · 10 min
Designing AI-Powered Products

AI features fail when they break user trust. Learn to design products that set correct expectations, communicate uncertainty honestly, build trust through transparency, and handle failures gracefully — with real examples from production AI products.

intermediate · 11 min
Streaming UX Patterns

Streaming transforms a 5-second wait into a 500ms perceived response. Learn to build progressive rendering components, skeleton states, cancellation UX, and partial result patterns that make AI features feel instant.

intermediate · 10 min
Error States for AI

AI features fail in ways traditional software does not — rate limits, hallucinations, timeouts, tool failures, and model outages. Learn to design error boundaries, fallback strategies, and user-facing error messages that keep users productive even when the AI breaks.

intermediate · 9 min
Chat vs Non-Chat Interfaces

Not every AI feature should be a chatbot. Learn when chat is the right interface (exploratory tasks, ambiguous queries) versus when structured UIs are better (form filling, dashboards, workflows) — with hybrid patterns that combine the best of both.

intermediate · 10 min
Feedback Collection UX

User feedback is the most valuable signal for improving AI systems — but only if you collect it effectively. Learn to design explicit feedback (thumbs up/down, corrections) and capture implicit signals (copy events, regeneration, abandonment) without annoying users.

intermediate · 9 min
PII & Data Handling

Every prompt you send to an LLM API can contain personal data — user names, emails, addresses, and more. Learn PII detection, redaction before API calls, data residency requirements (GDPR, CCPA), provider retention policies, and anonymization strategies that keep you compliant.

advanced · 11 min
Content Moderation & Safety

AI systems can generate or be manipulated into producing harmful content. Learn to build input/output filtering pipelines, use dedicated safety classifiers, handle category taxonomies, and design escalation workflows for when automated moderation is not enough.

advanced · 10 min
Audit Trails & Explainability

Regulated industries require audit trails for every AI decision. Learn what to log (inputs, outputs, model version, latency, cost, tool calls), how to structure traces for querying, how to provide explainability to end users, and retention policies that balance compliance with cost.

advanced · 10 min
Vendor Risk & Exit Strategies

Your AI system depends on external APIs that can go down, deprecate models, change pricing, or alter terms of service. Learn to assess vendor risk, build abstraction layers, implement multi-provider failover, and plan exit strategies before you need them.

advanced · 10 min
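The abstraction-layer-plus-failover idea from this topic fits in a short sketch. The `Provider` interface and names here are hypothetical; real code would wrap actual vendor SDKs behind the same `call` signature so swapping or ordering providers is a config change, not a rewrite.

```python
# Sketch of multi-provider failover behind one abstraction (hypothetical API).

class ProviderError(Exception):
    pass

class Provider:
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

    def call(self, prompt: str) -> str:
        # A real implementation would invoke a vendor SDK here.
        if not self.healthy:
            raise ProviderError(f"{self.name} unavailable")
        return f"{self.name} response"

def complete(prompt: str, providers: list[Provider]) -> str:
    last_error = None
    for provider in providers:      # try in preference order
        try:
            return provider.call(prompt)
        except ProviderError as e:
            last_error = e          # fall through to the next provider
    raise RuntimeError("all providers failed") from last_error
```

Building this layer before an outage is the exit strategy: the day a vendor deprecates your model, you reorder a list instead of rewriting call sites.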
Token Budgeting

Estimate, allocate, and control token costs per request, per user, and per feature — with practical formulas, budget caps, and real-time usage tracking.

intermediate · 9 min
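A back-of-envelope version of the budgeting formulas this topic teaches. The per-token prices below are illustrative placeholders, not real provider rates.

```python
# Token cost estimation (prices are assumed for illustration only).

PRICE_PER_1K_INPUT = 0.003   # USD per 1K input tokens (placeholder)
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1K output tokens (placeholder)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request: input and output tokens priced separately."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

def monthly_budget(requests_per_day: int, avg_in: int, avg_out: int) -> float:
    """Projected monthly spend from average per-request token counts."""
    return requests_per_day * 30 * request_cost(avg_in, avg_out)
```

Note the asymmetry: output tokens typically cost several times more than input tokens, which is why trimming verbose responses often saves more than trimming prompts.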
Semantic Caching

Cache similar (not just identical) queries by embedding similarity — a user asking 'What is LangGraph?' gets the cached response for 'Explain LangGraph' instead of a new LLM call.

intermediate · 8 min
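The similarity-lookup core of a semantic cache can be sketched compactly. A production system would use a real embedding model and a vector index; here a toy bag-of-words vector stands in for the embedding so the cache logic itself is runnable.

```python
# Semantic cache sketch: near-duplicate queries hit the cache.
# embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter

SIMILARITY_THRESHOLD = 0.7  # tune on real traffic; too low returns wrong answers

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self):
        self._entries = []  # list of (embedding, response); real systems use a vector index

    def get(self, query: str):
        q = embed(query)
        for emb, response in self._entries:
            if cosine(q, emb) >= SIMILARITY_THRESHOLD:
                return response  # similar enough: reuse the cached answer
        return None  # cache miss: caller pays for an LLM call

    def put(self, query: str, response: str):
        self._entries.append((embed(query), response))
```

The threshold is the whole trade-off: raise it and you save fewer LLM calls; lower it and semantically different queries start getting each other's answers.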
Prompt Compression

Reduce input tokens without losing quality: conversation summarization, context pruning, document compression, and progressive detail reduction. Cut costs 40-60% on long conversations.

intermediate · 8 min
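Of the techniques listed, context pruning is the simplest to sketch: keep the system prompt plus as many recent turns as fit a token budget, dropping the oldest first. The word-count tokenizer and the 50-token budget are crude stand-ins for a real tokenizer and a real limit.

```python
# Context pruning sketch: newest messages survive, oldest are dropped.
# Assumes messages[0] is the system prompt; word count approximates tokens.

MAX_CONTEXT_TOKENS = 50  # toy budget for illustration

def estimate_tokens(message: dict) -> int:
    return len(message["content"].split())

def prune(messages: list[dict]) -> list[dict]:
    system, rest = messages[0], messages[1:]
    kept, total = [], estimate_tokens(system)
    for msg in reversed(rest):            # walk newest -> oldest
        cost = estimate_tokens(msg)
        if total + cost > MAX_CONTEXT_TOKENS:
            break                         # budget exhausted: drop older turns
        kept.append(msg)
        total += cost
    return [system] + list(reversed(kept))
```

Summarization-based compression replaces the dropped turns with a short generated summary instead of discarding them, trading an extra LLM call for retained context.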
Bias Detection & Mitigation

Detect, measure, and mitigate bias in LLM outputs — from demographic disparities in classification to stereotyped language in generation. Practical techniques for production agents.

intermediate · 9 min
Transparency & Explainability

Make AI decisions understandable: reasoning traces, source citations, confidence calibration, and showing your work — so users trust the agent and can verify its outputs.

intermediate · 8 min
Data Privacy & User Consent

Protect user data beyond PII filtering: informed consent for AI interactions, data retention policies, right to deletion, and minimizing data collection in agent systems.

intermediate · 8 min
Responsible AI Deployment

Deploy AI agents responsibly: know when NOT to use AI, disclose limitations, design for graceful failure, and establish human oversight — the engineering judgment that prevents harm.

intermediate · 8 min