Prompt Engineering & Structured Output

Explicit criteria, few-shot prompting, structured output via tool_use, validation loops, batch processing, and multi-pass review.

Explicit Criteria for Precision

Why vague instructions like 'be conservative' fail to improve precision, and how to replace them with explicit categorical criteria that dramatically reduce false positives in code review, data extraction, and CI/CD pipelines.

beginner · 8 min
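
As a minimal sketch of the idea, this replaces a vague "be conservative" instruction with explicit categorical criteria in a review prompt. The criteria list and prompt wording are illustrative assumptions, not taken from the lesson itself:

```python
VAGUE_PROMPT = "Review this diff and be conservative about what you flag."

# Explicit, categorical criteria: a flag is allowed only if it matches a
# named category, which is far easier to follow than "be conservative".
CRITERIA = [
    "SQL built by string concatenation with user input (injection risk)",
    "Secrets or credentials committed in source",
    "Unbounded recursion or loops over external input",
]

def build_review_prompt(diff: str) -> str:
    """Build a review prompt that only permits flags matching explicit criteria."""
    rules = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(CRITERIA))
    return (
        "Review the diff below. Flag an issue ONLY if it matches one of "
        "these categories; otherwise report no issues.\n"
        f"{rules}\n\nDiff:\n{diff}"
    )

prompt = build_review_prompt("- x = 1\n+ x = 2")
```

The categorical framing gives the model a closed set of decisions to make, which is what drives down false positives.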
Few-Shot Prompting Patterns

How to use few-shot examples to achieve consistent formatting, handle ambiguous cases, and generalize to novel patterns. Covers when few-shot beats zero-shot, how to select examples, and patterns for extraction, tool selection, and classification.

intermediate · 10 min
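
For classification, few-shot examples can be laid out as prior turns in the conversation, assuming the Messages API's alternating user/assistant format. The labels and example tickets here are hypothetical:

```python
# Hypothetical labeled examples for a support-ticket classifier.
FEW_SHOT = [
    ("Server returned 500 on /checkout", "bug"),
    ("Please add dark mode", "feature-request"),
    ("How do I reset my password?", "question"),
]

def build_messages(ticket: str) -> list[dict]:
    """Interleave labeled examples as prior turns, then ask about the new input."""
    messages = []
    for text, label in FEW_SHOT:
        messages.append({"role": "user", "content": f"Classify: {text}"})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": f"Classify: {ticket}"})
    return messages

msgs = build_messages("App crashes when I rotate the screen")
```

Because each example appears as a completed exchange, the model sees exactly the output format it is expected to reproduce.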
Structured Output via tool_use & JSON Schemas

How to use tool_use with JSON schemas to guarantee syntactically valid structured output from Claude. Covers tool_choice modes, schema design patterns (nullable fields, enum + other), and the critical distinction between syntax errors and semantic errors.

intermediate · 12 min
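
A sketch of the schema patterns named above, assuming the `tools`/`input_schema` shape of the Anthropic Messages API; the tool name and field names are illustrative. Forcing the tool via `tool_choice` guarantees syntactically valid JSON, but not semantic correctness:

```python
EXTRACT_TOOL = {
    "name": "record_invoice",
    "description": "Record fields extracted from an invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            # Nullable field: lets the model say "not present" instead of guessing.
            "po_number": {"type": ["string", "null"]},
            # enum + "other": keeps unexpected categories from corrupting the enum.
            "currency": {"type": "string", "enum": ["USD", "EUR", "GBP", "other"]},
        },
        "required": ["vendor", "po_number", "currency"],
    },
}

# Passing this as tool_choice forces Claude to call the tool, so the output
# is always schema-conformant JSON (syntax), though the values can still be
# wrong (semantics) and need separate validation.
forced_choice = {"type": "tool", "name": EXTRACT_TOOL["name"]}
```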
Validation, Retry & Feedback Loops

How to build validation loops that catch semantic errors in Claude's output and retry with targeted error feedback. Covers retryable vs. non-retryable errors, the detected_pattern field for tracking dismissals, and self-correction via calculated vs. stated totals.

intermediate · 10 min
Batch Processing Strategies

How to use the Anthropic Message Batches API for 50% cost savings on non-latency-sensitive workloads. Covers when to batch vs. when NOT to, custom_id correlation, failure handling, and the critical constraint that batch requests cannot do multi-turn tool calling.

advanced · 10 min
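
A sketch of `custom_id` correlation: the request shape follows the batches endpoint (`custom_id` + `params`), but the results below are faked so the correlation and failure-handling logic runs offline. The model name and documents are illustrative:

```python
documents = {"doc-1": "First report...", "doc-2": "Second report..."}

requests = [
    {
        "custom_id": doc_id,  # our key; echoed back in each result
        "params": {
            "model": "claude-sonnet-4-5",  # illustrative model name
            "max_tokens": 512,
            "messages": [{"role": "user", "content": f"Summarize:\n{text}"}],
        },
    }
    for doc_id, text in documents.items()
]

# Batch results arrive in arbitrary order; custom_id maps each result back
# to its source document.
fake_results = [
    {"custom_id": "doc-2", "result": {"type": "succeeded"}},
    {"custom_id": "doc-1", "result": {"type": "errored"}},
]

by_id = {r["custom_id"]: r["result"] for r in fake_results}
failed = [doc_id for doc_id in documents if by_id[doc_id]["type"] != "succeeded"]
```

Collecting `failed` this way makes it straightforward to resubmit only the errored requests in a follow-up batch.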
Multi-Instance & Multi-Pass Review

Why self-review is unreliable, how independent Claude instances catch errors the generator missed, and how multi-pass architectures (per-file local analysis + cross-file integration) handle large PRs. Includes confidence self-reporting for calibrated review routing.

advanced · 10 min
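
The merge step for independent review passes can be sketched as follows, with self-reported confidence used to route findings; the finding format and the 0.7 threshold are assumptions:

```python
# Findings from two independent review instances (hypothetical).
pass_a = [{"issue": "sql-injection", "confidence": 0.9},
          {"issue": "typo", "confidence": 0.3}]
pass_b = [{"issue": "sql-injection", "confidence": 0.8}]

def merge(*passes):
    """Union findings across passes, keeping the max confidence per issue."""
    merged: dict[str, float] = {}
    for findings in passes:
        for f in findings:
            merged[f["issue"]] = max(merged.get(f["issue"], 0.0), f["confidence"])
    return merged

merged = merge(pass_a, pass_b)
auto_flag = [i for i, c in merged.items() if c >= 0.7]   # report directly
needs_human = [i for i, c in merged.items() if c < 0.7]  # route to a reviewer
```

Issues surfaced by multiple independent instances tend to carry high confidence, while single-instance, low-confidence findings are the natural candidates for human review.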