AI Engineering Judgment/When (Not) to Use AI

AI vs Traditional Code

The fundamental decision framework for when to use AI versus traditional code. Learn to evaluate problems across determinism, cost, latency, and maintainability axes — and stop defaulting to LLMs for everything.

Quick Reference

  • Use traditional code when the problem is deterministic, categories are known, and exact matching suffices
  • Use AI when inputs are ambiguous, natural language is involved, or the classification space is too large to enumerate
  • A single LLM API call typically costs 1,000x or more than equivalent rule-based logic per request
  • Hybrid approaches (rules for common cases, AI for edge cases) often beat pure AI solutions on cost and latency
  • Always benchmark AI vs rules on YOUR data before committing — LLMs are not magic
  • The best AI engineers say 'no' to AI more often than 'yes'
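The hybrid pattern from the bullets above can be sketched as a small router: deterministic rules handle the common cases, and only inputs the rules cannot resolve fall through to a model. The intent names, patterns, and `llm_fallback` hook here are hypothetical, illustrating the shape rather than a specific product.

```python
import re

# Illustrative hybrid classifier: rules first, LLM fallback only for
# inputs the rules cannot resolve. Intents and patterns are made up.
KNOWN_INTENTS = {
    "refund": re.compile(r"\b(refund|money back|return)\b", re.I),
    "cancel": re.compile(r"\b(cancel|unsubscribe)\b", re.I),
    "shipping": re.compile(r"\b(track|shipping|delivery)\b", re.I),
}

def classify(message: str, llm_fallback=None) -> str:
    # Rules cover the common, unambiguous cases at near-zero cost/latency.
    for intent, pattern in KNOWN_INTENTS.items():
        if pattern.search(message):
            return intent
    # Only the ambiguous remainder pays the LLM's cost and latency.
    if llm_fallback is not None:
        return llm_fallback(message)
    return "needs_review"
```

In practice the rules often absorb the bulk of traffic, so the per-request cost and latency numbers in the comparison table apply only to the small fraction that reaches the fallback.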

The Decision Framework

Every time you reach for an LLM, you should ask yourself: could I solve this with an if/else statement, a regex, or a lookup table? If yes, you should almost always use traditional code. AI adds latency, cost, and non-determinism. Those trade-offs are only worth it when the problem genuinely requires intelligence — understanding ambiguity, handling natural language, or classifying inputs across a space too large to enumerate by hand.
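As a concrete (made-up) instance of the "lookup table instead of an LLM" point: normalizing a fixed set of known status strings is fully enumerable, so exact matching suffices and a model call would add cost and non-determinism for nothing.

```python
# Enumerable inputs -> a dict beats an LLM. All names here are illustrative.
STATUS_MAP = {
    "ok": "healthy",
    "200": "healthy",
    "timeout": "degraded",
    "503": "down",
}

def normalize_status(raw: str) -> str:
    # .get with a default keeps unknown inputs explicit instead of guessed.
    return STATUS_MAP.get(raw.strip().lower(), "unknown")
```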

The 3-Question Test

Before using AI, ask:

  1. Can I enumerate all the valid inputs and outputs? If yes, use a lookup table.
  2. Can I write rules that cover 95%+ of cases? If yes, use rules with a manual review queue for the rest.
  3. Does the input require understanding natural language, context, or nuance?

Only if the answer to (3) is yes should you reach for an LLM.
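The test above is mechanical enough to write down as a decision helper. This is just the section's three questions transcribed into code; the function name and return strings are illustrative.

```python
def should_use_llm(enumerable_io: bool,
                   rules_cover_95pct: bool,
                   needs_language_understanding: bool) -> str:
    """The 3-question test as a decision helper (illustrative)."""
    if enumerable_io:
        return "lookup table"            # question 1
    if rules_cover_95pct:
        return "rules + review queue"    # question 2
    if needs_language_understanding:
        return "LLM"                     # question 3
    return "traditional code"            # default: no AI needed
```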

| Factor | Traditional Code | LLM-Based | Winner |
|---|---|---|---|
| Determinism | 100% reproducible | Non-deterministic even at temp=0 | Traditional |
| Latency | < 1 ms typically | 200 ms - 5 s per call | Traditional |
| Cost per request | ~$0.000001 | $0.001 - $0.10 | Traditional |
| Maintenance | Logic changes need deploys | Prompt changes, no deploy needed | Depends |
| Ambiguous input | Breaks on edge cases | Handles gracefully | LLM |
| Evolving categories | Requires code changes | Adapts via prompt updates | LLM |
| Natural language | Regex/NLP pipelines, fragile | Native capability | LLM |
| Explainability | Fully traceable | Black box without extra work | Traditional |
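The "benchmark on YOUR data" advice from the quick reference can be operationalized with a tiny harness: run any candidate classifier (rule-based or LLM-backed) over the same labeled examples and compare accuracy and latency directly. A minimal sketch, assuming your classifiers are plain callables from text to label:

```python
import time

def benchmark(classifier, labeled_examples):
    """Measure accuracy and mean latency of a text -> label callable.

    `labeled_examples` is a list of (text, expected_label) pairs drawn
    from your own data. Run both the rules and the LLM version through
    this before committing to either.
    """
    correct = 0
    start = time.perf_counter()
    for text, expected in labeled_examples:
        if classifier(text) == expected:
            correct += 1
    elapsed = time.perf_counter() - start
    n = len(labeled_examples)
    return {"accuracy": correct / n, "mean_latency_s": elapsed / n}
```

Comparing the two result dicts side by side makes the trade-off in the table above concrete for your workload instead of relying on generic numbers.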