Intermediate · 11 min

AWS Bedrock

Running LangChain agents on AWS Bedrock: setup, model access, IAM configuration, and production deployment with provisioned throughput.

Quick Reference

  • Install langchain-aws and use ChatBedrock as a drop-in replacement for ChatAnthropic or ChatOpenAI
  • Configure IAM roles with bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream permissions
  • Request model access in the AWS Console before first use — not all models are enabled by default
  • Use provisioned throughput for predictable latency and guaranteed capacity in production workloads
  • Bedrock supports Claude, Llama, Mistral, and others — swap models without changing application code
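A minimal sketch of the first bullet: swapping `ChatAnthropic` or `ChatOpenAI` for `ChatBedrock` mostly means changing the import and passing a Bedrock model ID. This assumes `pip install langchain-aws` and IAM credentials already present in the environment; the model IDs and region are illustrative, so check the Bedrock console for what is enabled in your account:

```python
import os

# Illustrative model IDs -- confirm against the models enabled in
# your account and region in the Bedrock console.
EXAMPLE_MODEL_IDS = {
    "claude": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "llama": "meta.llama3-70b-instruct-v1:0",
    "mistral": "mistral.mistral-large-2402-v1:0",
}

def bedrock_model_id(family: str) -> str:
    """Resolve a model family to an example Bedrock model ID."""
    return EXAMPLE_MODEL_IDS[family]

# The live call is gated behind an env var so the snippet is safe to
# run without AWS credentials configured.
if os.environ.get("RUN_BEDROCK_DEMO"):
    from langchain_aws import ChatBedrock  # replaces ChatAnthropic / ChatOpenAI

    llm = ChatBedrock(
        model_id=bedrock_model_id("claude"),
        region_name="us-east-1",            # Bedrock is region-scoped
        model_kwargs={"temperature": 0.2},
    )
    # Application code is unchanged: .invoke(), .stream(), chains, and
    # agents all work the same as with ChatAnthropic or ChatOpenAI.
    print(llm.invoke("Say hello from Bedrock.").content)
```

In production, the `model_id` can typically be replaced with a provisioned-throughput model ARN to get the guaranteed capacity mentioned above, again without touching the surrounding application code.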

Why AWS Bedrock

AWS-native LLM access

Bedrock provides AWS-native access to foundation models (Claude, Llama, Mistral) with IAM security, VPC isolation, CloudWatch monitoring, and consolidated AWS billing. No API keys to manage — authentication is IAM.

  • Data stays on the AWS network — requests are processed by Bedrock inside AWS with no calls to third-party APIs, and VPC endpoints keep traffic off the public internet
  • SOC 2, HIPAA-eligible, and PCI DSS coverage through AWS compliance programs — no separate vendor assessment for a third-party LLM provider
  • Provisioned throughput gives guaranteed capacity with predictable latency — no shared rate limits
  • IAM policies govern model access — control which teams, services, and environments can use which models
  • Consolidated billing — LLM costs appear on your existing AWS bill alongside compute, storage, and networking
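The IAM-based model governance above can be expressed as an ordinary IAM policy. A minimal sketch along these lines (the region and model ARN are placeholders) grants a role invoke access to a single foundation model, covering both streaming and non-streaming calls:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSingleModelInvoke",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    }
  ]
}
```

Attaching a policy like this to a team's or service's execution role is how per-team model access is enforced; narrowing or widening the `Resource` list controls exactly which models each environment can call.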