
When to Use Which Service

"Exam-focused guide to solving the most common AWS AI/ML service confusion — know exactly which service fits a given scenario."

🎯 When to Use Which AWS AI/ML Service

The AIF-C01 exam loves to test whether you can pick the right service for a given scenario. Many AWS AI/ML services sound similar, but each has a very specific purpose. This guide breaks down every confusing pair so you never pick the wrong answer on exam day.

How to Use This Page: Read each comparison carefully. Focus on the "Exam Decision Rule" at the bottom of each section — these are your quick decision shortcuts. The cheat sheet at the very end is your last-minute revision lifesaver.


1. Amazon SageMaker vs. Amazon Bedrock

The #1 most confusing pair on the AIF-C01 exam. Both deal with AI/ML, but they serve completely different audiences.

| Feature | Amazon SageMaker | Amazon Bedrock |
|---|---|---|
| What It Is | End-to-end ML platform for building, training, and deploying custom models | Fully managed service for building apps with pre-built foundation models (FMs) |
| Target User | Data scientists, ML engineers (need ML expertise) | Application developers (no ML expertise needed) |
| Model Source | You build your own model from scratch or use JumpStart pre-trained models | You use existing foundation models (Claude, Titan, Llama, etc.) |
| Training | Yes — full control over training (data, algorithms, hyperparameters) | No training — use models as-is, or customize via fine-tuning/continued pre-training |
| Key Components | Studio, Canvas, Autopilot, Ground Truth, Clarify, Pipelines, Feature Store | Knowledge Bases, Agents, Guardrails, Prompt Management |
| Customization | Full control (custom algorithms, bring your own container) | Fine-tuning, continued pre-training, RAG, prompt engineering |
| Infrastructure | You choose instance types, manage endpoints | Fully managed (serverless API calls) |
| Use Cases | Fraud detection models, demand forecasting, custom NLP, image classification | Chatbots, content generation, summarization, Q&A from company docs |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Build a custom ML model", "train on company data", "data scientist" | SageMaker |
| "Use a foundation model", "generative AI app", "no ML expertise" | Bedrock |
| "Fine-tune a foundation model" | Bedrock (fine-tuning in Bedrock) |
| "Train a model from scratch with custom algorithm" | SageMaker |
| "Business users want to do ML without code" | SageMaker Canvas |
| "Build a chatbot using Claude/Titan/Llama" | Bedrock |

Exam Tip: Think of it this way — SageMaker = Build ML models, Bedrock = Use existing AI models. If the scenario mentions "foundation model", "generative AI", or "LLM", it's almost always Bedrock.
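To make "fully managed, serverless API calls" concrete: with Bedrock you never touch an endpoint or instance — you build a JSON request body and invoke a model by ID. A minimal sketch below builds a Claude-style request body; the field names follow the Anthropic messages format used on Bedrock, but treat the exact schema and model ID as assumptions to verify against the current Bedrock documentation.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for a Claude invocation on Bedrock (sketch)."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # required version marker
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

body = build_claude_request("Summarize our refund policy in two sentences.")

# With AWS credentials configured, the actual call would look roughly like:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId="anthropic.claude-3-haiku-...", body=body)
```

Contrast this with SageMaker, where you would first provision an endpoint (choosing an instance type) before you could send a single request.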


2. Amazon Lex vs. Amazon Q vs. Amazon Connect

All three involve "conversational AI," but each has a very different scope.

| Feature | Amazon Lex | Amazon Q | Amazon Connect |
|---|---|---|---|
| What It Is | Build custom chatbots and voice bots (NLU engine) | AI assistant for business and developer tasks | Cloud-based contact center with AI features |
| Primary Purpose | Create conversational interfaces (intents, slots, fulfillment) | Answer questions from company data, generate code, analyze data | Handle phone calls, chats, and customer interactions |
| Target User | Developers building chatbot experiences | Business users (Q Business) and developers (Q Developer) | Contact center managers and agents |
| AI Capability | Natural Language Understanding (NLU) — understands user intent | Generative AI — answers questions, writes code, creates apps | AI-powered routing, sentiment analysis, transcription |
| Key Differentiator | Same technology that powers Alexa | Connects to 40+ enterprise data sources (S3, SharePoint, Slack, etc.) | Full contact center solution (phone, chat, routing, analytics) |
| Output | Intent recognition → trigger Lambda/action | Natural language answers from your company knowledge | Call routing, agent assistance, sentiment tracking |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Build a chatbot", "voice bot", "intents and slots", "like Alexa" | Amazon Lex |
| "AI assistant for employees", "answer questions from company docs" | Amazon Q Business |
| "AI assistant for developers", "code generation in IDE" | Amazon Q Developer |
| "Contact center", "call center", "customer service phone system" | Amazon Connect |
| "Chatbot integrated into a contact center" | Amazon Connect + Amazon Lex (they integrate together) |

Exam Tip: Lex = build chatbots from scratch. Q = ready-made AI assistant. Connect = phone/chat contact center. Don't confuse them!


3. Amazon Comprehend vs. Amazon Textract vs. Amazon Kendra

All three deal with "understanding text," but they process very different types of input.

| Feature | Amazon Comprehend | Amazon Textract | Amazon Kendra |
|---|---|---|---|
| What It Is | NLP service — analyzes text for meaning | Document extraction — pulls text from images/PDFs | Enterprise search — finds answers across documents |
| Input | Plain text (already digitized) | Scanned documents, PDFs, images (physical or digital) | Documents stored in 40+ data sources |
| What It Does | Sentiment analysis, entity detection, key phrases, language detection, PII detection | Extracts text, forms, tables, signatures from documents | Answers natural language questions by searching your document library |
| Output | Entities, sentiment scores, key phrases, language codes | Structured data (key-value pairs, table rows, raw text) | Ranked search results with relevant document excerpts |
| ML Model | Pre-trained NLP models + custom entity recognition | Pre-trained document understanding models | ML-powered relevance ranking |
| Key Use Cases | Social media sentiment, customer feedback analysis, PII redaction | Invoice processing, form digitization, ID verification | "What is our refund policy?" — searches across all company docs |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Analyze sentiment", "detect entities", "key phrases", "PII in text" | Amazon Comprehend |
| "Extract text from scanned documents", "forms", "tables", "invoices", "OCR" | Amazon Textract |
| "Search across documents", "enterprise search", "natural language questions" | Amazon Kendra |
| "Medical text analysis" | Amazon Comprehend Medical |
| "Extract data from medical forms" | Amazon Textract |

Exam Tip: Comprehend = analyze text meaning. Textract = extract text from images/documents. Kendra = search and find answers across documents. The input format is key: if it's already text → Comprehend. If it's a scanned document → Textract. If it's "find information across many documents" → Kendra.
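The "input format is key" rule above can be encoded as a tiny routing helper. This is purely illustrative Python for exam reasoning, not an AWS API — the input format drives the first branch, then the goal decides between analysis and search.

```python
def pick_text_service(input_format: str, goal: str) -> str:
    """Toy decision helper mirroring the Comprehend/Textract/Kendra rule."""
    if input_format in ("scanned_pdf", "image"):
        return "Amazon Textract"        # extract text from documents first
    if goal == "find_answers_across_docs":
        return "Amazon Kendra"          # enterprise search over many documents
    return "Amazon Comprehend"          # analyze meaning of digitized text

print(pick_text_service("plain_text", "sentiment"))  # Amazon Comprehend
```

Note that these services also chain: a common pattern is Textract to digitize scans, then Comprehend to analyze the extracted text.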


4. Amazon Polly vs. Amazon Transcribe vs. Amazon Translate

The "language trio" — each handles a different aspect of language processing.

| Feature | Amazon Polly | Amazon Transcribe | Amazon Translate |
|---|---|---|---|
| What It Is | Text → Speech (TTS) | Speech → Text (ASR) | Text → Text in another language |
| Direction | Written text ➜ Spoken audio | Spoken audio ➜ Written text | Text in Language A ➜ Text in Language B |
| Key Features | 60+ voices, neural voices, SSML, speech marks | Speaker diarization, custom vocabulary, toxicity detection | 75+ languages, custom terminology, formality control |
| Use Cases | Accessibility (screen readers), audiobooks, voice announcements | Meeting transcription, subtitles, call analytics | Multilingual apps, content localization, real-time chat translation |
| Exam Keyword | "Text-to-speech", "lifelike speech", "voice" | "Speech-to-text", "transcription", "subtitles" | "Translation", "multilingual", "localize" |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Convert text to speech", "read aloud", "voice output" | Amazon Polly |
| "Convert speech to text", "transcribe audio", "subtitles", "captions" | Amazon Transcribe |
| "Translate text", "multilingual", "localize content" | Amazon Translate |
| "Transcribe and translate a call" | Amazon Transcribe + Amazon Translate |
| "Medical transcription" | Amazon Transcribe Medical |

Exam Tip: Remember the flow: Polly = text ➜ speech, Transcribe = speech ➜ text, Translate = language A ➜ language B. Each is one-directional and purpose-built.
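Because each service is one-directional, multi-step scenarios are solved by chaining them. The sketch below uses local stand-in functions (not AWS SDK calls) just to make the data flow of "transcribe and translate a call" concrete — the dictionary and decoding logic are toy placeholders.

```python
def transcribe(audio: bytes) -> str:
    """Speech -> text (stand-in for Amazon Transcribe)."""
    return audio.decode("utf-8")        # pretend the bytes are decoded speech

def translate(text: str, target: str) -> str:
    """Language A -> language B (stand-in for Amazon Translate)."""
    fake_dictionary = {("hello", "es"): "hola"}
    return fake_dictionary.get((text, target), text)

def synthesize(text: str) -> bytes:
    """Text -> speech (stand-in for Amazon Polly)."""
    return text.encode("utf-8")

# "Transcribe and translate a call" = chain Transcribe into Translate:
caption = translate(transcribe(b"hello"), "es")
print(caption)  # hola
```

The same chaining pattern answers harder combo questions, e.g. a translated voice announcement would be Translate followed by Polly.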


5. Amazon Rekognition vs. Amazon Textract

Both work with images, but for completely different purposes.

| Feature | Amazon Rekognition | Amazon Textract |
|---|---|---|
| What It Is | Computer vision — identifies objects, faces, scenes in images/video | Document understanding — extracts text and structure from documents |
| Focus | What's in the image (objects, people, activities) | What text is on the image/document |
| Capabilities | Face detection/comparison, object & scene detection, content moderation, celebrity recognition, custom labels | Text extraction (OCR), form data (key-value pairs), table extraction, signature detection |
| Video Support | Yes (analyze video streams and stored video) | No (images and documents only) |
| Use Cases | Content moderation, face verification (security), people counting, PPE detection | Invoice processing, form digitization, ID extraction, receipt scanning |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Detect faces", "identify objects", "content moderation", "unsafe images" | Amazon Rekognition |
| "Extract text from a document", "OCR", "form fields", "tables", "invoices" | Amazon Textract |
| "Detect PPE in video", "people counting" | Amazon Rekognition |
| "Read handwriting from a scanned form" | Amazon Textract |

Exam Tip: Think of it this way — Rekognition asks "What is this image OF?" while Textract asks "What TEXT is on this page?"


6. Amazon Bedrock Knowledge Bases vs. Amazon Kendra (for RAG)

Both can power a RAG (Retrieval-Augmented Generation) workflow, but they're architecturally different.

| Feature | Bedrock Knowledge Bases | Amazon Kendra |
|---|---|---|
| What It Is | Managed RAG pipeline that connects FMs to your data | Enterprise search engine with ML-powered ranking |
| Primary Use | Provide context to foundation models for accurate generative responses | Return ranked search results from enterprise documents |
| Architecture | Data → Chunk → Embed → Vector DB → Retrieve → Augment FM prompt | Data → Index → ML ranking → Return search results |
| Output | FM-generated natural language answers (grounded in your data) | Ranked list of relevant document excerpts |
| Data Processing | Auto-chunks, embeds, and stores in vector database | Indexes documents with ML-powered relevance ranking |
| Connectors | S3, Web Crawler, Confluence, SharePoint, Salesforce | 40+ connectors (S3, SharePoint, RDS, Salesforce, ServiceNow, etc.) |
| Vector Database | OpenSearch Serverless, Aurora PostgreSQL, Pinecone, Redis, MongoDB Atlas | Built-in (no separate vector DB needed) |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "RAG with a foundation model", "ground FM responses in company data" | Bedrock Knowledge Bases |
| "Enterprise search", "search across documents", "find relevant documents" | Amazon Kendra |
| "Reduce hallucinations in generative AI responses" | Bedrock Knowledge Bases (RAG) |
| "Employees need to search internal wikis and documents" | Amazon Kendra |
| "Chatbot that answers from company docs using an FM" | Bedrock Knowledge Bases + Bedrock Agent |

Exam Tip: Knowledge Bases = RAG for generative AI (feed context to an FM). Kendra = search engine (return relevant documents). If the output needs to be generated text → Knowledge Bases. If the output is a list of documents → Kendra.
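The "Chunk" step in the Knowledge Bases pipeline is worth understanding concretely: documents are split into fixed-size, overlapping pieces before embedding, so retrieval returns passages rather than whole files. A minimal sketch, with toy sizes (the real service exposes chunking strategy as a configuration option):

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size chunking with overlap, as done before embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pieces = chunk("A" * 100)
# Neighboring chunks share `overlap` characters so context isn't cut mid-idea.
```

Each chunk is then embedded into a vector and stored; at query time the nearest chunks are retrieved and prepended to the FM prompt.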


7. SageMaker Canvas vs. SageMaker Autopilot vs. SageMaker Studio

These are all part of SageMaker, but target different user skill levels.

| Feature | SageMaker Canvas | SageMaker Autopilot | SageMaker Studio |
|---|---|---|---|
| Target User | Business analysts (zero ML knowledge) | Citizen data scientists (minimal ML knowledge) | Data scientists & ML engineers (full ML expertise) |
| Interface | Visual, no-code (point-and-click) | AutoML (automated model building) | Full IDE (notebooks, experiments, debugging) |
| ML Knowledge Needed | None | Minimal | Expert-level |
| What It Does | Upload CSV → get predictions (fully guided) | Upload data → auto-generates candidate models → selects best | Full ML lifecycle: explore data, build features, train, tune, deploy |
| Customization | Very limited (choose problem type, select columns) | Moderate (can inspect generated notebooks) | Full control (custom code, algorithms, frameworks) |
| Use Cases | Sales forecasting, churn prediction (by business teams) | Quick prototyping, baseline model creation | Custom deep learning, research, production ML pipelines |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Business user", "no code", "no ML experience", "visual interface" | SageMaker Canvas |
| "Automatically build the best model", "AutoML", "compare models" | SageMaker Autopilot |
| "Data scientist", "custom model", "Jupyter notebook", "full control" | SageMaker Studio |

Exam Tip: Canvas = no-code ML. Autopilot = AutoML. Studio = full IDE for experts. The key differentiator is the user's skill level.


8. Fine-Tuning vs. Continued Pre-Training vs. RAG vs. Prompt Engineering

Four ways to customize or improve an FM's output — the exam loves testing when to use each.

| Feature | Prompt Engineering | RAG | Fine-Tuning | Continued Pre-Training |
|---|---|---|---|---|
| What It Is | Crafting better prompts to guide the FM | Retrieving relevant data and adding it to the prompt as context | Training the FM on labeled examples to improve task performance | Training the FM on unlabeled domain text to teach new knowledge |
| Data Needed | No data needed | External knowledge base (documents, databases) | Labeled data (input → expected output pairs) | Unlabeled domain corpus (raw text) |
| Model Changes | Model is not modified | Model is not modified (only the prompt changes) | Model weights are modified (creates a new model version) | Model weights are modified (creates a new model version) |
| Cost | Lowest (no training cost) | Low-moderate (embedding, vector DB, retrieval costs) | High (training compute required) | Highest (large-scale training compute) |
| When to Use | First attempt; simple tasks; format control | FM needs access to current/private data | FM needs to improve at a specific task (tone, format, style) | FM needs to learn new domain vocabulary (medical, legal, financial) |
| Latency | No additional latency | Slight latency (retrieval step) | No additional latency (model is already trained) | No additional latency |
| Provides Current Data | No | Yes (retrieves latest data at query time) | No (frozen at training time) | No (frozen at training time) |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Improve output quality without training", "better prompts" | Prompt Engineering |
| "FM needs access to company/private/current data" | RAG |
| "Model should respond in a specific format/tone/style" | Fine-Tuning (with labeled examples) |
| "Model doesn't understand domain-specific terminology" | Continued Pre-Training (with unlabeled domain text) |
| "Reduce hallucinations" | RAG (ground responses in factual data) |
| "Company has labeled training data for a specific task" | Fine-Tuning |
| "No training data available" | Prompt Engineering or RAG |

Exam Tip: Try solutions in this order: Prompt Engineering (cheapest, fastest) → RAG (no training needed, accesses current data) → Fine-Tuning (needs labeled data, changes model) → Continued Pre-Training (needs large unlabeled corpus, most expensive).
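The escalation order and decision rules above fit neatly into a small function — a toy encoding for exam reasoning, not an AWS API. The parameter names are illustrative.

```python
def pick_customization(needs_current_data: bool = False,
                       needs_domain_vocab: bool = False,
                       has_labeled_data: bool = False) -> str:
    """Toy encoding of the customization decision rules above."""
    if needs_current_data:
        return "RAG"                      # only option that sees fresh data
    if needs_domain_vocab:
        return "Continued Pre-Training"   # unlabeled domain corpus
    if has_labeled_data:
        return "Fine-Tuning"              # task-specific labeled examples
    return "Prompt Engineering"           # cheapest, try this first

print(pick_customization())  # Prompt Engineering
```

Note the first branch: no amount of training makes a model current, because trained weights are frozen at training time — only RAG retrieves data at query time.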


9. Bedrock Guardrails vs. Amazon A2I (Human-in-the-Loop)

Both are about making AI safer and more responsible, but they work very differently.

| Feature | Bedrock Guardrails | Amazon A2I (Augmented AI) |
|---|---|---|
| What It Is | Automated filters that block harmful content in FM inputs/outputs | Human review workflows for ML predictions that need human verification |
| How It Works | Define policies (content filters, denied topics, PII filters, word filters) → auto-applied to every request | Set confidence thresholds → low-confidence predictions are sent to human reviewers |
| Review Type | Automated (no humans needed) | Human (real people reviewing predictions) |
| What It Blocks | Hate speech, violence, sexual content, PII, specific topics | N/A — it doesn't block, it routes to humans for review |
| Use Cases | Content moderation for chatbots, PII redaction, topic restrictions | Document review (Textract), image moderation (Rekognition), custom ML review |
| Integration | FM invocations, Agents, Knowledge Bases | Textract, Rekognition, custom ML models |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Filter harmful content", "block topics", "redact PII", "content moderation for FM" | Bedrock Guardrails |
| "Human review", "human-in-the-loop", "low-confidence predictions", "manual verification" | Amazon A2I |
| "Prevent the FM from discussing competitor products" | Bedrock Guardrails (denied topics) |
| "If AI confidence is below threshold, send to human reviewer" | Amazon A2I |

Exam Tip: Guardrails = automated safety filters. A2I = send to a human when AI is unsure. One is machine-driven, the other is human-driven.
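The A2I pattern reduces to one comparison: route low-confidence predictions to people, auto-accept the rest. A minimal sketch — the function and threshold value are illustrative; in A2I you configure this as a human review workflow (flow definition) attached to, e.g., Textract or Rekognition.

```python
def route_prediction(label: str, confidence: float,
                     threshold: float = 0.80) -> str:
    """Sketch of the A2I pattern: below-threshold results get human review."""
    if confidence < threshold:
        return "human_review"   # A2I would start a human loop here
    return "auto_accept"

print(route_prediction("invoice_total=1204.00", 0.62))  # human_review
print(route_prediction("invoice_total=1204.00", 0.97))  # auto_accept
```

Guardrails has no equivalent branch — its filters apply to every request automatically, with no confidence threshold and no humans involved.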


10. Amazon Personalize vs. Amazon Comprehend vs. Amazon Rekognition (for Recommendations)

The exam may present recommendation scenarios — Personalize is the only dedicated recommendation service.

| Feature | Amazon Personalize | Amazon Comprehend | Amazon Rekognition |
|---|---|---|---|
| What It Is | Build real-time personalization and recommendations | NLP service for text analysis | Computer vision for image/video analysis |
| Use Case | "Users who bought X also bought Y", personalized content feeds | Analyze text sentiment, detect entities | Detect objects, faces, scenes in images |
| Input | User behavior data (clicks, purchases, views) | Text data | Images and video |
| Output | Personalized recommendations, similar items, personalized rankings | Sentiment scores, entities, key phrases | Object labels, face data, moderation labels |
| ML Expertise Required | No (managed ML pipeline) | No (pre-trained models) | No (pre-trained models) |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Product recommendations", "personalized content", "user behavior", "similar items" | Amazon Personalize |
| "Analyze customer reviews", "detect sentiment" | Amazon Comprehend |
| "Recommend products based on image similarity" | Amazon Rekognition Custom Labels (for visual similarity) |

Exam Tip: If the scenario involves user behavior → recommendations, it's always Personalize. No other AWS service does ML-powered personalization.


11. SageMaker Ground Truth vs. SageMaker Clarify vs. SageMaker Model Monitor

Three SageMaker features that sound similar but serve different stages of the ML lifecycle.

| Feature | Ground Truth | Clarify | Model Monitor |
|---|---|---|---|
| What It Is | Data labeling service for creating training datasets | Bias detection and explainability for ML models | Production monitoring for deployed ML models |
| When to Use | Before training (prepare labeled training data) | Before and after training (check for bias, explain predictions) | After deployment (detect model drift, data quality issues) |
| Input | Unlabeled data (images, text, video) | Training data or model predictions | Incoming inference data vs. baseline |
| Output | Labeled dataset ready for training | Bias reports, feature importance (SHAP values) | Alerts for data drift, model drift, bias drift |
| Key Feature | Human + automatic labeling (active learning) | Fairness metrics, feature attribution | Continuous monitoring, CloudWatch integration |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Label training data", "annotation", "human annotators" | SageMaker Ground Truth |
| "Detect bias", "explain predictions", "feature importance", "SHAP" | SageMaker Clarify |
| "Monitor model in production", "data drift", "model degradation" | SageMaker Model Monitor |

Exam Tip: Remember the ML lifecycle order: Ground Truth (label data) → Clarify (check bias) → Model Monitor (watch production). Each serves a different phase.


12. CloudWatch vs. CloudTrail vs. Bedrock Model Invocation Logging

Three ways to monitor Bedrock — each captures different information.

| Feature | CloudWatch | CloudTrail | Model Invocation Logging |
|---|---|---|---|
| What It Tracks | Performance metrics (latency, errors, throttling, invocation count) | API calls (who called what API, when, from where) | Full request/response payloads (actual prompts and model responses) |
| Focus | Operational health and performance | Security audit and governance | Content analysis and debugging |
| Use Case | "Is Bedrock performing well? Are there errors?" | "Who invoked this model? Was it authorized?" | "What prompts are users sending? What did the model respond?" |
| Alerting | Yes (CloudWatch Alarms) | Via EventBridge (events on API calls) | No (logging only, analyze in S3/CloudWatch Logs) |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Monitor Bedrock performance", "latency", "error rates" | CloudWatch |
| "Audit who used Bedrock", "API call history", "security investigation" | CloudTrail |
| "Capture full prompts and responses", "debug model outputs" | Model Invocation Logging |

Exam Tip: CloudWatch = how is it performing? CloudTrail = who did what? Model Invocation Logging = what was said?


13. Amazon Q Business vs. Amazon Q Developer

Same "Q" brand, very different purposes.

| Feature | Amazon Q Business | Amazon Q Developer |
|---|---|---|
| Target User | Business employees (HR, sales, operations, managers) | Software developers (engineers, DevOps) |
| What It Does | Answers business questions from company data (policies, docs, wikis) | Generates code, debugs, transforms code, explains code, optimizes AWS |
| Data Sources | 40+ enterprise connectors (S3, SharePoint, Confluence, Slack, Salesforce, Jira, etc.) | IDE context, codebase, AWS documentation |
| Where It Lives | Web app, Slack, Microsoft Teams | IDE (VS Code, JetBrains), AWS Console, CLI |
| Key Features | Natural language Q&A, document summarization, task automation (Q Apps) | Code generation, code review, security scanning, code transformation (e.g., Java version upgrades) |
| Access Control | IAM Identity Center + document-level ACL (respects enterprise permissions) | IAM-based |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Employees need answers from company documents", "enterprise knowledge" | Amazon Q Business |
| "Generate code", "code review", "upgrade Java version", "IDE assistant" | Amazon Q Developer |
| "Create simple apps from natural language" | Amazon Q Apps (part of Q Business) |
| "Optimize AWS resources", "troubleshoot AWS errors" | Amazon Q Developer (in AWS Console) |
| "Natural language queries in QuickSight" | Amazon Q in QuickSight |

Exam Tip: Q Business = employee knowledge assistant. Q Developer = developer coding assistant. The user type determines which Q to choose.


14. Foundation Model Selection: When to Pick Which Model

The exam may ask you to choose the right foundation model in Bedrock.

| If the scenario needs... | Choose this FM | Why |
|---|---|---|
| Long document analysis (200K+ tokens) | Claude (Anthropic) | Largest context window, best at complex reasoning |
| Open-source / open-weight model | Llama (Meta) | Best-known open-weight family in Bedrock |
| Text-to-image generation (third-party) | Stable Diffusion | Leading image generation model |
| Text-to-image generation (Amazon) | Titan Image Generator or Nova Canvas | Amazon's own image models with watermarking |
| Video generation | Amazon Nova Reel | Dedicated video generation model |
| Lowest cost text generation | Amazon Nova Micro | Text-only, lowest latency, most cost-effective |
| Best balance of cost and quality | Amazon Nova Pro | Balanced across accuracy, speed, and cost |
| Embeddings for RAG / vector search | Amazon Titan Embeddings | Default embedding model for Knowledge Bases |
| Image + text understanding (multimodal) | Amazon Nova Lite or Claude | Multimodal input support |

Exam Tip: Know the model families: Claude = reasoning & safety, Llama = open-source, Titan/Nova = Amazon's models, Stable Diffusion = images. The exam tests model selection by use case, not model architecture.


15. Inference Options: On-Demand vs. Batch vs. Provisioned Throughput

| Feature | On-Demand | Batch Inference | Provisioned Throughput |
|---|---|---|---|
| When to Use | Variable, unpredictable workloads | Large-scale, non-time-sensitive processing | Consistent, high-volume production workloads |
| Latency | Real-time response | Hours (async processing) | Real-time (guaranteed low latency) |
| Cost | Pay per token (highest per-token cost) | Up to 50% cheaper than on-demand | Committed pricing (1 or 6 month terms) |
| Capacity | Shared (may be throttled) | Shared (batch queue) | Dedicated (guaranteed, no throttling) |
| Use Cases | Chatbots, APIs, interactive apps | Bulk document processing, dataset generation | Production apps with SLAs, custom models |

🧠 Exam Decision Rule

| If the question says... | Choose... |
|---|---|
| "Variable workload", "unpredictable usage" | On-Demand |
| "Process thousands of documents", "not time-sensitive", "cost savings" | Batch Inference |
| "Consistent latency", "no throttling", "production SLA", "custom model" | Provisioned Throughput |
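The batch discount is worth a quick worked example. The per-token price below is a made-up placeholder (check current Bedrock pricing); the point is the arithmetic behind "up to 50% cheaper" on a bulk, non-time-sensitive job.

```python
price_per_1k_tokens = 0.0008      # hypothetical on-demand price (USD)
tokens = 10_000_000               # bulk job: 10M tokens, no latency requirement

on_demand_cost = tokens / 1000 * price_per_1k_tokens   # 8.00 USD
batch_cost = on_demand_cost * 0.5                      # 50% batch discount
print(on_demand_cost, batch_cost)  # 8.0 4.0
```

At scale the same arithmetic decides the exam answer: if the scenario mentions thousands of documents and no real-time need, batch inference halves the bill.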

📋 Ultimate Quick-Reference Cheat Sheet

Use this table for last-minute revision before the exam.

| If the exam asks about... | And mentions... | The answer is... |
|---|---|---|
| Building ML models | "Custom model", "data scientist", "training" | SageMaker |
| Using foundation models | "Generative AI", "chatbot", "Claude", "Titan" | Bedrock |
| No-code ML | "Business analyst", "visual", "no code" | SageMaker Canvas |
| AutoML | "Auto-select best model", "compare models" | SageMaker Autopilot |
| Chatbot building | "Intents", "slots", "voice bot", "Alexa" | Amazon Lex |
| AI assistant (business) | "Employee Q&A", "company docs", "enterprise search" | Amazon Q Business |
| AI assistant (developer) | "Code generation", "IDE", "code review" | Amazon Q Developer |
| Contact center | "Call center", "phone system", "customer calls" | Amazon Connect |
| Text analysis | "Sentiment", "entities", "PII", "key phrases" | Amazon Comprehend |
| Document extraction | "OCR", "invoices", "forms", "tables", "scanned" | Amazon Textract |
| Enterprise search | "Search across documents", "find information" | Amazon Kendra |
| Text to speech | "Read aloud", "lifelike voice", "TTS" | Amazon Polly |
| Speech to text | "Transcribe", "subtitles", "audio to text" | Amazon Transcribe |
| Translation | "Multilingual", "translate", "localize" | Amazon Translate |
| Image/video analysis | "Detect faces", "objects", "content moderation" | Amazon Rekognition |
| Recommendations | "Personalized", "user behavior", "similar items" | Amazon Personalize |
| Human review | "Human-in-the-loop", "low confidence threshold" | Amazon A2I |
| FM safety filters | "Block harmful content", "PII redaction", "denied topics" | Bedrock Guardrails |
| RAG pipeline | "Ground FM in company data", "reduce hallucinations" | Bedrock Knowledge Bases |
| FM takes actions | "Call APIs", "multi-step tasks", "orchestrate" | Bedrock Agents |
| Data labeling | "Label training data", "annotation" | SageMaker Ground Truth |
| Bias detection | "Fairness", "explainability", "SHAP values" | SageMaker Clarify |
| Model monitoring | "Production drift", "data quality", "model degradation" | SageMaker Model Monitor |
| FM customization (labeled data) | "Task-specific improvement", "format/style/tone" | Fine-Tuning |
| FM customization (unlabeled data) | "Domain knowledge", "medical/legal terminology" | Continued Pre-Training |
| FM customization (no data) | "Better prompts", "zero-shot", "few-shot" | Prompt Engineering |
| FM with current/private data | "Company data", "reduce hallucinations" | RAG |
| Performance monitoring | "Latency", "error rates", "metrics", "alarms" | CloudWatch |
| Security audit | "Who did what", "API calls", "compliance" | CloudTrail |
| Responsible AI | "Fairness", "transparency", "bias", "governance" | SageMaker Clarify + Bedrock Guardrails |
| Private Bedrock access | "No internet", "VPC", "private connectivity" | VPC Endpoints (PrivateLink) |

🔑 5 Golden Rules for the AIF-C01 Exam

  1. "Foundation model" or "generative AI" → Bedrock. SageMaker is for custom ML model building only.

  2. "No ML expertise needed" → Managed AI Services (Comprehend, Rekognition, Textract, Polly, Transcribe, Translate, Personalize, Kendra). These are pre-trained and ready to use.

  3. "Company's private data" + FM → RAG (Bedrock Knowledge Bases). Never fine-tune just to give the FM access to company data.

  4. "Human review" → A2I. "Automated content filtering" → Guardrails. Don't confuse human review with automated moderation.

  5. Read the trigger words. AWS exam questions hide keywords that map directly to a specific service. Train yourself to spot: "sentiment" → Comprehend, "OCR" → Textract, "chatbot" → Lex, "recommendations" → Personalize.

Final Tip: When in doubt, ask yourself: "Is this about building ML models or using AI services?" Building = SageMaker. Using = Bedrock or AI Services. This single question eliminates 50% of wrong answers.
