🎯 When to Use Which AWS AI/ML Service
The AIF-C01 exam loves to test whether you can pick the right service for a given scenario. Many AWS AI/ML services sound similar, but each has a very specific purpose. This guide breaks down every confusing pair so you never pick the wrong answer on exam day.
How to Use This Page: Read each comparison carefully. Focus on the "Exam Decision Rule" at the bottom of each section — these are your quick decision shortcuts. The cheat sheet at the very end is your last-minute revision lifesaver.
1. Amazon SageMaker vs. Amazon Bedrock
The most confusing pair on the AIF-C01 exam. Both deal with AI/ML, but they serve completely different audiences.
| Feature | Amazon SageMaker | Amazon Bedrock |
|---|---|---|
| What It Is | End-to-end ML platform for building, training, and deploying custom models | Fully managed service for building apps with pre-built foundation models (FMs) |
| Target User | Data scientists, ML engineers (need ML expertise) | Application developers (no ML expertise needed) |
| Model Source | You build your own model from scratch or use JumpStart pre-trained models | You use existing foundation models (Claude, Titan, Llama, etc.) |
| Training | Yes — full control over training (data, algorithms, hyperparameters) | No training from scratch — use models as-is, or customize via fine-tuning/continued pre-training |
| Key Components | Studio, Canvas, Autopilot, Ground Truth, Clarify, Pipelines, Feature Store | Knowledge Bases, Agents, Guardrails, Prompt Management |
| Customization | Full control (custom algorithms, bring your own container) | Fine-tuning, continued pre-training, RAG, prompt engineering |
| Infrastructure | You choose instance types, manage endpoints | Fully managed (serverless API calls) |
| Use Cases | Fraud detection models, demand forecasting, custom NLP, image classification | Chatbots, content generation, summarization, Q&A from company docs |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Build a custom ML model", "train on company data", "data scientist" | SageMaker |
| "Use a foundation model", "generative AI app", "no ML expertise" | Bedrock |
| "Fine-tune a foundation model" | Bedrock (fine-tuning in Bedrock) |
| "Train a model from scratch with custom algorithm" | SageMaker |
| "Business users want to do ML without code" | SageMaker Canvas |
| "Build a chatbot using Claude/Titan/Llama" | Bedrock |
Exam Tip: Think of it this way — SageMaker = Build ML models, Bedrock = Use existing AI models. If the scenario mentions "foundation model", "generative AI", or "LLM", it's almost always Bedrock.
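The contrast shows up clearly in code: Bedrock is a single managed API call against a hosted foundation model, while SageMaker requires you to train and deploy a model to an endpoint you manage first. A minimal Python sketch of the Bedrock side — the helper name and parameter values are illustrative, not recommendations:

```python
import json

# Hypothetical helper: builds the JSON body for a Bedrock InvokeModel call
# using the Anthropic Messages request format.
def build_claude_body(prompt, max_tokens=256):
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials configured, the invocation itself is one call:
#   import boto3
#   runtime = boto3.client("bedrock-runtime")
#   resp = runtime.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=build_claude_body("Summarize our refund policy."),
#   )
# A SageMaker model, by contrast, must first be trained and deployed to an
# endpoint you manage, then invoked via sagemaker-runtime invoke_endpoint.
```

No training, no endpoint management — which is exactly why "no ML expertise needed" maps to Bedrock.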
2. Amazon Lex vs. Amazon Q vs. Amazon Connect
All three involve "conversational AI," but each has a very different scope.
| Feature | Amazon Lex | Amazon Q | Amazon Connect |
|---|---|---|---|
| What It Is | Build custom chatbots and voice bots (NLU engine) | AI assistant for business and developer tasks | Cloud-based contact center with AI features |
| Primary Purpose | Create conversational interfaces (intents, slots, fulfillment) | Answer questions from company data, generate code, analyze data | Handle phone calls, chats, and customer interactions |
| Target User | Developers building chatbot experiences | Business users (Q Business) and developers (Q Developer) | Contact center managers and agents |
| AI Capability | Natural Language Understanding (NLU) — understands user intent | Generative AI — answers questions, writes code, creates apps | AI-powered routing, sentiment analysis, transcription |
| Key Differentiator | Same technology that powers Alexa | Connects to 40+ enterprise data sources (S3, SharePoint, Slack, etc.) | Full contact center solution (phone, chat, routing, analytics) |
| Output | Intent recognition → trigger Lambda/action | Natural language answers from your company knowledge | Call routing, agent assistance, sentiment tracking |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Build a chatbot", "voice bot", "intents and slots", "like Alexa" | Amazon Lex |
| "AI assistant for employees", "answer questions from company docs" | Amazon Q Business |
| "AI assistant for developers", "code generation in IDE" | Amazon Q Developer |
| "Contact center", "call center", "customer service phone system" | Amazon Connect |
| "Chatbot integrated into a contact center" | Amazon Connect + Amazon Lex (they integrate together) |
Exam Tip: Lex = build chatbots from scratch. Q = ready-made AI assistant. Connect = phone/chat contact center. Don't confuse them!
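The "Intent recognition → trigger Lambda/action" flow in the Lex row can be sketched as a fulfillment Lambda: Lex resolves the intent and slots, then hands them to your code to act on. Field names follow the general Lex V2 event/response shape, but treat the details (and the simplified slot values) as a sketch, not the full contract:

```python
# Minimal sketch of a Lex V2 fulfillment Lambda handler.
def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}
    city = slots.get("City")  # slot resolved by Lex NLU (may be None)
    msg = f"Booked in {city}." if city else "Booking confirmed."
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},       # end the conversation
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": msg}],
    }
```

Lex does the NLU (intents and slots); your Lambda does the business logic — that split is what "build a chatbot from scratch" means in Lex scenarios.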
3. Amazon Comprehend vs. Amazon Textract vs. Amazon Kendra
All three deal with "understanding text," but they process very different types of input.
| Feature | Amazon Comprehend | Amazon Textract | Amazon Kendra |
|---|---|---|---|
| What It Is | NLP service — analyzes text for meaning | Document extraction — pulls text from images/PDFs | Enterprise search — finds answers across documents |
| Input | Plain text (already digitized) | Scanned documents, PDFs, images (physical or digital) | Documents stored in 40+ data sources |
| What It Does | Sentiment analysis, entity detection, key phrases, language detection, PII detection | Extracts text, forms, tables, signatures from documents | Answers natural language questions by searching your document library |
| Output | Entities, sentiment scores, key phrases, language codes | Structured data (key-value pairs, table rows, raw text) | Ranked search results with relevant document excerpts |
| ML Model | Pre-trained NLP models + custom entity recognition | Pre-trained document understanding models | ML-powered relevance ranking |
| Key Use Cases | Social media sentiment, customer feedback analysis, PII redaction | Invoice processing, form digitization, ID verification | "What is our refund policy?" — searches across all company docs |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Analyze sentiment", "detect entities", "key phrases", "PII in text" | Amazon Comprehend |
| "Extract text from scanned documents", "forms", "tables", "invoices", "OCR" | Amazon Textract |
| "Search across documents", "enterprise search", "natural language questions" | Amazon Kendra |
| "Medical text analysis" | Amazon Comprehend Medical |
| "Extract data from medical forms" | Amazon Textract |
Exam Tip: Comprehend = analyze text meaning. Textract = extract text from images/documents. Kendra = search and find answers across documents. The input format is key: if it's already text → Comprehend. If it's a scanned document → Textract. If it's "find information across many documents" → Kendra.
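To make the "already text → Comprehend" rule concrete, here is a sketch of working with a DetectSentiment result. The response shape (`Sentiment` plus `SentimentScore`) matches the real API, but the sample response below is illustrative data, not real output:

```python
# Hypothetical helper: pulls the label and top score out of a
# comprehend.detect_sentiment(...) response dict.
def top_sentiment(resp):
    return resp["Sentiment"], max(resp["SentimentScore"].values())

# Illustrative response — what a call like
#   boto3.client("comprehend").detect_sentiment(Text=review, LanguageCode="en")
# returns in shape, with made-up scores.
sample = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {"Positive": 0.97, "Negative": 0.01,
                       "Neutral": 0.01, "Mixed": 0.01},
}
label, score = top_sentiment(sample)
```

Note the input is plain text. If the review arrived as a scanned image, you would run Textract first to extract the text, then feed it to Comprehend.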
4. Amazon Polly vs. Amazon Transcribe vs. Amazon Translate
The "language trio" — each handles a different aspect of language processing.
| Feature | Amazon Polly | Amazon Transcribe | Amazon Translate |
|---|---|---|---|
| What It Is | Text → Speech (TTS) | Speech → Text (ASR) | Text → Text in another language |
| Direction | Written text ➜ Spoken audio | Spoken audio ➜ Written text | Text in Language A ➜ Text in Language B |
| Key Features | 60+ voices, neural voices, SSML, speech marks | Speaker diarization, custom vocabulary, toxicity detection | 75+ languages, custom terminology, formality control |
| Use Cases | Accessibility (screen readers), audiobooks, voice announcements | Meeting transcription, subtitles, call analytics | Multilingual apps, content localization, real-time chat translation |
| Exam Keyword | "Text-to-speech", "lifelike speech", "voice" | "Speech-to-text", "transcription", "subtitles" | "Translation", "multilingual", "localize" |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Convert text to speech", "read aloud", "voice output" | Amazon Polly |
| "Convert speech to text", "transcribe audio", "subtitles", "captions" | Amazon Transcribe |
| "Translate text", "multilingual", "localize content" | Amazon Translate |
| "Transcribe and translate a call" | Amazon Transcribe + Amazon Translate |
| "Medical transcription" | Amazon Transcribe Medical |
Exam Tip: Remember the flow: Polly = text ➜ speech, Transcribe = speech ➜ text, Translate = language A ➜ language B. Each is one-directional and purpose-built.
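The "transcribe and translate a call" combo from the table chains the two services: Transcribe produces text, and that text becomes the input to Translate. A sketch of the hand-off — `transcript_text` stands in for the transcript of a completed Transcribe job, and the parameter names match the TranslateText API:

```python
# Hypothetical helper: builds the parameters for a TranslateText call.
def build_translate_params(text, source="en", target="es"):
    return {
        "Text": text,
        "SourceLanguageCode": source,
        "TargetLanguageCode": target,
    }

transcript_text = "Thank you for calling."  # placeholder transcript
params = build_translate_params(transcript_text)
# With credentials configured:
#   translate = boto3.client("translate")
#   result = translate.translate_text(**params)  # TranslatedText in result
```

Each service stays one-directional; the pipeline direction (speech → text → other language) is what the combo question is really testing.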
5. Amazon Rekognition vs. Amazon Textract
Both work with images, but for completely different purposes.
| Feature | Amazon Rekognition | Amazon Textract |
|---|---|---|
| What It Is | Computer vision — identifies objects, faces, scenes in images/video | Document understanding — extracts text and structure from documents |
| Focus | What's in the image (objects, people, activities) | What text is on the image/document |
| Capabilities | Face detection/comparison, object & scene detection, content moderation, celebrity recognition, custom labels | Text extraction (OCR), form data (key-value pairs), table extraction, signature detection |
| Video Support | Yes (analyze video streams and stored video) | No (images and documents only) |
| Use Cases | Content moderation, face verification (security), people counting, PPE detection | Invoice processing, form digitization, ID extraction, receipt scanning |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Detect faces", "identify objects", "content moderation", "unsafe images" | Amazon Rekognition |
| "Extract text from a document", "OCR", "form fields", "tables", "invoices" | Amazon Textract |
| "Detect PPE in video", "people counting" | Amazon Rekognition |
| "Read handwriting from a scanned form" | Amazon Textract |
Exam Tip: Think of it this way — Rekognition asks "What is this image OF?" while Textract asks "What TEXT is on this page?"
6. Amazon Bedrock Knowledge Bases vs. Amazon Kendra (for RAG)
Both can power a RAG (Retrieval-Augmented Generation) workflow, but they're architecturally different.
| Feature | Bedrock Knowledge Bases | Amazon Kendra |
|---|---|---|
| What It Is | Managed RAG pipeline that connects FMs to your data | Enterprise search engine with ML-powered ranking |
| Primary Use | Provide context to foundation models for accurate generative responses | Return ranked search results from enterprise documents |
| Architecture | Data → Chunk → Embed → Vector DB → Retrieve → Augment FM prompt | Data → Index → ML ranking → Return search results |
| Output | FM-generated natural language answers (grounded in your data) | Ranked list of relevant document excerpts |
| Data Processing | Auto-chunks, embeds, and stores in vector database | Indexes documents with ML-powered relevance ranking |
| Connectors | S3, Web Crawler, Confluence, SharePoint, Salesforce | 40+ connectors (S3, SharePoint, RDS, Salesforce, ServiceNow, etc.) |
| Vector Database | OpenSearch Serverless, Aurora PostgreSQL, Pinecone, Redis, MongoDB Atlas | Built-in (no separate vector DB needed) |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "RAG with a foundation model", "ground FM responses in company data" | Bedrock Knowledge Bases |
| "Enterprise search", "search across documents", "find relevant documents" | Amazon Kendra |
| "Reduce hallucinations in generative AI responses" | Bedrock Knowledge Bases (RAG) |
| "Employees need to search internal wikis and documents" | Amazon Kendra |
| "Chatbot that answers from company docs using an FM" | Bedrock Knowledge Bases + Bedrock Agent |
Exam Tip: Knowledge Bases = RAG for generative AI (feed context to an FM). Kendra = search engine (return relevant documents). If the output needs to be generated text → Knowledge Bases. If the output is a list of documents → Kendra.
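The "Retrieve → Augment FM prompt" step in the architecture row can be shown with a toy sketch. Retrieval here is naive keyword overlap purely for illustration — a real Knowledge Base embeds chunks into a vector database and retrieves by semantic similarity:

```python
# Toy RAG flow: retrieve the most relevant chunks, then prepend them
# to the FM prompt as grounding context.
def retrieve(question, chunks, k=2):
    q_words = set(question.lower().split())
    def overlap(chunk):
        return len(q_words & set(chunk.lower().split()))
    return sorted(chunks, key=overlap, reverse=True)[:k]

def augment_prompt(question, chunks):
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are issued within 14 days of purchase.",
    "Our office is closed on public holidays.",
    "Shipping takes 3-5 business days.",
]
prompt = augment_prompt("How long do refunds take?", docs)
```

The output is a prompt for an FM to generate an answer from — that is Knowledge Bases territory. If the output were just the ranked `docs` themselves, it would be Kendra.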
7. SageMaker Canvas vs. SageMaker Autopilot vs. SageMaker Studio
These are all part of SageMaker, but target different user skill levels.
| Feature | SageMaker Canvas | SageMaker Autopilot | SageMaker Studio |
|---|---|---|---|
| Target User | Business analysts (zero ML knowledge) | Citizen data scientists (minimal ML knowledge) | Data scientists & ML engineers (full ML expertise) |
| Interface | Visual, no-code (point-and-click) | AutoML (automated model building) | Full IDE (notebooks, experiments, debugging) |
| ML Knowledge Needed | None | Minimal | Expert-level |
| What It Does | Upload CSV → get predictions (fully guided) | Upload data → auto-generates candidate models → selects best | Full ML lifecycle: explore data, build features, train, tune, deploy |
| Customization | Very limited (choose problem type, select columns) | Moderate (can inspect generated notebooks) | Full control (custom code, algorithms, frameworks) |
| Use Cases | Sales forecasting, churn prediction (by business teams) | Quick prototyping, baseline model creation | Custom deep learning, research, production ML pipelines |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Business user", "no code", "no ML experience", "visual interface" | SageMaker Canvas |
| "Automatically build the best model", "AutoML", "compare models" | SageMaker Autopilot |
| "Data scientist", "custom model", "Jupyter notebook", "full control" | SageMaker Studio |
Exam Tip: Canvas = no-code ML. Autopilot = AutoML. Studio = full IDE for experts. The key differentiator is the user's skill level.
8. Fine-Tuning vs. Continued Pre-Training vs. RAG vs. Prompt Engineering
Four ways to customize or improve an FM's output — the exam loves testing when to use each.
| Feature | Prompt Engineering | RAG | Fine-Tuning | Continued Pre-Training |
|---|---|---|---|---|
| What It Is | Crafting better prompts to guide the FM | Retrieving relevant data and adding it to the prompt as context | Training the FM on labeled examples to improve task performance | Training the FM on unlabeled domain text to teach new knowledge |
| Data Needed | No data needed | External knowledge base (documents, databases) | Labeled data (input → expected output pairs) | Unlabeled domain corpus (raw text) |
| Model Changes | Model is not modified | Model is not modified (only the prompt changes) | Model weights are modified (creates a new model version) | Model weights are modified (creates a new model version) |
| Cost | Lowest (no training cost) | Low-moderate (embedding, vector DB, retrieval costs) | High (training compute required) | Highest (large-scale training compute) |
| When to Use | First attempt; simple tasks; format control | FM needs access to current/private data | FM needs to improve at a specific task (tone, format, style) | FM needs to learn new domain vocabulary (medical, legal, financial) |
| Latency | No additional latency | Slight latency (retrieval step) | No additional latency (model is already trained) | No additional latency |
| Provides Current Data | No | Yes (retrieves latest data at query time) | No (frozen at training time) | No (frozen at training time) |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Improve output quality without training", "better prompts" | Prompt Engineering |
| "FM needs access to company/private/current data" | RAG |
| "Model should respond in a specific format/tone/style" | Fine-Tuning (with labeled examples) |
| "Model doesn't understand domain-specific terminology" | Continued Pre-Training (with unlabeled domain text) |
| "Reduce hallucinations" | RAG (ground responses in factual data) |
| "Company has labeled training data for a specific task" | Fine-Tuning |
| "No training data available" | Prompt Engineering or RAG |
Exam Tip: Try solutions in this order: Prompt Engineering (cheapest, fastest) → RAG (no training needed, accesses current data) → Fine-Tuning (needs labeled data, changes model) → Continued Pre-Training (needs large unlabeled corpus, most expensive).
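That escalation order can be memorized as a tiny decision helper. This is purely illustrative revision logic, not an AWS API:

```python
# Encodes the cheapest-first customization ladder:
# Prompt Engineering -> RAG -> Fine-Tuning -> Continued Pre-Training.
def customization_approach(needs_private_data, has_labeled_data,
                           has_domain_corpus):
    if not (needs_private_data or has_labeled_data or has_domain_corpus):
        return "Prompt Engineering"       # no data needed, cheapest
    if needs_private_data:
        return "RAG"                      # current/private data, no training
    if has_labeled_data:
        return "Fine-Tuning"              # task-specific labeled examples
    return "Continued Pre-Training"       # unlabeled domain corpus
```

Walking exam scenarios through this ladder in your head is usually enough to eliminate the distractors.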
9. Bedrock Guardrails vs. Amazon A2I (Human-in-the-Loop)
Both are about making AI safer and more responsible, but they work very differently.
| Feature | Bedrock Guardrails | Amazon A2I (Augmented AI) |
|---|---|---|
| What It Is | Automated filters that block harmful content in FM inputs/outputs | Human review workflows for ML predictions that need human verification |
| How It Works | Define policies (content filters, denied topics, PII filters, word filters) → auto-applied to every request | Set confidence thresholds → low-confidence predictions are sent to human reviewers |
| Review Type | Automated (no humans needed) | Human (real people reviewing predictions) |
| What It Blocks | Hate speech, violence, sexual content, PII, specific topics | N/A — it doesn't block, it routes to humans for review |
| Use Cases | Content moderation for chatbots, PII redaction, topic restrictions | Document review (Textract), image moderation (Rekognition), custom ML review |
| Integration | FM invocations, Agents, Knowledge Bases | Textract, Rekognition, custom ML models |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Filter harmful content", "block topics", "redact PII", "content moderation for FM" | Bedrock Guardrails |
| "Human review", "human-in-the-loop", "low-confidence predictions", "manual verification" | Amazon A2I |
| "Prevent the FM from discussing competitor products" | Bedrock Guardrails (denied topics) |
| "If AI confidence is below threshold, send to human reviewer" | Amazon A2I |
Exam Tip: Guardrails = automated safety filters. A2I = send to a human when AI is unsure. One is machine-driven, the other is human-driven.
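The A2I pattern is just threshold-based routing: predictions below a confidence threshold go to a human loop instead of being accepted automatically. A sketch — the threshold and prediction shape are illustrative:

```python
# A2I-style routing: confident predictions pass through, uncertain
# ones are escalated to human reviewers.
def route_prediction(prediction, threshold=0.8):
    if prediction["confidence"] >= threshold:
        return "auto-accept"
    return "human-review"  # A2I would start a human review loop here
```

Guardrails, by contrast, has no threshold-and-escalate step: its filters run automatically on every request with no humans involved.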
10. Amazon Personalize vs. Amazon Comprehend vs. Amazon Rekognition (for Recommendations)
The exam may present recommendation scenarios — Personalize is the only dedicated recommendation service.
| Feature | Amazon Personalize | Amazon Comprehend | Amazon Rekognition |
|---|---|---|---|
| What It Is | Build real-time personalization and recommendations | NLP service for text analysis | Computer vision for image/video analysis |
| Use Case | "Users who bought X also bought Y", personalized content feeds | Analyze text sentiment, detect entities | Detect objects, faces, scenes in images |
| Input | User behavior data (clicks, purchases, views) | Text data | Images and video |
| Output | Personalized recommendations, similar items, personalized rankings | Sentiment scores, entities, key phrases | Object labels, face data, moderation labels |
| ML Expertise Needed | No (managed ML pipeline) | No (pre-trained models) | No (pre-trained models) |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Product recommendations", "personalized content", "user behavior", "similar items" | Amazon Personalize |
| "Analyze customer reviews", "detect sentiment" | Amazon Comprehend |
| "Recommend products based on image similarity" | Amazon Rekognition Custom Labels (for visual similarity) |
Exam Tip: If the scenario involves user behavior → recommendations, it's always Personalize. No other AWS service does ML-powered personalization.
11. SageMaker Ground Truth vs. SageMaker Clarify vs. SageMaker Model Monitor
Three SageMaker features that sound similar but serve different stages of the ML lifecycle.
| Feature | Ground Truth | Clarify | Model Monitor |
|---|---|---|---|
| What It Is | Data labeling service for creating training datasets | Bias detection and explainability for ML models | Production monitoring for deployed ML models |
| When to Use | Before training (prepare labeled training data) | Before and after training (check for bias, explain predictions) | After deployment (detect model drift, data quality issues) |
| Input | Unlabeled data (images, text, video) | Training data or model predictions | Incoming inference data vs. baseline |
| Output | Labeled dataset ready for training | Bias reports, feature importance (SHAP values) | Alerts for data drift, model drift, bias drift |
| Key Feature | Human + automatic labeling (active learning) | Fairness metrics, feature attribution | Continuous monitoring, CloudWatch integration |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Label training data", "annotation", "human annotators" | SageMaker Ground Truth |
| "Detect bias", "explain predictions", "feature importance", "SHAP" | SageMaker Clarify |
| "Monitor model in production", "data drift", "model degradation" | SageMaker Model Monitor |
Exam Tip: Remember the ML lifecycle order: Ground Truth (label data) → Clarify (check bias) → Model Monitor (watch production). Each serves a different phase.
12. CloudWatch vs. CloudTrail vs. Bedrock Model Invocation Logging
Three ways to monitor Bedrock — each captures different information.
| Feature | CloudWatch | CloudTrail | Model Invocation Logging |
|---|---|---|---|
| What It Tracks | Performance metrics (latency, errors, throttling, invocation count) | API calls (who called what API, when, from where) | Full request/response payloads (actual prompts and model responses) |
| Focus | Operational health and performance | Security audit and governance | Content analysis and debugging |
| Use Case | "Is Bedrock performing well? Are there errors?" | "Who invoked this model? Was it authorized?" | "What prompts are users sending? What did the model respond?" |
| Alerting | Yes (CloudWatch Alarms) | Via EventBridge (events on API calls) | No (logging only, analyze in S3/CloudWatch Logs) |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Monitor Bedrock performance", "latency", "error rates" | CloudWatch |
| "Audit who used Bedrock", "API call history", "security investigation" | CloudTrail |
| "Capture full prompts and responses", "debug model outputs" | Model Invocation Logging |
Exam Tip: CloudWatch = how is it performing? CloudTrail = who did what? Model Invocation Logging = what was said?
13. Amazon Q Business vs. Amazon Q Developer
Same "Q" brand, very different purposes.
| Feature | Amazon Q Business | Amazon Q Developer |
|---|---|---|
| Target User | Business employees (HR, sales, operations, managers) | Software developers (engineers, DevOps) |
| What It Does | Answers business questions from company data (policies, docs, wikis) | Generates code, debugs, transforms code, explains code, optimizes AWS |
| Data Sources | 40+ enterprise connectors (S3, SharePoint, Confluence, Slack, Salesforce, Jira, etc.) | IDE context, codebase, AWS documentation |
| Where It Lives | Web app, Slack, Microsoft Teams | IDE (VS Code, JetBrains), AWS Console, CLI |
| Key Features | Natural language Q&A, document summarization, task automation (Q Apps) | Code generation, code review, security scanning, code transformation (.NET → Java) |
| Access Control | IAM Identity Center + document-level ACL (respects enterprise permissions) | IAM-based |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Employees need answers from company documents", "enterprise knowledge" | Amazon Q Business |
| "Generate code", "code review", "upgrade Java version", "IDE assistant" | Amazon Q Developer |
| "Create simple apps from natural language" | Amazon Q Apps (part of Q Business) |
| "Optimize AWS resources", "troubleshoot AWS errors" | Amazon Q Developer (in AWS Console) |
| "Natural language queries in QuickSight" | Amazon Q in QuickSight |
Exam Tip: Q Business = employee knowledge assistant. Q Developer = developer coding assistant. The user type determines which Q to choose.
14. Foundation Model Selection: When to Pick Which Model
The exam may ask you to choose the right foundation model in Bedrock.
| If the scenario needs... | Choose this FM | Why |
|---|---|---|
| Long document analysis (200K+ tokens) | Claude (Anthropic) | Largest context window, best at complex reasoning |
| Open-source / open-weight model | Llama (Meta) | Best-known open-weight family in Bedrock |
| Text-to-image generation (third-party) | Stable Diffusion | Leading image generation model |
| Text-to-image generation (Amazon) | Titan Image Generator or Nova Canvas | Amazon's own image models with watermarking |
| Video generation | Amazon Nova Reel | Amazon's text-to-video generation model |
| Lowest cost text generation | Amazon Nova Micro | Text-only, lowest latency, most cost-effective |
| Best balance of cost and quality | Amazon Nova Pro | Balanced across accuracy, speed, and cost |
| Embeddings for RAG / vector search | Amazon Titan Embeddings | Default embedding model for Knowledge Bases |
| Image + text understanding (multimodal) | Amazon Nova Lite or Claude | Multimodal input support |
Exam Tip: Know the model families: Claude = reasoning & safety, Llama = open-source, Titan/Nova = Amazon's models, Stable Diffusion = images. The exam tests model selection by use case, not model architecture.
15. Inference Options: On-Demand vs. Batch vs. Provisioned Throughput
| Feature | On-Demand | Batch Inference | Provisioned Throughput |
|---|---|---|---|
| When to Use | Variable, unpredictable workloads | Large-scale, non-time-sensitive processing | Consistent, high-volume production workloads |
| Latency | Real-time response | Hours (async processing) | Real-time (guaranteed low latency) |
| Cost | Pay per token (highest per-token cost) | Up to 50% cheaper than on-demand | Committed pricing (1 or 6 month terms) |
| Capacity | Shared (may be throttled) | Shared (batch queue) | Dedicated (guaranteed, no throttling) |
| Use Cases | Chatbots, APIs, interactive apps | Bulk document processing, dataset generation | Production apps with SLAs, custom models |
🧠 Exam Decision Rule
| If the question says... | Choose... |
|---|---|
| "Variable workload", "unpredictable usage" | On-Demand |
| "Process thousands of documents", "not time-sensitive", "cost savings" | Batch Inference |
| "Consistent latency", "no throttling", "production SLA", "custom model" | Provisioned Throughput |
📋 Ultimate Quick-Reference Cheat Sheet
Use this table for last-minute revision before the exam.
| If the exam asks about... | And mentions... | The answer is... |
|---|---|---|
| Building ML models | "Custom model", "data scientist", "training" | SageMaker |
| Using foundation models | "Generative AI", "chatbot", "Claude", "Titan" | Bedrock |
| No-code ML | "Business analyst", "visual", "no code" | SageMaker Canvas |
| AutoML | "Auto-select best model", "compare models" | SageMaker Autopilot |
| Chatbot building | "Intents", "slots", "voice bot", "Alexa" | Amazon Lex |
| AI assistant (business) | "Employee Q&A", "company docs", "enterprise search" | Amazon Q Business |
| AI assistant (developer) | "Code generation", "IDE", "code review" | Amazon Q Developer |
| Contact center | "Call center", "phone system", "customer calls" | Amazon Connect |
| Text analysis | "Sentiment", "entities", "PII", "key phrases" | Amazon Comprehend |
| Document extraction | "OCR", "invoices", "forms", "tables", "scanned" | Amazon Textract |
| Enterprise search | "Search across documents", "find information" | Amazon Kendra |
| Text to speech | "Read aloud", "lifelike voice", "TTS" | Amazon Polly |
| Speech to text | "Transcribe", "subtitles", "audio to text" | Amazon Transcribe |
| Translation | "Multilingual", "translate", "localize" | Amazon Translate |
| Image/video analysis | "Detect faces", "objects", "content moderation" | Amazon Rekognition |
| Recommendations | "Personalized", "user behavior", "similar items" | Amazon Personalize |
| Human review | "Human-in-the-loop", "low confidence threshold" | Amazon A2I |
| FM safety filters | "Block harmful content", "PII redaction", "denied topics" | Bedrock Guardrails |
| RAG pipeline | "Ground FM in company data", "reduce hallucinations" | Bedrock Knowledge Bases |
| FM takes actions | "Call APIs", "multi-step tasks", "orchestrate" | Bedrock Agents |
| Data labeling | "Label training data", "annotation" | SageMaker Ground Truth |
| Bias detection | "Fairness", "explainability", "SHAP values" | SageMaker Clarify |
| Model monitoring | "Production drift", "data quality", "model degradation" | SageMaker Model Monitor |
| FM customization (labeled data) | "Task-specific improvement", "format/style/tone" | Fine-Tuning |
| FM customization (unlabeled data) | "Domain knowledge", "medical/legal terminology" | Continued Pre-Training |
| FM customization (no data) | "Better prompts", "zero-shot", "few-shot" | Prompt Engineering |
| FM with current/private data | "Company data", "reduce hallucinations" | RAG |
| Performance monitoring | "Latency", "error rates", "metrics", "alarms" | CloudWatch |
| Security audit | "Who did what", "API calls", "compliance" | CloudTrail |
| Responsible AI | "Fairness", "transparency", "bias", "governance" | SageMaker Clarify + Bedrock Guardrails |
| Private Bedrock access | "No internet", "VPC", "private connectivity" | VPC Endpoints (PrivateLink) |
🔑 5 Golden Rules for the AIF-C01 Exam
"Foundation model" or "generative AI" → Bedrock. SageMaker is for custom ML model building only.
"No ML expertise needed" → Managed AI Services (Comprehend, Rekognition, Textract, Polly, Transcribe, Translate, Personalize, Kendra). These are pre-trained and ready to use.
"Company's private data" + FM → RAG (Bedrock Knowledge Bases). Never fine-tune just to give the FM access to company data.
"Human review" → A2I. "Automated content filtering" → Guardrails. Don't confuse human review with automated moderation.
Read the trigger words. AWS exam questions hide keywords that map directly to a specific service. Train yourself to spot: "sentiment" → Comprehend, "OCR" → Textract, "chatbot" → Lex, "recommendations" → Personalize.
Final Tip: When in doubt, ask yourself: "Is this about building ML models or using AI services?" Building = SageMaker. Using = Bedrock or AI Services. This single question eliminates 50% of wrong answers.