# AI Fundamentals 101

**A plain-language guide to core AI concepts — from machine learning to agents, written from a software engineer's perspective.**

## Why I Wrote This Series

When I started building AI-powered systems, I ran into a problem that no tutorial warned me about: **I didn't actually understand the fundamentals.**

I could call an OpenAI API and get a response. I could copy-paste a LangChain example and make a chatbot. But when something went wrong — when my RAG pipeline returned irrelevant results, when my agent got stuck in a loop, when my model's accuracy tanked on new data — I didn't have the foundational knowledge to diagnose the problem.

The issue was that most "AI fundamentals" content falls into two camps: either it's academic papers full of math notation, or it's marketing fluff that tells you AI will change the world without explaining *how* it actually works. There was nothing in between for engineers who need to understand the concepts well enough to build real systems.

So I wrote this series. Every concept is explained through the lens of "why does this matter when you're building something?" I use Python examples to make abstract ideas concrete, and I reference my own projects — home lab monitoring, personal knowledge bases, DevOps automation — instead of made-up business scenarios.

This series is your map of the AI landscape before you start building.

***

## How This Fits with Other Series

| Series                                                                                                                            | Focus                                                             |
| --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| **AI Fundamentals 101 (this)**                                                                                                    | The concepts — what everything is and how it fits together        |
| [Machine Learning 101](https://blog.htunnthuthu.com/ai-and-machine-learning/machine-learning-101)                                 | Hands-on ML with scikit-learn — algorithms, evaluation, pipelines |
| [AI Engineer 101](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-engineer-101)                   | The AI engineer role — tooling, LLMs, embeddings, production APIs |
| [PyTorch 101](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/pytorch-101)                           | Deep learning from scratch — tensors, autograd, neural networks   |
| [LLM API Development 101](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/llm-api-development-101)   | Production LLM applications with Claude and FastAPI               |
| [RAG 101](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/rag-101)                                   | End-to-end retrieval-augmented generation with pgvector           |
| [AI Agent Development 101](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-agent-development-101) | Building agents with ReAct loops, memory, and tool use            |

**Read this series first** if you're new to AI. It gives you the vocabulary and mental models that every other series assumes you have.

***

## What You Will Learn

### Part 1: What is Artificial Intelligence?

* The definition of AI — and why most definitions are wrong
* A brief history: symbolic AI → machine learning → deep learning → generative AI → agentic AI
* The 7 types of AI — four by capability (reactive, limited memory, theory of mind, self-aware) and three by scope (ANI, AGI, ASI)
* Key terminology every engineer needs: models, training, inference, parameters, weights
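To make those terms concrete before you dive in, here's a minimal sketch with scikit-learn (the numbers are made up) showing where training, inference, parameters, and weights actually live in code:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# A "model" is just a function with learnable parameters.
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # feature (made-up values)
y = np.array([10.0, 20.0, 30.0, 40.0])      # target

model = LinearRegression()
model.fit(X, y)  # "training": learning parameters from data

# "Parameters" / "weights": the numbers the model learned
print(model.coef_, model.intercept_)  # weight ≈ 10, bias ≈ 0

# "Inference": running the trained model on new input
print(model.predict([[5.0]]))  # ≈ [50.]
```

Every model in this series — from linear regression to a billion-parameter LLM — follows this same train-then-infer pattern; only the scale changes.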

### Part 2: Machine Learning, Deep Learning, and Foundation Models

* Machine learning: learning from data instead of writing rules
* Supervised, unsupervised, and reinforcement learning — with Python examples
* Deep learning: neural networks and what makes them "deep"
* Foundation models: pre-trained, general-purpose models that changed everything
* Ten real-world ML use cases you interact with daily
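As a quick preview of the supervised/unsupervised distinction, here's a minimal unsupervised-learning sketch with scikit-learn — no labels at all, the algorithm finds the structure on its own (the points are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

# Unsupervised learning: no labels — just data with hidden structure.
# Two obvious groups of points (made-up coordinates).
points = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                   [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(points)
print(labels)  # each point assigned to one of two clusters
```

Compare that with the supervised case, where you would also hand the algorithm the "right answers" to learn from — that one-line difference is the heart of Part 2.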

### Part 3: Natural Language Processing — NLP, NLU, and NLG

* What NLP is and why it's the backbone of modern AI
* NLP vs NLU vs NLG — the processing pipeline explained
* Tokenization, stemming, and text preprocessing with Python
* Named entity recognition, sentiment analysis, and text classification
* From rule-based chatbots to LLM-powered assistants
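Tokenization is less mysterious than it sounds. Here's a crude sketch in plain Python — the series itself uses NLTK and spaCy for the real thing, but the core idea fits in two lines:

```python
import re

# A deliberately crude tokenizer: lowercase the text, then grab runs
# of letters and digits, discarding punctuation and whitespace.
text = "RAG pipelines retrieve documents, then generate answers."
tokens = re.findall(r"[a-z0-9]+", text.lower())
print(tokens)
# ['rag', 'pipelines', 'retrieve', 'documents', 'then', 'generate', 'answers']
```

Real tokenizers handle contractions, Unicode, and subwords far better, but every NLP pipeline starts with this same move: turning a string into a list of units a model can count or embed.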

### Part 4: Large Language Models and Generative AI

* What makes a language model "large" — parameters, data, and compute
* How transformers work — attention is all you need, in plain language
* Generative AI: text, image, code, and multimodal generation
* The cost equation: why LLMs are expensive and what prompt caching solves
* Limitations: hallucination, reasoning gaps, and the knowledge cutoff problem
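The cost equation turns out to be simple arithmetic once you know your token counts. A back-of-envelope sketch — the per-token prices below are placeholder assumptions for illustration, not any provider's real rates:

```python
# Hypothetical prices — check your provider's current rate card.
input_price_per_mtok = 3.00    # $ per million input tokens (assumed)
output_price_per_mtok = 15.00  # $ per million output tokens (assumed)

requests_per_day = 10_000
input_tokens = 1_500   # prompt + retrieved context per request
output_tokens = 300    # typical response length

daily_cost = requests_per_day * (
    input_tokens / 1e6 * input_price_per_mtok
    + output_tokens / 1e6 * output_price_per_mtok
)
print(f"${daily_cost:.2f} per day")  # $90.00 per day at these rates
```

Notice that the repeated input tokens dominate as prompts grow — which is exactly the waste that prompt caching targets.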

### Part 5: RAG, Fine-Tuning, and Prompt Engineering

* Three strategies for customizing AI: RAG, fine-tuning, and prompt engineering
* When to use each and the trade-offs
* RAG explained: retrieval + generation for grounded answers
* Multimodal RAG: going beyond text
* Practical comparison with Python examples
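The "retrieval" half of RAG can be sketched without embeddings at all — here with TF-IDF and cosine similarity from scikit-learn (the documents are invented home-lab examples):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The "R" in RAG: score each document against the query,
# then hand the best match to the LLM as grounding context.
docs = [
    "Prometheus scrapes metrics from the home lab nodes.",
    "The backup job syncs the knowledge base to object storage.",
    "Grafana dashboards visualize node temperatures.",
]
query = ["Which tool collects metrics from my servers?"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)
query_vec = vectorizer.transform(query)

scores = cosine_similarity(query_vec, doc_vecs)[0]
best = scores.argmax()
print(docs[best])  # the retrieved context for the LLM
```

Production RAG swaps TF-IDF for dense embeddings and a vector store, but the shape — score, rank, retrieve, then generate — is identical.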

### Part 6: AI Agents and Communication Protocols

* What makes an AI agent different from a plain LLM call
* Agent architectures: ReAct, tool use, and planning loops
* MCP vs API vs gRPC — how agents connect to tools and data
* A2A vs MCP — agent-to-agent vs agent-to-tool communication
* Human-in-the-loop: when AI should ask before acting
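To see the shape of an agent loop with no LLM in the picture, here's a toy sketch where a hard-coded rule stands in for the model's reasoning — the tools and the 90% threshold are entirely hypothetical:

```python
# A toy "agent" loop: act → observe → reason → act. A real agent lets
# an LLM choose the tool and interpret the observation; here a
# hard-coded rule plays that role.

def check_disk() -> str:
    """Hypothetical tool: report disk usage."""
    return "disk usage: 91%"

def send_alert(message: str) -> str:
    """Hypothetical tool: notify a human."""
    return f"alert sent: {message}"

def toy_agent() -> str:
    observation = check_disk()                        # act: call a tool
    usage = int(observation.rstrip("%").split()[-1])  # observe the result
    if usage > 90:                                    # "reason" (hard-coded)
        return send_alert(observation)                # act on the decision
    return "all good"

print(toy_agent())  # alert sent: disk usage: 91%
```

Swap the `if` statement for an LLM deciding which tool to call next, add memory of past steps, and you have the ReAct loop that Part 6 builds up properly.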

### Part 7: The AI Stack and Building Real AI Systems

* The modern AI stack: hardware → models → frameworks → applications
* Adding AI to existing applications — embedded AI patterns
* Why most AI projects fail (and how to avoid the "AI graveyard")
* NeuroSymbolic AI: combining neural networks with logical reasoning
* Responsible AI: bias, fairness, transparency, and accountability

***

## Prerequisites

* Basic programming knowledge (Python preferred)
* Curiosity about how AI works under the hood
* No math background required — every concept is explained in plain language

If you've read [Python 101](https://blog.htunnthuthu.com/getting-started/programming/python-101), you're ready.

***

## Stack

| Tool                 | Purpose                     |
| -------------------- | --------------------------- |
| Python 3.12          | All code examples           |
| scikit-learn         | ML demonstrations           |
| NLTK / spaCy         | NLP examples                |
| transformers         | Hugging Face model examples |
| matplotlib / seaborn | Visualizations              |

```
# requirements.txt
scikit-learn>=1.5
nltk>=3.9
spacy>=3.7
transformers>=4.40
matplotlib>=3.9
seaborn>=0.13
numpy>=2.0
pandas>=2.2
```

***

## Series Structure

| Part                                                                                                                                                | Title                                                  | Key Topics                                    |
| --------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ | --------------------------------------------- |
| [Part 1](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-1-what-is-artificial-intelligence)   | What is Artificial Intelligence?                       | History, types, terminology, landscape        |
| [Part 2](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-2-ml-dl-foundation-models)           | Machine Learning, Deep Learning, and Foundation Models | ML types, neural networks, pre-trained models |
| [Part 3](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-3-nlp-nlu-nlg)                       | Natural Language Processing — NLP, NLU, and NLG        | Text processing, NER, sentiment, chatbots     |
| [Part 4](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-4-llms-and-generative-ai)            | Large Language Models and Generative AI                | Transformers, tokens, Gen AI, cost, limits    |
| [Part 5](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-5-rag-finetuning-prompt-engineering) | RAG, Fine-Tuning, and Prompt Engineering               | Three customization strategies compared       |
| [Part 6](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-6-ai-agents-and-protocols)           | AI Agents and Communication Protocols                  | Agents, MCP, A2A, gRPC, human-in-the-loop     |
| [Part 7](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-7-ai-stack-and-building-systems)     | The AI Stack and Building Real AI Systems              | AI stack, embedded AI, failure modes, ethics  |

***

## Reference

This series uses IBM Technology's [AI Fundamentals playlist](https://www.youtube.com/playlist?list=PLOspHqNVtKADfxkuDuHduUkDExBpEt3DF) as a reference for topic coverage and structure, combined with hands-on experience from personal projects.

***

*Let's start with the basics:* [*Part 1 — What is Artificial Intelligence?*](https://blog.htunnthuthu.com/ai-and-machine-learning/artificial-intelligence/ai-fundamentals-101/part-1-what-is-artificial-intelligence)
