AI Fundamentals 101

A plain-language guide to core AI concepts — from machine learning to agents, written from a software engineer's perspective.

Why I Wrote This Series

When I started building AI-powered systems, I ran into a problem that no tutorial warned me about: I didn't actually understand the fundamentals.

I could call an OpenAI API and get a response. I could copy-paste a LangChain example and make a chatbot. But when something went wrong — when my RAG pipeline returned irrelevant results, when my agent got stuck in a loop, when my model's accuracy tanked on new data — I didn't have the foundational knowledge to diagnose the problem.

The issue was that most "AI fundamentals" content falls into one of two camps: academic papers full of math notation, or marketing fluff that tells you AI will change the world without explaining how it actually works. There was nothing in between for engineers who need to understand the concepts well enough to build real systems.

So I wrote this series. Every concept is explained through the lens of "why does this matter when you're building something?" I use Python examples to make abstract ideas concrete, and I reference my own projects — home lab monitoring, personal knowledge bases, DevOps automation — instead of made-up business scenarios.

This series is your map of the AI landscape before you start building.


How This Fits with Other Series

This series is one of several, each with a different focus:

  • AI Fundamentals 101 (this series) — the concepts: what everything is and how it fits together

  • Hands-on ML with scikit-learn — algorithms, evaluation, pipelines

  • The AI engineer role — tooling, LLMs, embeddings, production APIs

  • Deep learning from scratch — tensors, autograd, neural networks

  • Production LLM applications with Claude and FastAPI

  • End-to-end retrieval-augmented generation with pgvector

  • Building agents with ReAct loops, memory, and tool use

Read this series first if you're new to AI. It gives you the vocabulary and mental models that every other series assumes you have.


What You Will Learn

Part 1: What is Artificial Intelligence?

  • The definition of AI — and why most definitions are wrong

  • A brief history: symbolic AI → machine learning → deep learning → generative AI → agentic AI

  • The 7 types of AI: reactive, limited memory, theory of mind, self-aware, ANI, AGI, ASI

  • Key terminology every engineer needs: models, training, inference, parameters, weights

Part 2: Machine Learning, Deep Learning, and Foundation Models

  • Machine learning: learning from data instead of writing rules

  • Supervised, unsupervised, and reinforcement learning — with Python examples

  • Deep learning: neural networks and what makes them "deep"

  • Foundation models: pre-trained, general-purpose models that changed everything

  • Ten real-world ML use cases you interact with daily
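To give a taste of Part 2's core idea, "learning from data instead of writing rules," here it is stripped to its essence in plain Python, before any scikit-learn. Instead of hand-picking a spam threshold, we fit one from labelled examples. The feature and the numbers are invented for illustration:

```python
# Supervised learning in miniature: learn a decision threshold from
# labelled examples instead of hard-coding a rule by hand.

def fit_threshold(examples):
    """Pick the cut-off on a single feature that misclassifies the
    fewest training examples (label 1 means 'at or above threshold')."""
    candidates = sorted(x for x, _ in examples)
    best_t, best_errors = None, len(examples) + 1
    for t in candidates:
        errors = sum((x >= t) != bool(label) for x, label in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Feature: exclamation marks in an email; label: 1 = spam, 0 = not spam.
training_data = [(0, 0), (1, 0), (2, 0), (5, 1), (7, 1), (9, 1)]
threshold = fit_threshold(training_data)   # learned from data: 5

def predict(x):
    return int(x >= threshold)

print(predict(8))  # classifies a new, unseen email -> 1 (spam)
```

Real supervised learning (the scikit-learn kind covered in the hands-on series) does the same thing with richer models and many features, but the shape is identical: fit on labelled data, then predict on new inputs.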

Part 3: Natural Language Processing — NLP, NLU, and NLG

  • What NLP is and why it's the backbone of modern AI

  • NLP vs NLU vs NLG — the processing pipeline explained

  • Tokenization, stemming, and text preprocessing with Python

  • Named entity recognition, sentiment analysis, and text classification

  • From rule-based chatbots to LLM-powered assistants
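As a preview of Part 3, here are tokenization and stemming sketched in pure Python. Real projects use NLTK's or spaCy's tokenizers and a proper stemmer like Porter; this toy regex-and-suffix version only shows what those steps do to text:

```python
import re

def tokenize(text):
    """Lowercase and split into word-like chunks -- a toy stand-in
    for NLTK's or spaCy's tokenizers."""
    return re.findall(r"[a-z0-9']+", text.lower())

def stem(token):
    """Crude suffix-stripping stemmer. The real Porter stemmer has
    many more rules and produces better stems."""
    for suffix in ("ing", "ly", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = tokenize("The pipelines were running smoothly.")
print(tokens)                      # ['the', 'pipelines', 'were', 'running', 'smoothly']
print([stem(t) for t in tokens])   # ['the', 'pipeline', 'were', 'runn', 'smooth']
```

Note the imperfect stem "runn": even this tiny example shows why preprocessing choices matter downstream, which is exactly the kind of trade-off Part 3 digs into.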

Part 4: Large Language Models and Generative AI

  • What makes a language model "large" — parameters, data, and compute

  • How transformers work — attention is all you need, in plain language

  • Generative AI: text, image, code, and multimodal generation

  • The cost equation: why LLMs are expensive and what prompt caching solves

  • Limitations: hallucination, reasoning gaps, and the knowledge cutoff problem
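The cost equation from Part 4 boils down to simple arithmetic: tokens in plus tokens out, each priced per million. A rough sketch (the per-million prices below are placeholder numbers for illustration, not any provider's actual rates; check your provider's price sheet):

```python
# LLM usage is typically billed per million tokens, with output tokens
# priced higher than input tokens. Prices here are made-up placeholders.

def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m=3.00, output_price_per_m=15.00):
    return (input_tokens / 1_000_000 * input_price_per_m
            + output_tokens / 1_000_000 * output_price_per_m)

# A chatbot that re-sends a 2,000-token system prompt on every call:
per_call = estimate_cost(2_500, 500)
print(f"${per_call:.4f} per call, ${per_call * 10_000:.2f} per 10k calls")
# -> $0.0150 per call, $150.00 per 10k calls
```

That repeated system prompt is most of the input bill, which is the problem prompt caching addresses: providers can discount input tokens they have already processed.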

Part 5: RAG, Fine-Tuning, and Prompt Engineering

  • Three strategies for customizing AI: RAG, fine-tuning, and prompt engineering

  • When to use each and the trade-offs

  • RAG explained: retrieval + generation for grounded answers

  • Multimodal RAG: going beyond text

  • Practical comparison with Python examples
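To preview the "retrieval + generation" split, here is RAG's retrieval step in miniature, scoring documents by word overlap instead of embeddings. Real pipelines use an embedding model and a vector store such as pgvector; the documents and question below are invented for illustration:

```python
import re

# A tiny 'knowledge base' -- in a real system these would be chunks
# of your own documents stored alongside embedding vectors.
docs = [
    "Prometheus scrapes metrics from exporters over HTTP.",
    "pgvector adds vector similarity search to PostgreSQL.",
    "RAID 5 stripes data with distributed parity across disks.",
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    """Return the document sharing the most words with the question --
    a crude stand-in for vector similarity search."""
    return max(documents, key=lambda d: len(words(d) & words(question)))

question = "How does Prometheus collect metrics?"
context = retrieve(question, docs)

# The 'generation' half: ground the model's answer in what we retrieved.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Swap word overlap for embedding similarity and the list for a vector database, and this is structurally the same pipeline Part 5 (and the dedicated RAG series) builds out.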

Part 6: AI Agents and Communication Protocols

  • What makes an AI agent different from a plain LLM call

  • Agent architectures: ReAct, tool use, and planning loops

  • MCP vs API vs gRPC — how agents connect to tools and data

  • A2A vs MCP — agent-to-agent vs agent-to-tool communication

  • Human-in-the-loop: when AI should ask before acting
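The difference from a plain LLM call fits in a dozen lines: a plain call returns text once, while an agent loops through decide, act, observe until it is done. In this sketch a hard-coded `toy_policy` stands in for the model's reasoning; it shows the shape of a ReAct-style loop, not a production agent:

```python
# An agent = a loop around a model: decide an action, run a tool,
# feed the observation back, decide again.

def calculator(expression):
    # Toy tool; a real agent would validate input rather than eval it.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def toy_policy(question, observations):
    """Stand-in for an LLM call: pick the next step from what we know."""
    if not observations:
        return ("call", "calculator", "37 * 12")        # Thought -> Action
    return ("finish", f"The answer is {observations[-1]}.")

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):   # cap steps so the agent can't loop forever
        step = toy_policy(question, observations)
        if step[0] == "finish":
            return step[1]
        _, tool_name, tool_input = step
        observations.append(TOOLS[tool_name](tool_input))  # Observation
    return "Gave up after max_steps."

print(run_agent("What is 37 * 12?"))  # -> The answer is 444.
```

The `max_steps` cap is the crude version of the loop-prevention and human-in-the-loop checks Part 6 covers: without some brake, an agent that keeps choosing actions can run forever.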

Part 7: The AI Stack and Building Real AI Systems

  • The modern AI stack: hardware → models → frameworks → applications

  • Adding AI to existing applications — embedded AI patterns

  • Why most AI projects fail (and how to avoid the "AI graveyard")

  • NeuroSymbolic AI: combining neural networks with logical reasoning

  • Responsible AI: bias, fairness, transparency, and accountability


Prerequisites

  • Basic programming knowledge (Python preferred)

  • Curiosity about how AI works under the hood

  • No math background required — every concept is explained in plain language

If you've read Python 101, you're ready.


Stack

| Tool | Purpose |
| --- | --- |
| Python 3.12 | All code examples |
| scikit-learn | ML demonstrations |
| NLTK / spaCy | NLP examples |
| transformers | Hugging Face model examples |
| matplotlib / seaborn | Visualizations |


Series Structure

| Part | Title | Key Topics |
| --- | --- | --- |
| 1 | What is Artificial Intelligence? | History, types, terminology, landscape |
| 2 | Machine Learning, Deep Learning, and Foundation Models | ML types, neural networks, pre-trained models |
| 3 | Natural Language Processing — NLP, NLU, and NLG | Text processing, NER, sentiment, chatbots |
| 4 | Large Language Models and Generative AI | Transformers, tokens, Gen AI, cost, limits |
| 5 | RAG, Fine-Tuning, and Prompt Engineering | Three customization strategies compared |
| 6 | AI Agents and Communication Protocols | Agents, MCP, A2A, gRPC, human-in-the-loop |
| 7 | The AI Stack and Building Real AI Systems | AI stack, embedded AI, failure modes, ethics |


Reference

This series uses IBM Technology's AI Fundamentals playlist as a reference for topic coverage and structure, combined with hands-on experience from personal projects.


Let's start with the basics: Part 1 β€” What is Artificial Intelligence?
