Gradland
Interactive Guide — Based on Official Anthropic Courses

Master Claude Code

30 hands-on lessons across 5 levels — from how AI works to building team-scale AI infrastructure. Sourced from Anthropic's official training courses. Tick each lesson as you complete it.

🌱
Foundation
Understand how AI actually works, learn the 4D fluency framework, install Claude Code, and run your first real session.

Claude is a large language model (LLM) — a neural network trained on vast amounts of text. Given any input, it predicts a plausible, useful continuation. That makes it brilliant at language, reasoning, and code — and it is also why it can sometimes be confidently wrong.
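To make "predicts the continuation" concrete, here is a deliberately tiny sketch: a word-bigram model that continues text with whichever word most often followed the current one in its training data. Real LLMs like Claude use deep neural networks over subword tokens, not word counts — this toy only illustrates the prediction idea.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat  ("cat" followed "the" twice, "mat" once)
```

Notice that the model has no notion of truth — only of what continuation is statistically likely. That is the root of both its fluency and its occasional confident errors.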

What Claude does well:

• Reading, understanding, and summarising long documents

• Writing and editing code in dozens of languages

• Explaining complex technical concepts in plain English

• Reasoning through multi-step problems step by step

• Adapting its tone — technical for engineers, plain English for a CEO

What Claude does not do:

• Browse the internet in real time (unless given a tool for it)

• Remember previous conversations (each session starts fresh by default)

• Guarantee factual accuracy — always verify important claims

• Know about events after its training cutoff date

Context window — Claude reads everything you put in a session as one big document. The context window is the maximum size of that document. Claude Code uses Claude's full 200,000-token window (≈150,000 words). When it fills up, old content is automatically compressed.

Claude Code is different from Claude.ai: Claude.ai is a chatbot. Claude Code is an *agent* — it reads your actual files, runs commands, makes edits, and loops until the task is done.
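The "loops until the task is done" part is the essence of an agent. The sketch below shows that loop shape in miniature — a model repeatedly chooses a tool, the harness runs it and feeds the result back, until the model declares it is finished. All names here (`agent_loop`, `decide`, `read_file`) are hypothetical illustrations, not Claude Code's actual implementation or API.

```python
def agent_loop(task, decide, tools, max_steps=10):
    """Toy agent loop: the model picks a tool, we run it, append the
    result to the transcript, and repeat until the model says "done"."""
    transcript = [f"task: {task}"]
    for _ in range(max_steps):
        action, arg = decide(transcript)   # model chooses the next step
        if action == "done":
            return arg
        result = tools[action](arg)        # run the tool (read file, bash, edit...)
        transcript.append(f"{action}({arg!r}) -> {result!r}")
    return None  # gave up after max_steps

# Toy "model": read the file once, then declare the task finished.
def decide(transcript):
    if len(transcript) == 1:
        return "read_file", "notes.txt"
    return "done", "summary written"

tools = {"read_file": lambda path: f"<contents of {path}>"}
print(agent_loop("summarise notes.txt", decide, tools))  # → summary written
```

A chatbot answers once and stops; the loop above is what lets an agent observe the result of each action and decide what to do next.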

Context window — what counts toward it
Everything in one session counts toward the 200K-token limit:

  Your messages          (cheapest)
  Claude's responses     (moderate)
  File contents read     (can be large)
  Tool call results      (bash output, search results)
  CLAUDE.md content      (read at startup every session)

Rule of thumb:
  1 token ≈ 4 characters ≈ 0.75 words
  200K tokens ≈ 150,000 words ≈ 500 pages of text
  A typical mid-sized codebase ≈ 50K–200K tokens
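The rule of thumb above is easy to turn into a quick budgeting helper. This is only the ≈4-characters-per-token heuristic, not a real tokenizer, so treat its numbers as rough estimates:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, limit: int = 200_000) -> bool:
    """Check whether text likely fits in a 200K-token context window."""
    return estimate_tokens(text) <= limit

sample = "Claude reads everything in the session."   # 39 characters
print(estimate_tokens(sample))                       # → 9
print(fits_in_context(sample))                       # → True
```

Running `estimate_tokens` over a whole repository before a session is a cheap way to predict whether Claude Code will need to compress context partway through.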
🎯 Exercise

Go to claude.ai and ask: "Explain how you work — what is a transformer, what is a token, and what are your limitations?" Compare its self-description to what you just read.

💡 Pro tip

Claude is not a search engine. It generates plausible text, not ground truth. Treat its output like advice from a brilliant but occasionally overconfident colleague — useful, but always verify critical facts.

Official docs ↗