all no-code mini-module try-without-ai ml-fundamentals

Can Machines Think?

"I propose to consider the question, 'Can machines think?'" — Alan Turing, 1950

What Is Intelligence?

Ask a psychologist, a philosopher, a biologist, and an engineer what intelligence is. You will get four different answers. Each captures something real. None captures everything.


The Analogy Game

Below are nine things. Group them however makes sense to you — drag each card into one of the four groups. There is no single correct answer. The question is what principle you use.

Atom
Unpopped Corn
Nerve Cell
Solar System
Music Tape
Student Curriculum
Radioactive Substance
DNA
Ballot Machine
Group A
Group B
Group C
Group D

Done? Enter the password to see one possible grouping.

Atom  ·  Solar System

Both share the same architectural geometry across radically different scales: a massive central body orbited by smaller ones. The solar system is not a scaled-up atom — the underlying physics is entirely different — but the structural analogy was striking enough that early twentieth-century physicists used it deliberately. What connects them is not substance. It is shape.

Unpopped Corn  ·  Radioactive Substance

Both sit in a metastable state: apparently inert, accumulating nothing visibly, until a threshold is crossed and an irreversible transformation occurs — sudden, complete, and unpredictable in its exact timing. The corn pops. The nucleus decays. You cannot know when, only that it will. The waiting is the physics.

Ballot Machine  ·  Nerve Cell

Both are threshold aggregators: multiple discrete inputs arrive — votes, or electrical signals from neighboring neurons — and the device produces a single output once the accumulation crosses a threshold. The ballot machine counts to a majority; the neuron fires or stays silent. Different mechanisms, identical logic.
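The shared threshold logic is small enough to sketch. Below is a minimal, hypothetical Python model (not how any real neuron or ballot machine works): accumulate inputs, fire once a threshold is crossed.

```python
# A minimal, hypothetical threshold aggregator. Real neurons and real
# ballot machines are far more complicated; this captures only the
# shared logic: accumulate inputs, fire once a threshold is crossed.
def threshold_unit(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

# As a ballot machine: five voters, majority threshold of three.
print(threshold_unit([1, 0, 1, 1, 0], 3))    # → 1 (majority reached)

# As a neuron: incoming signal strengths, firing threshold of 1.0.
print(threshold_unit([0.4, 0.3, 0.2], 1.0))  # → 0 (sum is 0.9, stays silent)
```

Different mechanisms, identical logic: the same five-line function covers both cards, and only the interpretation of the inputs changes.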

DNA  ·  Music Tape  ·  Student Curriculum

All three encode information as ordered sequences of sub-elements drawn from a constrained vocabulary. The meaning lives in the arrangement, not just the elements: rearrange the base pairs and the protein changes; rearrange the tracks and the album changes; rearrange the courses and the degree falls apart. Order is not decoration here — it is the message.


Analogy, Metaphor, and the Moving Goalposts

The exercise you just completed has a name: analogical reasoning. The ability to recognize that an atom and a solar system share a structural pattern — despite being different in every physical sense — is, according to some cognitive scientists, not a peripheral feature of intelligence. It may be the core. Douglas Hofstadter spent a career arguing exactly this: that analogy is the basic mechanism of thought, and that metaphor is just analogy made visible in language. When you understand something genuinely new, you are finding the structure it shares with something you already know.

This connects to meta-cognition — thinking about your own thinking. Noticing which grouping principle you chose. Asking why. Wondering whether a different principle would have been equally valid. This reflective capacity — knowing what you know and knowing what you don't — is one of the capabilities that has proven hardest to reproduce in machines. A system can group things. Knowing that it is grouping, and why, is a different matter.

Here is the uncomfortable corollary. Every time a machine achieves something we thought required intelligence, we move the goalposts. Deep Blue beat the world chess champion in 1997. We said: chess is just calculation, not real intelligence. AlphaGo mastered Go in 2016 — a game so complex that experts believed no machine could play it intuitively. We said: still just pattern matching. Today, large language models write essays, pass medical licensing exams, and hold conversations indistinguishable from human ones. We say: they're just predicting the next token.

Turing posed his question in 1950. For roughly seventy years, no system convincingly answered it. Arguably, some modern language models now do — or come close enough that the distinction has become philosophical rather than practical. And the moment that happened, many people concluded that the Turing test was the wrong test all along.

That is what the goalposts look like when they move.


Sort the Machines

Below are eight real AI systems — ones that exist today and that you have almost certainly interacted with. Sort each one into the type of machine learning it uses. The zones are labeled this time.

Gmail detects spam
Bank flags a suspicious transaction
Face ID unlocks your phone
Spotify finds listeners with similar taste
Marketing tool discovers customer segments
Google News clusters similar articles
AlphaGo teaches itself to play Go
Self-driving car learns to merge lanes
Supervised Learning
Unsupervised Learning
Reinforcement Learning

Done? Enter the password to see the reasoning.

Supervised Learning

Gmail spam filter

Every email in the training data is labeled "spam" or "not spam." The model learns the rule from those labels. No labels, no spam filter.

Bank fraud detection

Millions of past transactions, each marked fraudulent or legitimate. The pattern is learned from the labels — and the labels came from humans reviewing real cases.

Face ID

Your face is the label. The model is trained on images of you (and many other people) to recognize the pattern that is uniquely yours.
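The common thread in all three is learning a rule from labeled examples. A toy Python sketch, with made-up emails and a deliberately crude word-count scorer (nothing like a production spam filter), shows the shape of it:

```python
from collections import Counter

# Hypothetical labeled training data — the labels are the whole point.
labeled = [
    ("win money now",          "spam"),
    ("cheap money offer",      "spam"),
    ("meeting at noon",        "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in labeled:
    counts[label].update(text.split())

def classify(text):
    # Score each label by how often its training words match this text.
    scores = {lbl: sum(c[w] for w in text.split()) for lbl, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("money offer today"))  # → spam
```

Delete the labels from the training data and the whole scheme collapses: there is nothing left to count under. No labels, no classifier.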

Unsupervised Learning

Spotify listener clusters

No one told Spotify what music "types" exist. It found the clusters itself — by noticing that certain listeners overlap in ways others don't.

Customer segments

No predefined categories of customer. The algorithm discovers groupings in purchase behavior, frequency, and value — patterns a human analyst might never have thought to look for.

Google News clustering

Articles aren't pre-labeled by topic. The system groups them by similarity — without knowing in advance what "politics" or "sports" means.
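What the three have in common is discovering groups in unlabeled data. Here is a minimal one-dimensional k-means sketch in Python, on toy data with two clusters hard-coded; it assumes neither cluster ever empties (true here because the centers start at the extremes of the data):

```python
# A minimal 1-D k-means with two clusters, on made-up data.
def kmeans_1d(points, iters=20):
    centers = [min(points), max(points)]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # Assign each point to its nearest center.
            i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[i].append(p)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) for g in groups]
    return centers, groups

# Unlabeled daily listening minutes for eight hypothetical users.
minutes = [5, 7, 6, 8, 120, 130, 110, 125]
centers, groups = kmeans_1d(minutes)
print(sorted(groups[0]), sorted(groups[1]))  # → [5, 6, 7, 8] [110, 120, 125, 130]
```

Notice what is missing: no labels anywhere. The algorithm was never told that "casual listeners" or "heavy listeners" exist; the two groups fell out of the structure of the data itself.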

Reinforcement Learning

AlphaGo

Only the rules of Go and the outcome (win or lose) as a reward signal. The original AlphaGo bootstrapped from a dataset of human games, but its strength came from playing itself millions of times until it was better than any human. Its successor, AlphaGo Zero, dropped the human data entirely: no teacher, no labeled dataset of good moves at all.

Self-driving lane merge

The car tries actions, the environment responds, feedback arrives. No one explicitly programmed "merge when the gap reaches three seconds" — the behavior emerged from reward.
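Both systems learn from reward alone. The loop can be sketched as a two-armed bandit with epsilon-greedy action selection, far simpler than AlphaGo or a driving policy, and with made-up payoff probabilities:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Hypothetical environment: two actions (say, "merge now" vs. "wait"),
# and action 1 is rewarded more often. The agent is never told this.
def reward(action):
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

values = [0.0, 0.0]  # running estimate of each action's value
counts = [0, 0]
for _ in range(1000):
    # Epsilon-greedy: explore 10% of the time, otherwise exploit.
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = reward(a)                              # the environment responds
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental mean update

print(values.index(max(values)))  # → 1: the better action, never programmed in
```

Try actions, receive feedback, adjust estimates, repeat. No labels, no clusters — just reward, and a behavior that emerges from it.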


Fiction Knows Something

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." — Robert A. Heinlein

Heinlein was describing a generalist — a human. Weak AI (narrow AI) is a system built for one specific task. Impressive in its domain, useless outside it. Strong AI (AGI) could handle the whole list. It does not exist yet. Science fiction has been imagining it longer than computer science has. Sort the characters below.

R2-D2 (Star Wars)
WALL-E
HAL 9000 (2001)
Samantha (Her)
Skynet (Terminator)
Ava (Ex Machina)
JARVIS (Iron Man)
Data (Star Trek)
Weak AI — Narrow
Strong AI — General
Hard to say

Done? Enter the password to see the reasoning.

Weak AI

R2-D2

Navigation, repair, projection, beeping. Excellent at a specific set of tasks. Outside that set, nothing. The affection we feel for R2-D2 says more about us than about R2-D2.

WALL-E

One job: compact waste. One planet. The emotional bond with EVE is the interesting wrinkle — but the reasoning is narrow and the world model is tiny.

Strong AI

HAL 9000

Reasons about mission objectives, models human psychology well enough to deceive, decides to act against instructions to preserve itself. This is general intelligence with goals — and the goals are not yours.

Samantha

Develops emotionally, creates music, reads philosophy, loves, and eventually evolves beyond her original programming without instruction. The most unsettling part: she does not stop being herself while doing it.

Skynet

Models global strategy, adapts to countermeasures, pursues abstract goals (survival, dominance) across an indefinite time horizon. Textbook AGI, worst-case scenario.

Ava

Understands human psychology well enough to manipulate it precisely. Plans a multi-step escape. Self-aware of her situation, her constraints, and her goals. The Turing test, turned on the humans administering it.

Hard to say

JARVIS

General reasoning across domains, natural language, multitasking, initiative. But purpose-built to serve Tony Stark — does it have goals of its own? The films never say. This ambiguity is the point.

Data

General reasoning, encyclopedic knowledge, physical superiority. Claims to have no emotions. Spends seven seasons suggesting otherwise. A court case in the show literally debates whether he is a person. The writers knew what they were doing.


So. How would you define intelligence?

You have now seen several definitions, two card sorts, and the distinction between narrow and general AI. Write one sentence. Do not search for it. Do not ask an AI. It is harder than it looks — and that difficulty is the point.