What even is a neural network?
Think of it like this:
- A neuron is the basic unit. It takes inputs, multiplies them by weights, adds a bias, and passes the result through an activation function (like sigmoid or ReLU). (There's a quick code sketch of this below.)
- A neural network is just a bunch of these neurons organized into layers.
Neurons → Layers → Network
This structure lets us learn complex, non-linear relationships.
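Here's that single neuron from the first bullet as a minimal sketch in numpy. The numbers are made up just so it runs, and ReLU is picked as the activation for illustration:

```python
import numpy as np

def relu(z):
    # rectified linear unit: max(0, z)
    return np.maximum(0, z)

def neuron(x, weights, bias, activation=relu):
    # weighted sum of the inputs, plus the bias, squashed through the activation
    z = np.dot(weights, x) + bias
    return activation(z)

# one neuron with three inputs (arbitrary numbers, just to see it run)
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(neuron(x, w, bias=0.3))   # a single scalar activation
```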
Anatomy of a Neuron (a.k.a. Perceptron)
You’ve seen the diagram:
- Inputs: x₀ (the bias term), x₁, x₂, ..., xₙ
- Weights: θ₁, θ₂, ..., θₙ
- Output: hθ(x) = g(z), where z = θᵀx and g is the activation function (usually the sigmoid: 1 / (1 + e^(-z)))
That’s just logistic regression—but the building block of bigger things.
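To make that concrete, here's a sketch of the sigmoid neuron, with the bias handled the usual way by fixing x₀ = 1 (the weights here are arbitrary, not trained):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # hθ(x) = g(θᵀx); the bias is handled by fixing x₀ = 1
    return sigmoid(np.dot(theta, x))

x = np.array([1.0, 2.0, 3.0])        # x₀ = 1 is the bias input
theta = np.array([-1.5, 0.3, 0.8])   # arbitrary weights; the first pairs with x₀
print(h(theta, x))                   # an output squashed into (0, 1)
```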
Layers in a Neural Net
Andrew Ng breaks it down clean:
- Input layer: where your data enters the network
- Hidden layers: where the model learns internal features (this is where the magic happens)
- Output layer: final prediction, classification, etc.
Each layer passes outputs (activations) to the next.
Each connection has a weight, learned during training.
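The whole "layers pass activations forward" idea fits in a few lines. This is a sketch, not a trained model: random weights stand in for whatever training would actually learn, and biases are kept as separate vectors here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # each layer's output (its activations) becomes the next layer's input
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# toy network: 3 inputs -> 4 hidden units -> 1 output
# random weights stand in for learned ones
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]

print(forward(np.array([0.5, -1.0, 2.0]), weights, biases))
```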
Notation you’ll see over and over:
- a⁽¹⁾: activations in layer 1 (the input layer)
- Θ⁽¹⁾: weight matrix between layers 1 and 2
- a⁽²⁾: activations in layer 2 (the hidden layer), etc.
Basically: a⁽²⁾ = g(Θ⁽¹⁾a⁽¹⁾). Each layer's activations are just the previous layer's activations, multiplied by that layer's weight matrix and pushed through the activation function.
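One forward step in that exact notation might look like this (a sketch with placeholder random weights; the bias unit is the extra 1 stuck on the front of a⁽¹⁾, matching the x₀ convention above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# a⁽¹⁾: the input layer's activations, with the bias unit (a fixed 1) in front
a1 = np.array([1.0, 0.5, -1.0, 2.0])   # bias unit + 3 actual inputs

# Θ⁽¹⁾: weight matrix between layers 1 and 2 (4 hidden units x 4 incoming values)
Theta1 = np.random.default_rng(1).normal(size=(4, 4))   # placeholder weights

# a⁽²⁾ = g(Θ⁽¹⁾a⁽¹⁾): the hidden layer's activations
a2 = sigmoid(Theta1 @ a1)
print(a2)   # one activation per hidden unit
```

And that's forward propagation in a nutshell: repeat that step once per layer until you hit the output.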