Building an AND Gate with a Single Neuron
One of the simplest and most powerful ideas in neural networks is that a single neuron can be trained to perform logical operations. In this post, we'll walk through how a single artificial neuron can model the AND gate—a foundational building block of digital logic.
Understanding the AND Gate
The AND operator returns 1 only when both inputs are 1; otherwise, it returns 0. Here's the truth table:
| $x_1$ | $x_2$ | $x_1$ AND $x_2$ |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
Our goal is to build a neuron that learns to replicate this logic.
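It will help to have this target behavior written down as data so we can check against it later. Here's a minimal Python encoding (the name `AND_TABLE` is ours, used only for the checks later in this post):

```python
# The AND truth table as (x1, x2) -> target output.
AND_TABLE = {
    (0, 0): 0,
    (0, 1): 0,
    (1, 0): 0,
    (1, 1): 1,
}
```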
The Neuron: Architecture and Activation
A neuron, also called a perceptron, takes input values ($x_1$, $x_2$), computes a weighted sum, adds a bias term, and passes the result through an activation function.
We add a bias input $x_0 = 1$ so the neuron has more flexibility. The neuron's hypothesis function is:

$$h_\theta(x) = g(\theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2)$$

Where:
- $x_0 = 1$ is the bias.
- $\theta_0$, $\theta_1$, $\theta_2$ are the neuron's weights.
- $g(z)$ is the activation function.
For this example, we'll use the sigmoid function:

$$g(z) = \frac{1}{1 + e^{-z}}$$

This function smoothly maps any real input to a value between 0 and 1.
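The hypothesis and activation above translate directly into a few lines of Python. This is just a sketch; the names `sigmoid` and `neuron` are our own, not from any library:

```python
import math

def sigmoid(z):
    """Sigmoid activation: maps any real z into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x1, x2, theta0, theta1, theta2):
    """Single neuron: weighted sum with bias input x0 = 1, then sigmoid."""
    z = theta0 * 1 + theta1 * x1 + theta2 * x2
    return sigmoid(z)
```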
Choosing the Right Weights
The behavior of our neuron depends on the weights we assign. We'll use:

$$\theta_0 = -30, \quad \theta_1 = 20, \quad \theta_2 = 20$$

Let's test these weights with each possible input combination.
Case 1: $x_1 = 0$, $x_2 = 0$

$z = -30 + 20(0) + 20(0) = -30$, so $g(-30) \approx 0$.

Case 2: $x_1 = 0$, $x_2 = 1$

$z = -30 + 20(0) + 20(1) = -10$, so $g(-10) \approx 0$.

Case 3: $x_1 = 1$, $x_2 = 0$

$z = -30 + 20(1) + 20(0) = -10$, so $g(-10) \approx 0$.

Case 4: $x_1 = 1$, $x_2 = 1$

$z = -30 + 20(1) + 20(1) = 10$, so $g(10) \approx 1$.
These results perfectly replicate the AND gate. The neuron outputs a value close to 1 only when both inputs are 1.
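If you want to reproduce these numbers, a short loop over the truth table does it. This assumes the `neuron` and `AND_TABLE` sketches defined earlier, plus the weights we chose above:

```python
# Check the neuron (with the weights above) against the truth table.
for (x1, x2), target in AND_TABLE.items():
    output = neuron(x1, x2, theta0=-30, theta1=20, theta2=20)
    print(f"x1={x1}, x2={x2} -> {output:.6f} (target {target})")
```

The printed outputs sit within about $10^{-4}$ of the targets, which is why we read them as 0s and 1s.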
Decision Boundary and Intuition
The weighted sum $z$ creates a linear decision boundary:

$$-30 + 20x_1 + 20x_2 = 0 \quad \Longleftrightarrow \quad x_1 + x_2 = 1.5$$
This equation defines a line in the input space. On one side of the line, the neuron outputs a value close to 0. On the other side—where both $x_1$ and $x_2$ are 1—the output is close to 1.
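To make the geometry concrete, here's a quick check of which side of the line each input falls on, again using the weights chosen above (`z_value` is just an illustrative helper name):

```python
def z_value(x1, x2):
    """z = -30 + 20*x1 + 20*x2; its sign decides which side of the line we're on."""
    return -30 + 20 * x1 + 20 * x2

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    z = z_value(x1, x2)
    side = "output near 1" if z > 0 else "output near 0"
    print(f"({x1}, {x2}): z = {z:>4} -> {side}")
```

Only $(1, 1)$ gives a positive $z$, so it is the only input on the "1" side of the boundary.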