Watch a network learn the XOR function
This simulation demonstrates how a neural network learns the XOR (exclusive OR) function through backpropagation. XOR is a classic test problem because it is not linearly separable, so a network needs at least one hidden layer to solve it.
| Input 1 | Input 2 | Expected | Predicted |
|---------|---------|----------|-----------|
| 0       | 0       | 0        | -         |
| 0       | 1       | 1        | -         |
| 1       | 0       | 1        | -         |
| 1       | 1       | 0        | -         |
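To make the training process concrete, here is a minimal backpropagation sketch that learns this truth table. It is an illustration, not the simulation's actual code: the 2-2-1 architecture, sigmoid activations, mean-squared-error loss, learning rate, and epoch count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 2 neurons (assumed) and one output neuron.
W1 = rng.normal(0, 1, (2, 2))  # input -> hidden weights
b1 = np.zeros((1, 2))          # hidden biases
W2 = rng.normal(0, 1, (2, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))          # output bias

lr = 0.5  # learning rate (assumed)

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)     # hidden activations
    pred = sigmoid(h @ W2 + b2)  # network predictions

    # Backward pass: gradients of squared error through the sigmoids.
    err = pred - y
    d_out = err * pred * (1 - pred)       # delta at the output neuron
    d_hid = (d_out @ W2.T) * h * (1 - h)  # deltas at the hidden layer

    # Gradient-descent updates for all weights and biases.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

# Predictions should approach [0, 1, 1, 0]; a net this small can
# occasionally stall, depending on the random initialization.
print(np.round(pred, 3))
```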
A neural network consists of layers of interconnected neurons. The network learns by adjusting its weights and biases to minimize prediction error.

Key Components:

- **Neurons**, arranged in layers and interconnected.
- **Weights**, one per connection between neurons.
- **Biases**, one per neuron.
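To make the role of each weight and bias concrete, here is a sketch of the computation a single neuron performs; the sigmoid activation is an assumption, since the section doesn't name the activation function.

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum: each input is scaled by its connection's weight,
    # then the neuron's own bias is added.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The sum is squashed by an activation function (sigmoid assumed).
    return 1.0 / (1.0 + math.exp(-z))

# Example: a neuron with two inputs and hypothetical weights.
print(neuron_output([0.0, 1.0], weights=[0.8, -0.4], bias=0.1))
```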
Why XOR? The XOR function outputs 1 when its inputs differ and 0 when they are the same. No single straight line can separate the 1 outputs from the 0 outputs, which makes XOR a perfect test for networks with hidden layers.
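One quick way to see this (my addition, not part of the simulation) is to sweep a grid of candidate lines and score each against the four XOR points: no line classifies all four correctly. The grid only illustrates the claim rather than proving it, but the result holds for every line.

```python
from itertools import product

# The four XOR points and their labels.
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

best = 0
# Candidate lines w1*x1 + w2*x2 + b = 0 over a coarse grid.
grid = [i / 4 for i in range(-8, 9)]  # -2.0 .. 2.0 in steps of 0.25
for w1, w2, b in product(grid, repeat=3):
    # Count points where "above the line" matches the label.
    correct = sum(
        (w1 * x1 + w2 * x2 + b > 0) == bool(label)
        for (x1, x2), label in points
    )
    best = max(best, correct)

print(best)  # prints 3: no line in the grid gets all four points right
```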