Neural Network Training

Watch a network learn the XOR function

This simulation demonstrates how a neural network learns the XOR (exclusive OR) function through backpropagation. XOR is a classic problem because it is not linearly separable, so solving it requires at least one hidden layer.

XOR Truth Table & Predictions

Input 1   Input 2   Expected   Predicted
   0         0         0           -
   0         1         1           -
   1         0         1           -
   1         1         0           -
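
In code, the truth table above is the entire training set. A minimal sketch in Python with NumPy (the array names are illustrative, not taken from the demo):

```python
import numpy as np

# Each row of X is one (Input 1, Input 2) pair from the truth table;
# y holds the expected outputs: 1 exactly when the two inputs differ.
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
```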

Network Architecture

[Interactive network diagram renders here.]

Training Loss Over Time

[Live loss curve renders here.]

Live metrics: Current Loss -, Epoch 0, Accuracy 0%, Status Ready.

Understanding Neural Networks

A neural network consists of layers of interconnected neurons. Each connection has a weight, and each neuron has a bias. The network learns by adjusting these weights and biases to minimize prediction error.
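
To make that concrete, here is a minimal sketch of what a single neuron computes: a weighted sum of its inputs plus a bias, passed through an activation function. The specific values and names are illustrative, not taken from the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # One weight per incoming connection, one bias per neuron.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.0, 1.0])   # two inputs
w = np.array([0.4, -0.7])  # connection weights
b = 0.1                    # neuron bias
print(neuron(x, w, b))     # a value in (0, 1)
```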

Key Components (combined in the sketch after this list):

  • Forward Propagation: Input flows through the network, layer by layer, to produce an output
  • Activation Functions: Non-linear functions (like the sigmoid) that let the network learn patterns no straight line can capture
  • Backpropagation: Algorithm that applies the chain rule to compute the gradient of the loss with respect to every weight and bias
  • Gradient Descent: Optimization method that repeatedly steps each weight against its gradient to minimize the loss
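
The sketch below ties all four components into one training loop, assuming a 2-4-1 architecture (two inputs, four hidden neurons, one output), sigmoid activations, mean-squared-error loss, and plain batch gradient descent; the demo's actual layer sizes and hyperparameters are not stated, so these values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))              # hidden biases
W2 = rng.normal(0.0, 1.0, (4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))              # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(20001):
    # Forward propagation: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 4)
    y_hat = sigmoid(h @ W2 + b2)  # predictions, shape (4, 1)
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: chain rule, output layer first.
    # d(MSE)/d(y_hat) = 2 * (y_hat - y) / n, and sigmoid'(z) = s * (1 - s).
    d_out = 2.0 * (y_hat - y) / len(X) * y_hat * (1.0 - y_hat)  # (4, 1)
    d_hid = (d_out @ W2.T) * h * (1.0 - h)                      # (4, 4)

    # Gradient descent: step each parameter against its gradient.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

    if epoch % 5000 == 0:
        print(f"epoch {epoch:5d}  loss {loss:.4f}")

print("predictions:", y_hat.round(3).ravel())  # should approach [0, 1, 1, 0]
```

With a favorable initialization the loss drops toward zero and the four predictions settle near the expected 0, 1, 1, 0; plain gradient descent on XOR can occasionally stall in a poor local minimum, in which case a different seed or learning rate helps.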

Why XOR? The XOR function outputs 1 when its inputs differ and 0 when they are the same. The four input points cannot be separated by a single straight line, which makes XOR a perfect test for neural networks with hidden layers.
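
One quick way to verify that claim is brute force: try a dense grid of linear classifiers of the form step(w1·x1 + w2·x2 + b) and count how many of the four XOR points each one labels correctly. A sketch (the grid bounds and resolution are illustrative):

```python
import numpy as np
from itertools import product

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

best = 0
grid = np.linspace(-2, 2, 41)  # candidate values for w1, w2, and b
for w1, w2, b in product(grid, grid, grid):
    # The line w1*x1 + w2*x2 + b = 0 splits the plane into two classes.
    pred = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in X]
    best = max(best, sum(p == t for p, t in zip(pred, y)))

print(best)  # 3 -- no line gets all four points right
```

No matter how fine the grid, the count never reaches 4, which is exactly why the hidden layer is needed.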