The single-layer perceptron (SLP) is the simplest type of neural network used for pattern (i.e. vector) classification, where the patterns are linearly separable – that is, patterns which lie on opposite sides of a hyperplane. In its most basic form, a perceptron consists of a single McCulloch–Pitts (M&P) neuron.

The M&P neuron has inputs (x_{1},...,x_{n}) with a corresponding set of weights (w_{1},...,w_{n}), and an external bias b. The induced local field, u, of the neuron is therefore

u = SUM_{i=1}^{n}(w_{i}x_{i}) + b

The neuron uses the signum function as its threshold function, and thus produces a bipolar output, y, where

y = sgn(u)
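The computation above can be sketched in a few lines of Python. The weights, bias, and input below are hypothetical values chosen for illustration, and since the source does not specify the value of sgn(0), this sketch assumes the common convention that u = 0 maps to +1:

```python
def mp_neuron(x, w, b):
    """Single M&P neuron: induced local field followed by a signum threshold."""
    # Induced local field: u = sum_i w_i * x_i + b
    u = sum(wi * xi for wi, xi in zip(w, x)) + b
    # Signum threshold gives a bipolar output in {-1, +1}
    # (assumption: u = 0 is mapped to +1, a convention the source leaves open)
    return 1 if u >= 0 else -1

# Hypothetical example: u = 0.5*1.0 + 0.25*(-2.0) + 0.1 = 0.1, so y = +1
y = mp_neuron([1.0, -2.0], [0.5, 0.25], 0.1)
```

The bias b simply shifts the hyperplane away from the origin; setting b = 0 forces the decision boundary through the origin.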

There is clearly no point in allowing for more than one output from a single neuron, since there are no other neurons in the network to feed signals to, and all outputs would simply be functions of the same induced local field.

Such a perceptron can classify an n-dimensional input vector (x_{1},x_{2},...,x_{n}) into one of two classes, C_{1} or C_{2}. We do this by specifying a decision rule for classification: if the output is –1 the input vector is assigned to class C_{1}, and if the output is +1 it is assigned to class C_{2}. Hence, this SLP can classify patterns that belong to two linearly separable classes. Adding more neurons allows the classification to extend to more than two such classes.
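The decision rule can be sketched directly on top of the neuron's output. The weights w = (1, 1) and bias b = –1 below are hypothetical and define the separating line x_{1} + x_{2} = 1 in two dimensions; as above, the sketch assumes u = 0 maps to +1:

```python
def classify(x, w, b):
    """Assign x to C1 or C2 using the SLP decision rule from the text."""
    u = sum(wi * xi for wi, xi in zip(w, x)) + b   # induced local field
    y = 1 if u >= 0 else -1                        # signum threshold (sgn(0) = +1 assumed)
    return "C2" if y == 1 else "C1"                # -1 -> C1, +1 -> C2

# Hypothetical boundary x1 + x2 = 1: points below it fall in C1, points on or above it in C2
label_a = classify([0.0, 0.0], [1.0, 1.0], -1.0)  # u = -1, so C1
label_b = classify([1.0, 1.0], [1.0, 1.0], -1.0)  # u = +1, so C2
```

Any input on one side of the hyperplane defined by the weights and bias lands in C_{1}, and any input on the other side lands in C_{2}; this is exactly what linear separability requires.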