An early example of a neural network with no hidden layers and a single (possibly nonlinear) output unit.
Questions tagged [perceptron]
151 questions
39 votes · 6 answers
What's the difference between logistic regression and perceptron?
I'm going through Andrew Ng's lecture notes on Machine Learning.
The notes introduce us to logistic regression and then to perceptron. While describing Perceptron, the notes say that we just change the definition of the threshold function used for…

GrowinMan
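As a minimal sketch of the distinction this question asks about (not the accepted answer itself): in Ng's notes, the perceptron and logistic regression share the same update-rule template, and only the output function g differs. All names and the learning rate below are illustrative assumptions.

```python
import numpy as np

def perceptron_step(w, x, y, lr=0.1):
    """One update with a hard threshold output: g(z) = 1 if z >= 0 else 0."""
    y_hat = 1.0 if w @ x >= 0 else 0.0
    return w + lr * (y - y_hat) * x

def logistic_step(w, x, y, lr=0.1):
    """Same update template, but g is the sigmoid, so y_hat is a probability."""
    y_hat = 1.0 / (1.0 + np.exp(-(w @ x)))
    return w + lr * (y - y_hat) * x
```

Both steps have the form w += lr * (y - y_hat) * x; swapping the threshold for the sigmoid is exactly the "change of definition" the excerpt mentions.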
26 votes · 3 answers
Multi-layer perceptron vs deep neural network
This is a question of terminology. Sometimes I see people refer to deep neural networks as "multi-layered perceptrons", why is this? A perceptron, I was taught, is a single layer classifier (or regressor) with a binary threshold output using a…

enumaris
21 votes · 3 answers
From the Perceptron rule to Gradient Descent: How are Perceptrons with a sigmoid activation function different from Logistic Regression?
Essentially, my question is: in multilayer perceptrons, perceptrons are used with a sigmoid activation function, so that in the update rule $\hat{y}$ is calculated as
$$\hat{y} = \frac{1}{1+\exp(-\mathbf{w}^T\mathbf{x}_i)}$$
How does this…
user39663
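The excerpt breaks off mid-derivation. As a sketch of the standard connection (not necessarily the answer given on the page): gradient descent on the cross-entropy loss with this sigmoid output yields an update with the same form as the perceptron rule.

```latex
% With \hat{y} = \sigma(\mathbf{w}^T\mathbf{x}_i) and cross-entropy loss
L(\mathbf{w}) = -\left[\, y_i \log \hat{y} + (1 - y_i)\log(1 - \hat{y}) \,\right]
% the chain rule (using \sigma' = \sigma(1 - \sigma)) gives
\frac{\partial L}{\partial \mathbf{w}} = (\hat{y} - y_i)\,\mathbf{x}_i
% so the gradient-descent step matches the perceptron rule in form,
% with the hard-thresholded output replaced by the sigmoid probability:
\mathbf{w} \leftarrow \mathbf{w} + \eta\,(y_i - \hat{y})\,\mathbf{x}_i
```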
17 votes · 1 answer
Why isn't a pure exponential used as an activation function for neural networks?
The ReLU function is commonly used as an activation function in machine learning, as are its modifications (ELU, leaky ReLU).
The overall idea of these functions is the same: before x = 0 the value of the function is small (its limit to…

MefAldemisov
15 votes · 1 answer
Clarification about Perceptron Rule vs. Gradient Descent vs. Stochastic Gradient Descent implementation
I experimented a little bit with different Perceptron implementations and want to make sure I understand the "iterations" correctly.
Rosenblatt's original perceptron rule
As far as I understand, in Rosenblatt's classic perceptron algorithm, the…
user39663
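A minimal sketch of the "iteration" distinction the question raises, under illustrative assumptions (hard threshold for the classic rule, a linear ADALINE-style output for the batch variant): Rosenblatt's rule updates after every sample, whereas batch gradient descent makes one summed update per pass over the data.

```python
import numpy as np

def sgd_epoch(w, X, y, lr):
    """Stochastic / classic perceptron style: update after *each* sample."""
    for xi, yi in zip(X, y):
        y_hat = 1.0 if xi @ w >= 0 else 0.0  # hard threshold output
        w = w + lr * (yi - y_hat) * xi
    return w

def batch_gd_epoch(w, X, y, lr):
    """Batch gradient descent: one update per epoch, summed over all samples
    (linear output here, i.e. the least-squares/ADALINE variant)."""
    errors = y - X @ w
    return w + lr * X.T @ errors
```

So one "iteration" of the classic rule touches one point, while one iteration of batch gradient descent is a full epoch.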
14 votes · 4 answers
What is the difference between a neural network and a perceptron?
Is there any difference between the terms "neural network" and "perceptron"?

RockTheStar
12 votes · 2 answers
Decision boundary plot for a perceptron
I am trying to plot the decision boundary of a perceptron algorithm and I am really confused about a few things. My input instances are in the form $[(x_{1},x_{2}), y]$, basically a 2D input instance ($x_{1}$ and $x_{2}$) and a binary class target…

user2502020
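A small sketch of the computation behind such a plot, with hypothetical learned parameters (not taken from the question): the decision boundary of a 2D perceptron is the line where the pre-activation is zero.

```python
import numpy as np

# Hypothetical learned parameters (illustrative only):
# the boundary is the line w[0]*x1 + w[1]*x2 + b = 0.
w = np.array([2.0, -1.0])
b = 0.5

def boundary_x2(x1):
    """Solve w[0]*x1 + w[1]*x2 + b = 0 for x2 (assumes w[1] != 0)."""
    return -(w[0] * x1 + b) / w[1]
```

To draw the boundary, evaluate `boundary_x2` over a grid of $x_1$ values and plot the resulting line on top of the scattered $[(x_{1}, x_{2}), y]$ training points.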
11 votes · 1 answer
How to kernelize a simple perceptron?
Classification problems with nonlinear boundaries cannot be solved by a simple perceptron. The following R code is for illustrative purposes and is based on this example in Python:
nonlin <- function(x, deriv = F) {
  if (deriv) x*(1-x)
  else…

vonjd
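A minimal sketch of the standard kernelization (in Python rather than the question's R, and not the linked example itself): the dual perceptron keeps a mistake count $\alpha_i$ per training point, so the weight vector $w = \sum_i \alpha_i y_i \phi(x_i)$ is never formed explicitly and only kernel evaluations are needed. Function names and the RBF kernel choice are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_perceptron_train(X, y, kernel=rbf_kernel, epochs=10):
    """Dual perceptron: increment alpha_i whenever point i is misclassified."""
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])  # Gram matrix
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1
    return alpha

def kernel_perceptron_predict(X_train, y_train, alpha, x, kernel=rbf_kernel):
    """Predict via sign of sum_i alpha_i * y_i * k(x_i, x)."""
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y_train, X_train))
    return 1 if s > 0 else -1
```

With an RBF kernel this sketch separates XOR-style data that no simple (linear) perceptron can handle.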
10 votes · 4 answers
Difference between MLP (Multi-layer Perceptron) and Neural Networks?
I am wondering about the differences. Based on my understanding, MLP is one kind of neural network, where the activation function is sigmoid and the error term is the cross-entropy (logistic) error. Looking for help, thanks!

DQ_happy
9 votes · 4 answers
Can a perceptron with sigmoid activation function perform nonlinear classification?
Consider the perceptron as illustrated in the figure above.
I know:
If the activation function is linear, i.e. the first three cases, then the perceptron is equivalent to a linear classifier.
However, I just wonder:
Is the perceptron equivalent…

xmllmx
8 votes · 1 answer
Support Vector Machine with Perceptron Loss
A typical support vector classifier uses the following optimization procedure:
$$\min \dfrac{1}{2}||w||^2 + C\sum_{i=1}^N \zeta_i$$
$$y_i(w^Tx_i+b) \geq 1 - \zeta_i$$
$$\zeta_i \geq 0$$
This hinge loss setup slightly penalizes the correctly classified…

Cagdas Ozgenc
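A small sketch of the loss contrast behind this question, in terms of the margin $m = y_i(w^T x_i + b)$ (function names are illustrative): the hinge loss still penalizes correctly classified points with margin below 1, while the perceptron loss is zero for anything correctly classified.

```python
def hinge_loss(margin):
    """SVM hinge: penalizes margins below 1, even when the sign is correct."""
    return max(0.0, 1.0 - margin)

def perceptron_loss(margin):
    """Perceptron: zero loss for any correctly classified point (margin > 0)."""
    return max(0.0, -margin)
```

For example, a point with margin 0.5 incurs hinge loss 0.5 but perceptron loss 0, which is the "slight penalty on correctly classified points" the excerpt refers to.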
7 votes · 2 answers
What is the difference between MLP and RBF?
What are the main differences between the two types of feedforward networks, multilayer perceptrons (MLP) and radial basis function (RBF) networks?
What are the fundamental differences between these two types?

kenorb
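A minimal sketch of the core contrast these MLP-vs-RBF questions circle around (illustrative unit functions, not from the page): an MLP hidden unit applies its nonlinearity to a projection of the input, while an RBF hidden unit applies it to a distance from a learned center.

```python
import numpy as np

def mlp_unit(x, w, b):
    """MLP hidden unit: nonlinearity applied to a *projection* w.x + b."""
    return np.tanh(w @ x + b)

def rbf_unit(x, c, gamma):
    """RBF hidden unit: nonlinearity applied to the *distance* from a center c."""
    return np.exp(-gamma * np.sum((x - c) ** 2))
```

The MLP unit's response is constant along hyperplanes parallel to the decision surface; the RBF unit's response is radially symmetric and peaks (at 1) exactly at its center, which is what makes RBF networks local approximators.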
7 votes · 1 answer
When to use RBF networks instead of multilayer perceptron?
I understand that a radial basis function neural network (RBF) usually has 1 hidden layer, and it differs from a multi-layer perceptron (MLP) via its activation and combination functions among other things, but how do I decide when a data…

confused00
5 votes · 2 answers
Intuition behind perceptron algorithm with offset
I was looking for an intuition for the perceptron algorithm with offset, specifically why the update rule is as follows:
cycle through all points until convergence:
$\textbf{if }\, y^{(t)} \neq \operatorname{sign}(\theta^{T}x^{(t)} + \theta_0) \,\textbf{ then}$
$\quad…$

Charlie Parker
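The truncated rule above is, in the usual formulation, the following sketch (function name illustrative): on a mistake, $\theta$ moves by $y^{(t)}x^{(t)}$ and the offset $\theta_0$ by $y^{(t)}$, which is exactly the $\theta$ update applied to a constant feature of 1.

```python
import numpy as np

def perceptron_offset_step(theta, theta0, x, y):
    """If y * (theta @ x + theta0) <= 0 (a mistake), update:
    theta += y * x and theta0 += y. The offset update is the theta
    update with x replaced by the constant feature 1."""
    if y * (theta @ x + theta0) <= 0:
        theta = theta + y * x
        theta0 = theta0 + y
    return theta, theta0
```

This is why the offset variant is equivalent to running the plain (through-the-origin) perceptron on inputs augmented with a trailing 1.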
5 votes · 1 answer
Why aren't neural networks used with RBF activation functions (or other non-monotonic ones)?
In most work I've seen, MLPs (multilayer perceptron, the most typical feedforward neural network) and RBF (radial basis function) networks are compared as distinct models, where
MLP neuron outputs $\sigma(\mathbf{w}^\top \mathbf{x})$.
$\sigma$…

Christabella Irwanto