
Questions tagged [perceptron]

The perceptron is a basic linear classifier that outputs binary labels.

2 votes
1 answer
57 views

I'm watching Lecture 4 ("Curse of Dimensionality / Perceptron") from Cornell’s CS4780 (Spring 2017) by Prof. Kilian Weinberger on YouTube. In this lecture, he applies the bias trick to ...
asked by xF6
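The bias trick mentioned in the question absorbs the bias into the weight vector by appending a constant 1 to every input. A minimal numpy sketch (the particular numbers are illustrative, not from the lecture):

```python
import numpy as np

# Bias trick: w.x + b  ==  [w, b].[x, 1]
x = np.array([2.0, -1.0])   # original input (illustrative values)
w = np.array([0.5, 0.3])    # weights
b = 0.7                     # separate bias term

x_aug = np.append(x, 1.0)   # augmented input  [x, 1]
w_aug = np.append(w, b)     # augmented weights [w, b]

# Both forms produce the same activation, so the bias term disappears
# from the algorithm and the perceptron update handles it for free.
assert np.isclose(w @ x + b, w_aug @ x_aug)
```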
1 vote
1 answer
118 views

The neural network is simply: y = max(max(x*w + b, 0)*v + d, 0), where w, b are the weight and bias of the first neuron and v, d are the weight and bias of the second neuron. If the data is, for example: <...
asked by Tbon
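The expression in the question is two ReLU neurons in series. A sketch in numpy, with scalar parameter values chosen purely for illustration:

```python
import numpy as np

def forward(x, w, b, v, d):
    """y = max(max(x*w + b, 0)*v + d, 0): two ReLU neurons chained."""
    h = np.maximum(x * w + b, 0.0)       # first neuron: weight w, bias b
    return np.maximum(h * v + d, 0.0)    # second neuron: weight v, bias d

# Illustrative parameters (not from the question's dataset):
print(forward(2.0, w=1.5, b=-1.0, v=2.0, d=0.5))   # → 4.5
print(forward(-5.0, w=1.5, b=-1.0, v=2.0, d=0.5))  # → 0.5 (first ReLU clips to 0)
```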
0 votes
0 answers
144 views

I need to implement the classical perceptron algorithm from scratch using numpy and pandas for an assignment. I have done so using this algorithm: I have a linearly separable dataset of 568 rows and 30 ...
asked by Prad
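The classical perceptron the question refers to fits a from-scratch numpy loop of only a few lines; a hedged sketch (the tiny dataset here is made up, not the asker's 568-row set):

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classical perceptron: on each mistake, update w += y_i * x_i.
    X: (n, d) features; y: labels in {-1, +1}. A constant-1 column
    (bias trick) stands in for a separate bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:    # misclassified (or on the boundary)
                w += yi * xi          # perceptron update rule
                mistakes += 1
        if mistakes == 0:             # converged on separable data
            break
    return w

# Tiny linearly separable example (hypothetical data):
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
Xb = np.hstack([X, np.ones((4, 1))])
assert all(np.sign(Xb @ w) == y)   # perfect separation on this toy set
```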
0 votes
1 answer
555 views

I have a dataset in which the response variable is Sick (1) or Not Sick (2). As for the variables, there are a few numeric ones (2/14); all the others are level-coded categorical variables (example: 1-Abdominal pain,...
asked by PicaR
0 votes
1 answer
74 views

I perceive a single perceptron as a single linear function $y = a_1x_1 + a_2x_2 + ... + a_nx_n + b_0$ with the goal of calculating the best weight combination $w_1, w_2, ..., w_n$ that minimizes the ...
asked by geo199
1 vote
1 answer
188 views

I wanted to visualize how a perceptron learns, so I made a class that performs gradient descent. To show the decision, I plot a plane showing where the positive and negative examples are, ...
asked by K. Shores
1 vote
1 answer
54 views

Suppose we have the following fully connected network made of perceptrons with a sign function as the activation unit: what issue arises when trying to train this network with gradient descent?
asked by WhatAMesh
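The issue this question is presumably driving at: the sign function is flat everywhere except at its jump, so its derivative is zero wherever it exists and gradient descent has nothing to propagate. A quick finite-difference check:

```python
import numpy as np

def sign(x):
    return np.where(x >= 0, 1.0, -1.0)

# Central finite-difference derivative of sign at a few non-zero points:
h = 1e-6
for x in [-2.0, -0.5, 0.3, 4.0]:
    grad = (sign(x + h) - sign(x - h)) / (2 * h)
    assert grad == 0.0   # flat everywhere away from the jump at 0

# Every gradient is exactly 0, so backprop through sign() updates nothing.
```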
1 vote
0 answers
41 views

Why should I prefer L1 over L2 in a fully connected layer or a convolution? Why use dropout between two layers when there is the option of regularising a layer (or both) with something like L1 or L2? And ...
asked by Naveen Reddy Marthala
2 votes
1 answer
1k views

I have been asked to implement the Pegasos algorithm as below. It is similar to the Perceptron algorithm but includes eta and lambda terms. However, there is no bias term below and I don't know how ...
asked by Matthew Martin
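For reference, Pegasos is stochastic sub-gradient descent on the SVM objective with step size eta_t = 1/(lambda*t). One common way to get a bias despite the formulation lacking one is the bias trick (a constant-1 feature), at the cost of slightly regularising the bias too. A sketch under those assumptions, on made-up toy data:

```python
import numpy as np

def pegasos(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos: stochastic sub-gradient descent for the SVM objective.
    eta_t = 1/(lam*t); a constant-1 column supplies the bias term
    (note this also regularises the bias, a known trade-off)."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias trick
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ Xb[i]) < 1:                     # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                                          # only shrink
                w = (1 - eta * lam) * w
    return w

# Toy linearly separable data (hypothetical):
X = np.array([[2.0, 2.0], [1.5, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = pegasos(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
assert all(preds == y)
```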
1 vote
3 answers
111 views

I find it strange that so many deep learning tricks and improvements have been invented in the past decade, yet I have never heard of anyone trying out models of the artificial neuron other ...
asked by Jan Pisl
1 vote
0 answers
36 views

I have a multilayer perceptron model that is trained to recognize handwritten English letters from an image. In the training set each image matrix had 784 pixel values. The labels of these images ...
asked by teller.py3
1 vote
2 answers
54 views

I'm trying to add a bias neuron to my neural network that uses the backpropagation algorithm. I'm trying to figure out how I should go about this: should I treat the bias neuron as a regular neuron? ...
asked by davegri
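One common convention for the situation in this question is to treat the bias as a neuron whose output is always 1, so its outgoing weights are the biases and the ordinary backprop weight-update rule covers them with input fixed at 1. A small numpy sketch (shapes and values are illustrative):

```python
import numpy as np

# Bias as a constant-output "neuron": append 1 to the layer's activations,
# and let the last column of W play the role of the bias vector b.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
x_aug = np.append(x, 1.0)          # bias neuron: constant output 1
W = rng.normal(size=(2, 4))        # last column acts as b

delta = np.array([0.5, -0.2])      # upstream error signal (illustrative)
grad_W = np.outer(delta, x_aug)    # ordinary backprop gradient dW = delta * input

# The bias gradient falls out of the same rule: input is 1, so it is delta.
assert np.allclose(grad_W[:, -1], delta)
```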
1 vote
0 answers
39 views

So I'm trying to classify some Fashion-MNIST-like photos as either boots or sneakers. I'm using a perceptron from sklearn to do so. The dataset is a CSV containing pixel values. The model is ...
asked by Fiach ONeill
2 votes
1 answer
506 views

I have searched various sources to find out what distinguishes the McCulloch-Pitts neuron from the perceptron invented by Rosenblatt. In most sources only one of these elements is considered, in ...
asked by Pablo
1 vote
1 answer
473 views

Suppose I have a perceptron with one hidden layer, with the input a single real number $x \in \mathbb{R}$, and the activation function of the output layer a threshold function: $$ \theta(x) = \begin{...
asked by spiridon_the_sun_rotator
