Timeline for Properly using activation functions of neural network
Current License: CC BY-SA 4.0
9 events
| When | What | Action | By | License | Comment |
|---|---|---|---|---|---|
| Jul 8, 2018 at 10:49 | comment | added | ShellRox | | I understood linear regression in depth from Professor Gilbert Strang's linear algebra course, but neural networks obviously seem to be a different concept. Thanks for the links and the course; they will be very helpful! |
| Jul 8, 2018 at 10:34 | comment | added | Green Falcon | | Now, as the answer to your question: with the normal equation you can learn almost anything, but there are two main problems. First, you don't know which polynomials to use; second, even if you do, with many input features the number of entries in your matrix increases drastically. Consequently, people use neural nets, which work with the usual raw features, so most of the time you don't need to add extra polynomial terms as inputs. I highly recommend taking a look at Professor Andrew Ng's ML class on Coursera, specifically the week that teaches neural nets. |
| Jul 8, 2018 at 10:32 | comment | added | Green Falcon | | @ShellRox I understand, since I've had those difficulties too :) Welcome to our community. I first recommend taking a look here, here, and here. |
| Jul 8, 2018 at 10:27 | vote | accept | ShellRox | | |
| Jul 8, 2018 at 10:27 | comment | added | ShellRox | | No, unfortunately I'm not. I'm trying to understand how a single-neuron separator works before switching to the multilayer perceptron. I thought the inputs were weighted by an optimal linear function (found via the normal equations if the data set is small), then summed up and passed to a squashing function whose output is classified as -1, 0, or 1; is this correct? I'm guessing multi-layer perceptrons are the way to clear up the confusion. Thanks a lot. |
| Jul 8, 2018 at 10:12 | comment | added | Green Falcon | | About the first part of your comment: neural nets can have one neuron or multiple neurons within different layers. Are you familiar with MLPs? |
| Jul 8, 2018 at 10:01 | comment | added | ShellRox | | Thank you for the answer. My apologies for the confusion, but when you mentioned the neural network, which part of it did you mean? Inputs are weighted and summed up, then passed to a squashing function in the hidden layer, which eliminates non-linearity (is this process reiterated to optimize the input weights?). From your TensorFlow Playground link I saw that activation functions are weighted too. |
| Jul 8, 2018 at 9:59 | history | edited | Green Falcon | CC BY-SA 4.0 | added 251 characters in body |
| Jul 8, 2018 at 9:37 | history | answered | Green Falcon | CC BY-SA 4.0 | |
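
The comment from Jul 8, 2018 at 10:27 describes a single neuron as a weighted sum of inputs passed through a squashing function whose output is read as -1, 0, or 1. Below is a minimal sketch of that reading; the choice of `tanh` as the squashing function and the threshold `eps` are illustrative assumptions, not details from the thread.

```python
import numpy as np

def neuron(x, w, b):
    """Weighted sum of inputs plus a bias, squashed by tanh into (-1, 1)."""
    return np.tanh(np.dot(w, x) + b)

def classify(x, w, b, eps=0.5):
    """Threshold the squashed output into the -1 / 0 / +1 labels from the comment."""
    a = neuron(x, w, b)
    if a > eps:
        return 1
    if a < -eps:
        return -1
    return 0

# Toy usage with hand-picked (not learned) weights.
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
print(classify(x, w, b=0.1))  # tanh(0.1) is about 0.1, so this prints 0
```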
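The comment from Jul 8, 2018 at 10:34 contrasts the normal equation with neural nets. As a hedged sketch of that closed-form approach (the function name and toy data are hypothetical), solving theta = (X^T X)^(-1) X^T y directly shows the matrix whose size grows with every added polynomial feature:

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form least squares: theta = (X^T X)^{-1} X^T y."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    # Solve the linear system rather than inverting explicitly (more stable).
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Toy usage: recover y = 2x + 1 from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.01 * rng.normal(size=100)
print(normal_equation(X, y))  # roughly [1.0, 2.0]
```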