http://neuralnetworksanddeeplearning.com/chap1.html
Perceptrons can compute anything since they can act as NAND gates, which are universal for computation. A NN is a network of perceptrons which can adjust its weights and biases through learning, hence more flexible than a conventionally laid-out circuit. Their inputs and outputs are 0/1. Output = 1 if w.x + b > 0, else 0, where w.x is the dot product of weights and inputs and b is the bias. Bias is -threshold.
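A rough sketch of this rule in Python (the NAND weights/bias are the book's example; the function name is my own):

def perceptron(x, w, b):
    # output 1 if w.x + b > 0, else 0
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# NAND gate from the book: weights (-2, -2), bias 3
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, w=(-2, -2), b=3))
# prints 1, 1, 1, 0 -- the NAND truth table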
But inputs to/outputs from Sigmoid neurons can be any value between 0 and 1, e.g. 0.683. Output = activation function σ(z) = 1/(1 + e^(-z)) where z = w.x + b. If we plot it, it's a smoothed version of the Perceptron's step function. That smoothness gives it the property that small changes in the weights and bias result in small changes in the output, unlike a Perceptron. This property is what makes tuning a NN practical; with Perceptrons, a small change in a weight can flip an output from 0 to 1 and cause drastic changes down the line.
Still, Sigmoids and Perceptrons are similar in the sense that for large positive z the output is close to 1, and for large negative z the output is close to 0.
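A quick sketch of both points (the weights, bias, and inputs here are arbitrary made-up numbers, not from the book):

import math

def sigmoid(z):
    # smoothed step: ~0 for very negative z, ~1 for large positive z
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(x, w, b):
    # z = w.x + b, output = sigma(z), a real number between 0 and 1
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

print(sigmoid(-10))  # ~0.000045 (perceptron-like 0)
print(sigmoid(0))    # 0.5
print(sigmoid(10))   # ~0.99995  (perceptron-like 1)

# a small change in a weight gives a small change in the output
x, w, b = (1.0, 1.0), (0.6, -0.4), 0.1
print(sigmoid_neuron(x, w, b))             # ~0.574
print(sigmoid_neuron(x, (0.61, -0.4), b))  # ~0.577, nearby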
Essentially, Δoutput is a linear function of the changes Δwj and Δb in
the weights and bias.
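Spelled out (the book's first-order approximation):
Δoutput ≈ Σj (∂output/∂wj) Δwj + (∂output/∂b) Δb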
We can use other activation functions too, but σ(z) ≡ 1/(1+e^(-z)) is popular since the exponential gives it nice differentiation properties; for example, σ'(z) = σ(z)(1 − σ(z)).
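Checking that identity numerically (a small sketch, not from the book; the point z = 0.7 is arbitrary):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# sigma'(z) = sigma(z) * (1 - sigma(z)), compared against a
# central-difference numerical derivative at z = 0.7
z, h = 0.7, 1e-6
numerical = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
analytic = sigmoid(z) * (1 - sigmoid(z))
print(numerical, analytic)  # both ~0.2217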