Origins of Neural Networks
- A simple perceptron on its own is of limited use because it has low expressive power.
- The data must be linearly separable for it to work.
- It solves classification problems.
- Method
- The sign of the dot product between the weight vector (perpendicular to the separating line) and the input vector x tells whether the two vectors point toward the same side.
- If they do not point toward the same side (i.e. the point is misclassified), update the weight vector by adding the x vector to it (with the sign of the label).
- Repeat this for all the data (a minimal sketch is given below).
#getting Started with Machine Learning in Python
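A minimal sketch of this update rule, assuming labels in {-1, +1} and the bias folded into the weight vector as an extra always-1 input (the representation described below); the function and variable names are my own, not from the source.

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    """Perceptron learning rule: on a mistake, w <- w + y_i * x_i.

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    """
    # Append a constant 1 to every input so the last weight acts as the bias.
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # The sign of the dot product says which side of the line x_i falls on.
            if y_i * np.dot(w, x_i) <= 0:   # not on the same side as its label
                w += y_i * x_i              # pull w toward (or push it away from) x_i
    return w
```

For linearly separable data this converges to a weight vector that separates the two classes.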
- There are two ways to represent a perceptron (compared in the sketch below).
- One is like the image above, where the bias is one of the inputs (always with value 1) and its weight serves as the bias.
- The other is a type that holds the bias as an internal value of the perceptron.
- The former is more commonly used.
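A small illustration of the two representations, assuming a step activation; the names are illustrative, and both give the same output when `b` equals the last entry of the extended weight vector.

```python
import numpy as np

def perceptron_bias_as_input(x, w):
    """Bias as an extra input: x is extended with a constant 1,
    so the last entry of w plays the role of the bias."""
    x_ext = np.append(x, 1.0)
    return 1 if np.dot(w, x_ext) > 0 else 0

def perceptron_internal_bias(x, w, b):
    """Bias held inside the perceptron as a separate scalar b."""
    return 1 if np.dot(w, x) + b > 0 else 0
```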
- Even the AND operator can be represented by a perceptron, as shown below:
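For example (the weights here are a common hand-picked choice, not taken from the source), weights (1, 1) with bias -1.5 fire only when both inputs are 1:

```python
import numpy as np

def step(z):
    return 1 if z > 0 else 0

def AND(x1, x2):
    # 1 + 1 - 1.5 > 0 only for (1, 1); every other case stays at or below 0.
    w, b = np.array([1.0, 1.0]), -1.5
    return step(np.dot(w, [x1, x2]) + b)

for p, q in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, q, AND(p, q))   # -> 0, 0, 0, 1
```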
- Combining perceptrons like the one above makes it possible to build XOR, i.e. a simple neural network (see the sketch below). #Udacity_Intro_to_Deep_Learning_with_PyTorch
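One common construction (an assumption on my part; the source does not spell out the wiring) computes XOR as AND(NAND(x1, x2), OR(x1, x2)), reusing `numpy`, `step`, and `AND` from the sketch above:

```python
def OR(x1, x2):
    return step(np.dot([1.0, 1.0], [x1, x2]) - 0.5)

def NAND(x1, x2):
    return step(np.dot([-1.0, -1.0], [x1, x2]) + 1.5)

def XOR(x1, x2):
    # Two layers of perceptrons: this is already a small neural network.
    return AND(NAND(x1, x2), OR(x1, x2))

for p, q in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, q, XOR(p, q))   # -> 0, 1, 1, 0
```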
- The output must be converted to a continuous value using a function such as the sigmoid.
- This is because a discrete output does not change at all (or jumps abruptly) when the weights are moved slightly, so it provides no useful gradient (see the sketch below).
- The same applies to the loss function.
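A minimal sketch of swapping the step for a sigmoid so the output responds smoothly to a small change in the weights (the values are chosen for illustration only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, x = np.array([1.0, 1.0]), -1.5, np.array([1.0, 1.0])
# A step function would output a hard 1 here; the sigmoid gives a smooth value
# that shifts slightly when the weights shift, so a gradient can be computed.
print(sigmoid(np.dot(w, x) + b))          # ~0.62
print(sigmoid(np.dot(w + 0.01, x) + b))   # slightly larger, not a sudden jump
```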
- With the sigmoid function, when |x| is sufficiently large, the gradient (used to propagate the error) becomes almost zero, as the sketch below illustrates.
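A quick check, using the fact that the sigmoid's derivative is σ(x)·(1 − σ(x)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

for z in [0.0, 2.0, 5.0, 10.0, -10.0]:
    print(z, sigmoid_grad(z))
# 0.25 at z=0, ~0.105 at 2, ~0.0066 at 5, ~4.5e-5 at |z|=10: the gradient vanishes.
```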