Before we go deeper into neural networks, I thought it would be good to learn how to predict output values for new inputs using a trained neural network. Only a small change to the previous code is needed. This assumes that the current neural network is fully optimized: in this example I'm not going to re-train the network with the new input; I'm just using the updated weights.
import numpy as la

# input data
X1 = la.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]])
# output data
X2 = la.array([[0, 1, 0]]).T
# new input
X3 = la.array([1, 1, 1])

# sigmoid function (returns its derivative when derivative=True)
def sigmoid(x, derivative=False):
    if derivative:
        return x * (1 - x)
    return 1 / (1 + la.exp(-x))

# seed the random number generator so the results are reproducible
la.random.seed(1)

# initialize weights randomly with mean 0
synapse0 = 2 * la.random.random((3, 1)) - 1

for iteration in range(10000):
    # forward propagation
    layer0 = X1
    layer1 = sigmoid(la.dot(layer0, synapse0))
    # error calculation
    layer1_error = X2 - layer1
    # multiply slopes by the error
    # (reduce the error of high confidence predictions)
    layer1_delta = layer1_error * sigmoid(layer1, True)
    # weight update
    synapse0 += la.dot(layer0.T, layer1_delta)

# output for new input data
predictedOutput = sigmoid(la.dot(X3, synapse0))

print("Trained output:")
print(layer1)
print("Output for the new input:")
print(predictedOutput)
The new output will be as follows:
Trained output:
[[ 0.01225605]
[ 0.98980175]
[ 0.0024512 ]]
Output for the new input:
[ 0.9505469]
Our new input row is the last row of the table below.

| Input 1 | Input 2 | Input 3 | Output |
|---------|---------|---------|--------|
| 0 | 0 | 1 | 0 |
| 1 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 |
| 1 | 1 | 1 | 1 |
X3 = la.array([1, 1, 1])
With this line, we assign the new input to X3.
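Note that X3 is a one-dimensional array, unlike the training inputs. As a quick standalone shape check (using placeholder zero weights, not the trained ones), you can see how it combines with the (3, 1) weight matrix:

```python
import numpy as la

X3 = la.array([1, 1, 1])       # shape (3,)
synapse0 = la.zeros((3, 1))    # the trained weights have shape (3, 1)

# (3,) dot (3, 1) -> a result of shape (1,): a single prediction
result = la.dot(X3, synapse0)
print(result.shape)
```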
predictedOutput = sigmoid(la.dot(X3, synapse0))
With this line, we multiply the new input by the updated weights and pass the result through the sigmoid function. The network is already optimized for the given inputs, so it can simply apply the finalized weights to the new input.
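To make that concrete, here is a minimal sketch of the prediction step in isolation. The weight values below are illustrative placeholders, not the exact weights produced by the training loop above:

```python
import numpy as la

def sigmoid(x):
    return 1 / (1 + la.exp(-x))

# Hypothetical trained weights (illustrative values only). A large
# positive first weight means the first input column dominates the
# prediction, which matches the training data pattern.
synapse0 = la.array([[9.5], [-0.2], [-4.6]])

new_input = la.array([1, 1, 1])

# Prediction is just sigmoid(input . weights); no retraining needed.
prediction = sigmoid(la.dot(new_input, synapse0))
print(prediction)
```

With these placeholder weights the weighted sum is 9.5 - 0.2 - 4.6 = 4.7, and sigmoid(4.7) is close to 1, so the predicted output is close to 1.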
By adding the following line after the for loop, you can see the updated weights:
print(synapse0)
If we don't train the network further on more data, the same updated weights will be applied to any new data to predict its output value.
You can check the effect by reducing the number of iterations drastically: the values of the updated weights will be different and the error will be quite high. There is no universally optimal number of iterations; however, you should keep training until there is no significant decrease in the error.
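One way to apply that rule of thumb in code is to cap the iterations and break out of the loop once the mean error stops improving by more than a small threshold. This is only a sketch of the idea, not part of the original post, and the threshold value is an arbitrary assumption:

```python
import numpy as la

# same training data as before
X1 = la.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]])
X2 = la.array([[0, 1, 0]]).T

def sigmoid(x, derivative=False):
    if derivative:
        return x * (1 - x)
    return 1 / (1 + la.exp(-x))

la.random.seed(1)
synapse0 = 2 * la.random.random((3, 1)) - 1

previous_error = la.inf
for iteration in range(100000):          # upper bound on iterations
    layer1 = sigmoid(la.dot(X1, synapse0))
    layer1_error = X2 - layer1
    mean_error = la.mean(la.abs(layer1_error))
    # stop once the error plateaus (improvement below the threshold);
    # the 1e-7 threshold is an arbitrary choice for illustration
    if 0 <= previous_error - mean_error < 1e-7:
        break
    previous_error = mean_error
    synapse0 += la.dot(X1.T, layer1_error * sigmoid(layer1, True))

print("stopped after", iteration, "iterations, mean error:", mean_error)
```

This trades a fixed iteration count for a stopping criterion based on the error itself, which is closer to "iterate until there is no significant decrease in error."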
If you have any issues or questions with this post, please feel free to contact me at any time. 😊
It would be better if you could include the related mathematical equations separately from the code, like the sigmoid function in your previous post. That would make it very easy to understand the underlying concepts. [output calculations, error calculation, delta W, etc.]
Sure, I'll do it.
Thanks