Friday, December 2, 2016

Let's build a Neural Network with a few lines of code CONTINUED

Before we go deeper into neural networks, I thought it would be useful to learn how to predict output values for new inputs using a trained network. Only a small change to the previous code is needed. This assumes the current network is fully optimized: in this example I am not re-training the network on the new input, I am simply applying the final weights.

  1. import numpy as la
  2. # no extra import needed; the built-in range is used for the loop
  3. # input data
  4. X1 = la.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]])
  5. # output data
  6. X2 = la.array([[0, 1, 0]]).T
  7. # new input
  8. X3 = la.array([1, 1, 1])
  9. # sigmoid function
  10. def sigmoid(x, derivative=False):
  11.     if derivative:
  12.         return x * (1 - x)
  13.     return 1 / (1 + la.exp(-x))
  14. # seed the random number generator for reproducible results
  15. la.random.seed(1)
  16. # initialize weights with mean 0
  17. synapse0 = 2 * la.random.random((3, 1)) - 1
  18. for iterations in range(10000):
  19.     # forward propagation
  20.     layer0 = X1
  21.     layer1 = sigmoid(la.dot(layer0, synapse0))
  22.     # error calculation
  23.     layer1_error = X2 - layer1
  24.     # multiply slopes by the error
  25.     # (reduce the error of high confidence predictions)
  26.     layer1_delta = layer1_error * sigmoid(layer1, True)
  27.     # weight update
  28.     synapse0 += la.dot(layer0.T, layer1_delta)
  29. # output for new input data
  30. predictedOutput = sigmoid(la.dot(X3, synapse0))
  31. print("Trained output:")
  32. print(layer1)
  33. print("Output for the new input:")
  34. print(predictedOutput)

The new output will be as follows:

Trained output:
[[ 0.01225605]
 [ 0.98980175]
 [ 0.0024512 ]]
Output for the new input:
[ 0.9505469]



Our new input row is marked in the table below.



Input        Output
0 0 1        0
1 0 1        1
0 1 1        0
1 1 1        1    <- new input


8. X3 = la.array([1, 1, 1])

With this line, we assign the new input to X3. 



30. predictedOutput = sigmoid(la.dot(X3, synapse0))


With this line, we multiply the new input by the updated weights and pass the result through the sigmoid function. The network is already optimized for the given training data, so it can simply apply the final weights to the new input.
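Inference on a trained network is just one dot product followed by the sigmoid. Here is a minimal sketch of that idea as a standalone function; the weight values below are made up for illustration, not the exact ones the training run above produces:

```python
import numpy as np

def sigmoid(x):
    # logistic activation, squashes values into (0, 1)
    return 1 / (1 + np.exp(-x))

def predict(new_input, weights):
    # forward pass only: multiply by the trained weights, apply sigmoid
    return sigmoid(np.dot(new_input, weights))

# hypothetical trained weights (3 inputs -> 1 output)
weights = np.array([[9.67], [-0.21], [-4.63]])

print(predict(np.array([1, 1, 1]), weights))  # close to 1
print(predict(np.array([0, 0, 1]), weights))  # close to 0
```

Note that `predict` never touches the training data; all the knowledge the network has is stored in the weight matrix.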



By adding the following line after the for loop, you can see the updated weights:

print(synapse0)


If we don't train the network further on more data, the same final weights will be applied to any new data to predict its output value.

You can check this by drastically reducing the number of iterations: the final weights will be different and the error will be quite high. There is no universally optimal number of iterations; as a rule of thumb, keep training until there is no significant decrease in error.
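One way to avoid guessing an iteration count is to build that rule of thumb into the loop itself: stop when the error improvement becomes negligible. A rough sketch, using the same network as above; the threshold 1e-6 is an arbitrary choice for illustration:

```python
import numpy as np

np.random.seed(1)
X1 = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]])
X2 = np.array([[0, 1, 0]]).T

def sigmoid(x, derivative=False):
    if derivative:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

synapse0 = 2 * np.random.random((3, 1)) - 1

prev_error = np.inf
iterations = 0
while True:
    layer1 = sigmoid(np.dot(X1, synapse0))
    layer1_error = X2 - layer1
    mean_error = np.mean(np.abs(layer1_error))
    # stop once the error stops decreasing significantly
    if prev_error - mean_error < 1e-6:
        break
    prev_error = mean_error
    synapse0 += np.dot(X1.T, layer1_error * sigmoid(layer1, True))
    iterations += 1

print(iterations, mean_error)
```

The loop runs for however many iterations the data actually needs, instead of a hard-coded 10000.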

If you have any issues or questions with this post, please feel free to contact me at any time. 😊



2 comments:

  1. It's better if you can include the related mathematical equations separately from the code, like the sigmoid function in your previous post. Then it would be very easy to understand the underlying concepts. [output calculations, error calculation, deltaW...etc]
