Given x0, x1, w0, w1, w2, y:

g = 1 / (1 + math.exp(-((x0 * w0) + (x1 * w1) + w2)))

Loss function: f = y - g

Use backpropagation (the BP algorithm) to adjust w0, w1, w2 until the loss f < 0.1.

x0 = -1
x1 = -2
w0 = 2
w1 = -3
w2 = -3
y = 1.73
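The exercise above can be sketched as a small gradient-descent loop. This is only a sketch under assumptions not stated in the problem: a learning rate of 1.0 and a fixed iteration cap, both picked by hand. One caveat worth noting: since g = sigmoid(·) is always below 1, the loss f = y - g can never fall below y - 1, so with y = 1.73 the loop can only drive f down toward about 0.73, and the cap is what stops it.

```python
import math

# Given data (from the problem statement above)
x = [-1.0, -2.0]
w = [2.0, -3.0, -3.0]   # w0, w1, w2 (w2 acts as a bias)
y = 1.73

def forward(w, x):
    dot = w[0]*x[0] + w[1]*x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-dot))  # g = sigmoid(dot)

lr = 1.0  # assumed learning rate (not specified in the problem)
for _ in range(100):  # assumed iteration cap
    g = forward(w, x)
    f = y - g  # loss as defined above
    if f < 0.1:
        break
    # Backprop: df/ddot = -dg/ddot = -g*(1-g), then chain rule into each w_i.
    ddot = -g * (1.0 - g)
    dw = [x[0]*ddot, x[1]*ddot, 1.0*ddot]
    # Gradient-descent step on f
    w = [wi - lr*dwi for wi, dwi in zip(w, dw)]

print("final loss f =", y - forward(w, x))
```

The loss drops quickly at first (from about 1.0 toward its floor) and then flattens as the sigmoid saturates.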

 


https://cs231n.github.io/optimization-2/

Example from that article:

For example, the sigmoid expression receives the input 1.0 and computes the output 0.73 during the forward pass. The derivation above shows that the local gradient would simply be (1 - 0.73) * 0.73 ≈ 0.2, as the circuit computed before (see the image above), except this way it would be done with a single, simple and efficient expression (and with fewer numerical issues).
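The identity behind that local gradient, σ'(x) = (1 - σ(x)) σ(x), is easy to check numerically. The sketch below (my own verification, not part of the quoted notes) compares the analytic form against a centered finite difference at x = 1.0:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = 1.0
s = sigmoid(x)          # forward pass output, about 0.73
analytic = (1 - s) * s  # local gradient from the derivation, about 0.2
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # centered finite difference
print(analytic, numeric)
```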

Therefore, in any real practical application it would be very useful to group these operations into a single gate. Let's see the backprop for this neuron in code:

import math

w = [2, -3, -3] # assume some random weights and data
x = [-1, -2]

# forward pass
dot = w[0]*x[0] + w[1]*x[1] + w[2]
f = 1.0 / (1 + math.exp(-dot)) # sigmoid function

# backward pass through the neuron (backpropagation)
ddot = (1 - f) * f # gradient on dot variable, using the sigmoid gradient derivation
dx = [w[0] * ddot, w[1] * ddot] # backprop into x
dw = [x[0] * ddot, x[1] * ddot, 1.0 * ddot] # backprop into w
# we're done! we have the gradients on the inputs to the circuit
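A habit worth keeping when writing backprop by hand: check the analytic gradients against numerical ones. The sketch below repeats the neuron's forward pass in a helper (`f_of`, a name I chose for this check) and compares each entry of dx and dw with a centered finite difference:

```python
import math

def f_of(w, x):
    dot = w[0]*x[0] + w[1]*x[1] + w[2]
    return 1.0 / (1 + math.exp(-dot))

w = [2, -3, -3]
x = [-1, -2]

# analytic gradients, exactly as in the snippet above
f = f_of(w, x)
ddot = (1 - f) * f
dx = [w[0]*ddot, w[1]*ddot]
dw = [x[0]*ddot, x[1]*ddot, 1.0*ddot]

h = 1e-5
for i in range(2):  # check dx numerically
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    num = (f_of(w, xp) - f_of(w, xm)) / (2*h)
    assert abs(num - dx[i]) < 1e-6
for i in range(3):  # check dw numerically
    wp = list(w); wp[i] += h
    wm = list(w); wm[i] -= h
    num = (f_of(wp, x) - f_of(wm, x)) / (2*h)
    assert abs(num - dw[i]) < 1e-6
print("gradient check passed")
```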

 
