I have also described a multilayer perceptron as a weighted average or ensemble of logits. But how are the weights in each hidden layer's logistic activation function (or whichever activation function a given architecture uses) estimated? How are the weights in the combination functions estimated? Neural networks can be estimated using backpropagation, described in Hastie as

*'a generic approach to minimizing R(θ) (the cost function) by gradient descent.'*

Given a neural network with inputs X, hidden layers composed of hidden units Z, and some outcome Y to be predicted, schematically (simplifying the notation in Hastie):

X -> Z -> Y

Z = σ(α₀ + α′x)

T = β₀ + βZ

f(X) = g(T) [1]

where σ = the activation function
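The forward mapping in [1] can be sketched in code. This is a minimal illustration, not an estimated model: the weight values, shapes (3 inputs, 4 hidden units), sigmoid σ, and identity output function g are all assumptions for the example.

```python
import numpy as np

def sigmoid(v):
    # the activation function sigma in [1]
    return 1.0 / (1.0 + np.exp(-v))

def forward(x, alpha0, alpha, beta0, beta):
    z = sigmoid(alpha0 + alpha.T @ x)  # Z = sigma(alpha_0 + alpha'x)
    t = beta0 + beta @ z               # T = beta_0 + beta Z
    return t                           # f(X) = g(T), with g = identity here

# illustrative (not estimated) weights: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
x = rng.normal(size=3)
alpha = rng.normal(size=(3, 4))
alpha0 = np.zeros(4)
beta = rng.normal(size=4)
beta0 = 0.0
print(forward(x, alpha0, alpha, beta0, beta))
```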

Given weights {α₀, α, β₀, β}, find the values that minimize the specified error function:

R(θ) = ∑∑ (y − f(x))² [2]

(note: a number of possible error functions may be used)

Gradient Descent Update:

β^(r+1) = β^r − γ ∂R/∂β [3]

α^(r+1) = α^r − γ ∂R/∂α [4]

Errors can be re-specified as:

∂R/ ∂β = δZ [5]

∂R/∂α = sx [6]

Backpropagation equations:

s = σ′(αᵀx)βδ [7]

Algorithm:

Forward Pass: use initial or current weights (guesses) and calculate f(X)

Backward Pass: calculate the output-layer errors δ, then 'back propagate' them via the backpropagation equation [7] to obtain the hidden-layer errors s. Both sets of errors, δ and s, give the gradients [5] and [6], which are used to update the weight estimates via the update equations [3] and [4].
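The two-pass algorithm can be sketched as a training loop implementing [3]–[7]. This is a sketch under stated assumptions: one hidden layer, sigmoid σ, identity output g, squared error as in [2], and a plain gradient-descent step with a fixed learning rate γ; the function name `fit` and all hyperparameter values are illustrative.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def fit(X, y, n_hidden=4, gamma=0.05, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    alpha0, alpha = np.zeros(n_hidden), rng.normal(scale=0.1, size=(p, n_hidden))
    beta0, beta = 0.0, rng.normal(scale=0.1, size=n_hidden)
    for _ in range(n_iter):
        # Forward pass: compute Z, T, f(X) with current weight guesses
        Z = sigmoid(alpha0 + X @ alpha)          # (n, n_hidden)
        f = beta0 + Z @ beta                     # (n,)
        # Backward pass: output errors delta, then hidden errors s via [7];
        # sigma'(v) = sigma(v)(1 - sigma(v)) = Z(1 - Z) for the sigmoid
        delta = -2.0 * (y - f)                   # (n,)
        s = Z * (1 - Z) * np.outer(delta, beta)  # (n, n_hidden)
        # Gradient-descent updates [3] and [4], gradients from [5] and [6]
        beta -= gamma * Z.T @ delta / n
        beta0 -= gamma * delta.mean()
        alpha -= gamma * X.T @ s / n
        alpha0 -= gamma * s.mean(axis=0)
    return alpha0, alpha, beta0, beta

# Toy check on a noiseless linear target (illustrative data)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -0.5])
alpha0, alpha, beta0, beta = fit(X, y)
pred = beta0 + sigmoid(alpha0 + X @ alpha) @ beta
print(np.mean((y - pred) ** 2))  # training mean squared error
```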

In *Predictive Modeling with SAS Enterprise Miner* by Sarma, the following basic description of backpropagation is given:

Specify an error function E.

1) 1st iteration: set initial weights and use them to evaluate E

2) 2nd iteration: change the weights by a small amount such that the error is reduced

- repeat until convergence

As Sarma explains, each iteration produces a new set of weights, so if it takes 100 iterations to converge, 100 possible models are specified, giving 100 sets of weights. The best iteration can then be chosen by calculating E on validation data.
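That selection scheme can be sketched directly: save the weights and validation E at each iteration, then pick the iteration with the lowest validation error. To keep the sketch short, the "model" here is a deliberately simple linear one updated by gradient descent; the data, learning rate, and iteration count are all illustrative assumptions, not Sarma's example.

```python
import numpy as np

# Illustrative train/validation data from an assumed linear target
rng = np.random.default_rng(0)
X_train, X_valid = rng.normal(size=(100, 3)), rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y_train = X_train @ true_w + rng.normal(scale=0.5, size=100)
y_valid = X_valid @ true_w + rng.normal(scale=0.5, size=50)

w = np.zeros(3)
history = []  # one (iteration, weights, validation E) entry per iteration
for it in range(100):
    # change the weights by a small amount so training error is reduced
    grad = -2 * X_train.T @ (y_train - X_train @ w) / len(y_train)
    w = w - 0.05 * grad
    # evaluate E on the validation data for this candidate model
    E_valid = np.mean((y_valid - X_valid @ w) ** 2)
    history.append((it, w.copy(), E_valid))

# 100 iterations -> 100 candidate weight sets; keep the best by validation E
best_it, best_w, best_E = min(history, key=lambda h: h[2])
print(best_it, best_E)
```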
