Y = f(g(h(x))) or

x -> hidden layers -> Y

**Example 1:** With a logistic/sigmoidal activation function, a neural network can be visualized as a sum of weighted logits:

Y = α + Σ w_i e^{θ_i}/(1 + e^{θ_i}) + ε

where the w_i are weights and each θ_i is a linear function Xβ of the inputs.

Y = 2 + w_1 Logit A + w_2 Logit B + w_3 Logit C + w_4 Logit D

*(Adapted from 'A Guide to Econometrics', Kennedy, 2003)*
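Example 1 can be sketched numerically in plain Python. All weights and θ values below are illustrative assumptions, not figures from Kennedy:

```python
import math

def logistic(theta):
    """Sigmoid/logistic transform: e^theta / (1 + e^theta)."""
    return math.exp(theta) / (1 + math.exp(theta))

# Hypothetical linear indices theta_i = X*beta for four hidden units
thetas = [0.5, -1.2, 2.0, 0.1]
# Hypothetical output-layer weights w_i
weights = [0.8, -0.3, 1.1, 0.4]

# Y = intercept + sum of weighted logits (error term epsilon omitted)
intercept = 2.0
Y = intercept + sum(w * logistic(t) for w, t in zip(weights, thetas))
print(round(Y, 4))
```

Each logistic term is squashed into (0, 1) before being weighted and summed, which is what makes the model a "sum of weighted logits."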

**Example 2**

Where:

Y = W_0 + W_1 Logit H_1 + W_2 Logit H_2 + W_3 Logit H_3 + W_4 Logit H_4

and

H_1 = logit(w_10 + w_11 x_1 + w_12 x_2)

H_2 = logit(w_20 + w_21 x_1 + w_22 x_2)

H_3 = logit(w_30 + w_31 x_1 + w_32 x_2)

H_4 = logit(w_40 + w_41 x_1 + w_42 x_2)

The links between each layer in the diagram correspond to the weights (w's) in each equation. The weights can be estimated via back propagation.

*(Adapted from 'A Guide to Econometrics', Kennedy, 2003, and 'Applied Analytics Using SAS Enterprise Miner 6.1')*
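The forward pass in Example 2 can be written out directly. The weight values below are made-up placeholders (in practice they would be estimated via back propagation):

```python
import math

def logit_act(z):
    """Logistic activation, as 'logit' is used in the equations above."""
    return 1 / (1 + math.exp(-z))

# Hypothetical hidden-layer weights (w_j0, w_j1, w_j2) for units H1..H4
hidden_w = [
    (0.1, 0.5, -0.3),   # H1
    (-0.2, 0.8, 0.4),   # H2
    (0.3, -0.6, 0.7),   # H3
    (0.0, 0.2, 0.9),    # H4
]
# Hypothetical output-layer weights W0..W4
W = (0.5, 1.0, -0.7, 0.3, 0.6)

def forward(x1, x2):
    # Each hidden unit: H_j = logit(w_j0 + w_j1*x1 + w_j2*x2)
    H = [logit_act(w0 + w1 * x1 + w2 * x2) for w0, w1, w2 in hidden_w]
    # Output layer: Y = W0 + sum_j W_j * H_j (linear combination)
    return W[0] + sum(Wj * Hj for Wj, Hj in zip(W[1:], H))

print(forward(1.0, 2.0))
```

Note the two-step pattern in each layer: a linear combination of inputs and weights, followed by a sigmoidal activation in the hidden layer, and a purely linear combination at the output.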

**MULTILAYER PERCEPTRON:** a neural network architecture with one or more hidden layers, specifically having linear combination functions in the hidden and output layers and sigmoidal activation functions in the hidden layers. (Note: a basic logistic regression function can be visualized as a single-layer perceptron.)
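The note above, that logistic regression can be viewed as a single-layer perceptron, can be made concrete: one linear combination followed by one sigmoid activation. The β coefficients here are arbitrary illustrative values:

```python
import math

# Logistic regression as a single-layer perceptron:
# a linear combination X*beta followed by a sigmoid activation.
beta = [0.5, -1.0, 2.0]   # illustrative intercept and two slopes

def predict_prob(x1, x2):
    z = beta[0] + beta[1] * x1 + beta[2] * x2   # combination function
    return 1 / (1 + math.exp(-z))               # activation function

print(predict_prob(1.0, 0.5))
```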

**RADIAL BASIS FUNCTION (architecture):** a neural network architecture with exponential or softmax (generalized multinomial logistic) activation functions and radial basis combination functions in the hidden layers, and linear combination functions in the output layers.

**RADIAL BASIS FUNCTION:** a combination function that is based on the Euclidean distance between inputs and weights.
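A minimal sketch of one such hidden unit: a radial basis combination (Euclidean distance between the input vector and the unit's weights, which act as a center) passed through an exponential activation. The center and bandwidth values are hypothetical:

```python
import math

def rbf_unit(x, center, bandwidth=1.0):
    """Radial basis combination: squared Euclidean distance between the
    input vector and the unit's weights (center), passed through an
    exponential activation."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * bandwidth ** 2))

# The activation peaks at 1 when the input equals the center
# and decays toward 0 as the input moves away from it.
print(rbf_unit([1.0, 2.0], [1.0, 2.0]))  # -> 1.0
print(rbf_unit([3.0, 4.0], [1.0, 2.0]))
```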

**ACTIVATION FUNCTION:** a formula used for transforming values between the inputs and the outputs in a neural network.

**COMBINATION FUNCTION:** a formula used for combining transformed values from activation functions in neural networks.

**HIDDEN LAYER:** the layer between the input and output layers in a neural network.
