
Supervised learning: Mixture Of Experts (MOE) Network


Presentation Transcript


  1. Supervised learning: Mixture Of Experts (MOE) Network

  2. MOE Module (diagram): an input x feeds a gating network and three local experts. The gating network outputs the weights a1(x), a2(x), a3(x), which combine the expert posteriors P(y|x, Θ1), P(y|x, Θ2), P(y|x, Θ3) into the overall output P(y|x, Φ).

  3. The objective is to estimate the model parameters so as to maximize the probability of the training set given the estimated parameters. For a given input x, the posterior probability of generating class y given x using K experts can be computed as P(y | x, Φ) = Σj P(y | x, Θj) aj(x).
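
As a concrete illustration of this formula, here is a minimal NumPy sketch; the helper name moe_posterior and the example numbers are assumptions for illustration, not part of the original slides.

```python
# Minimal sketch: combine K expert class posteriors P(y | x, Theta_j)
# with gating weights a_j(x) to obtain P(y | x, Phi).
import numpy as np

def moe_posterior(expert_posteriors, gates):
    """expert_posteriors: (K, C) array, row j = P(y | x, Theta_j) over C classes.
    gates: (K,) array of a_j(x), non-negative and summing to 1.
    Returns P(y | x, Phi) as a (C,) array."""
    return gates @ expert_posteriors  # sum_j a_j(x) * P(y | x, Theta_j)

# Hypothetical example with K = 3 experts and C = 2 classes:
experts = np.array([[0.9, 0.1],
                    [0.4, 0.6],
                    [0.2, 0.8]])
gates = np.array([0.5, 0.3, 0.2])
print(moe_posterior(experts, gates))  # -> [0.61, 0.39]
```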

  4. Each RBF Gaussian kernel can be viewed as a local expert. (Diagram: RBF kernels combined by a gating net, with a MAXNET stage selecting the final output.)

  5. MOE Classifier (diagram): each expert Ek produces class posteriors P(ωc | x, Ek); these are combined as Σk P(Ek | x) P(ωc | x, Ek), and a MAXNET selects the winning class ωwinner.
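
Reusing the arrays from the sketch above, the MAXNET stage reduces to an argmax over the combined class posteriors; this is an assumed illustration, not the original implementation.

```python
# MAXNET stage: select the class with the largest combined posterior.
class_posterior = moe_posterior(experts, gates)  # sum_k P(E_k|x) * P(omega_c|x, E_k)
winner = int(np.argmax(class_posterior))         # index of omega_winner
```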

  6. Mixture of Experts The MOE [Jacobs91] exhibits an explicit relationship with statistical pattern classification methods as well as a close resemblance to fuzzy inference systems. Given a pattern, each expert network estimates the pattern's conditional a posteriori probability on the (adaptively tuned or pre-assigned) feature space. Each local expert network performs multi-way classification over K classes by using either K independent binomial models, each modeling only one class, or one multinomial model for all classes.
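
The two output parameterizations mentioned above could look like this in a NumPy sketch; the function names are assumptions, and z stands for one expert's raw class scores.

```python
import numpy as np

def binomial_outputs(z):
    """K independent binomial (logistic) models, one per class; outputs need not sum to 1."""
    return 1.0 / (1.0 + np.exp(-z))

def multinomial_outputs(z):
    """One multinomial (softmax) model over all K classes; outputs sum to 1."""
    e = np.exp(z - np.max(z))
    return e / e.sum()
```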

  7. Two Components of MOE • Local experts: each estimates P(y | x, Θj), a local recommendation for the class of x. • Gating network: computes the weights aj(x) used to combine the local recommendations.

  8. Local Experts • The design of modular neural networks hinges upon the choice of local experts. • Usually, a local expert is adaptively trained to extract a certain local feature particularly relevant to its local decision. • Sometimes, a local expert can be assigned a predetermined feature space. • Based on the local feature, a local expert gives its local recommendation.

  9. LBF vs. RBF Local Experts: LBF experts use hyperplane decision boundaries (e.g., an MLP); RBF experts use kernel functions (e.g., an RBF network).
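
A minimal sketch of the two expert families, assuming a linear pre-activation for the LBF case and a Gaussian kernel for the RBF case; parameter names are illustrative.

```python
import numpy as np

def lbf_score(x, w, b):
    """LBF unit: pre-activation w.x + b, i.e. position relative to a hyperplane (as in an MLP)."""
    return x @ w + b

def rbf_score(x, mu, sigma):
    """RBF unit: Gaussian kernel centered at mu with width sigma."""
    return np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))
```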

  10. Mixture of Experts (figure: sample data from Class 1 and Class 2).

  11. Mixture of Experts (figure: regions of the input space handled by Expert #1 and Expert #2).

  12. Gating Network • The gating network computes the proper weights to be used for the final weighted decision. • A probabilistic rule integrates the recommendations from several local experts, taking into account the experts' confidence levels.
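
One common way to realize such a gating network is a linear-softmax map from the input to the expert weights; the sketch below assumes that form, and the parameter matrix V is illustrative.

```python
import numpy as np

def gating_weights(x, V):
    """Softmax gating: returns a_j(x) for K experts, non-negative and summing to 1.
    V is an assumed (K, d) parameter matrix, x an input of dimension d."""
    logits = V @ x                        # one logit per expert
    e = np.exp(logits - np.max(logits))   # numerically stable softmax
    return e / e.sum()
```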

  13. The training of the local experts as well as (the confidence levels in) the gating network of the MOE network is based on the expectation-maximization (EM) algorithm.
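
A minimal sketch of the E-step of that EM procedure, using the notation from slide 3; the function name and array shapes are assumptions, and the M-step is only summarized in a comment.

```python
import numpy as np

def responsibilities(gates, expert_posteriors, y):
    """E-step for one labeled sample (x, y).
    gates: (K,) gating weights a_j(x); expert_posteriors: (K, C) P(y | x, Theta_j);
    y: index of the true class. Returns h_j, the posterior probability that
    expert j generated (x, y)."""
    joint = gates * expert_posteriors[:, y]   # a_j(x) * P(true class | x, Theta_j)
    return joint / joint.sum()

# M-step (summary): each expert's parameters are refit with its training samples
# weighted by h_j, and the gating network is trained to output h for each x.
```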
