
Hub Queue Size Analyzer

Hub Queue Size Analyzer: implementing neural networks in practice. The task is to give DataArt employees a way to check the queue size in the DataArt Hub via PM. The chosen approach is a service that takes an image from a camera in the DataArt Hub and performs image recognition.


Presentation Transcript


  1. Hub Queue Size Analyzer: Implementing Neural Networks in Practice

  2. Task • Give DataArt employees a way to check the queue size in the DataArt Hub via PM

  3. Choosing implementation method • Create a service that takes an image from a camera in the DataArt Hub and performs image recognition • An additional module decides on the queue size based on the recognition results • A neural network was recommended for the image recognition for the following reasons: • We don’t need an exact solution • A recognition error is not fatal • There is only a small number (4) of possible queue states • Alternative solutions (image analysis with wavelets, per-pixel analysis, etc.) would take too long and cost too much to implement

  4. Introducing Neural Networks • A computational model inspired by the structure and functional aspects of biological neural networks.

  5. A few words about Neural Network structure

  6. Neural Network Advantages • We program only the structure of the system, not its behavior • The structure is universal • The structure is flexible • We feed an image to the input and get the recognition result at the output • Simple and fast recognition • We don’t have to devise the image-analysis algorithm ourselves • No PhD needed • We can reuse an existing neural network in similar AND different tasks with minor changes • Reusability • Computation parallelizes naturally • Fast & furious

  7. How does it work • Every 5 seconds an image is downloaded from the web camera
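
A minimal sketch of this polling step in Python, assuming the camera exposes a JPEG snapshot over HTTP; the URL is hypothetical, only the 5-second period comes from the slide:

    import time
    import urllib.request

    CAMERA_URL = "http://hub-camera.example/snapshot.jpg"  # hypothetical camera endpoint

    def poll_camera(period_seconds=5):
        """Yield a fresh snapshot from the camera every period_seconds."""
        while True:
            with urllib.request.urlopen(CAMERA_URL, timeout=5) as response:
                yield response.read()          # raw JPEG bytes of the current frame
            time.sleep(period_seconds)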

  8. How does it work • The image is converted to grayscale
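
A sketch of the grayscale conversion; the deck does not name an imaging library, so Pillow is assumed here:

    from io import BytesIO
    from PIL import Image

    def to_grayscale(raw_jpeg: bytes) -> Image.Image:
        """Decode the downloaded frame and convert it to 8-bit grayscale."""
        image = Image.open(BytesIO(raw_jpeg))
        return image.convert("L")   # "L" = single 8-bit luminance channel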

  9. How does it work • The image is compressed, auto-leveled and transformed into a double array
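
A sketch of this preprocessing step, assuming Pillow and NumPy; the target resolution and the use of autocontrast for auto leveling are assumptions:

    import numpy as np
    from PIL import Image, ImageOps

    INPUT_SIZE = (64, 48)   # assumed network input resolution

    def preprocess(gray_image: Image.Image) -> np.ndarray:
        """Downscale, auto-level and flatten the frame into a double array in [0, 1]."""
        small = gray_image.resize(INPUT_SIZE)           # "compression" to the input size
        leveled = ImageOps.autocontrast(small)          # auto leveling / brightness adjustment
        pixels = np.asarray(leveled, dtype=np.float64)  # double array
        return pixels.ravel() / 255.0                   # one value per input node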

  10. How does it work • The processed image is fed to the input layer of the network. If the weights are correct, we get the desired result (the queue size) at the output layer.
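
A minimal forward pass matching this description; the single hidden layer, the sigmoid activation and the layer sizes are assumptions (the four output nodes correspond to the four possible queue states mentioned earlier):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(inputs, w_hidden, w_output):
        """Propagate an input vector through the hidden and output layers."""
        hidden = sigmoid(inputs @ w_hidden)   # hidden-layer activations
        output = sigmoid(hidden @ w_output)   # one value per queue-size class
        return hidden, output

    # Illustrative shapes: 64*48 input nodes, 30 hidden nodes, 4 queue states.
    rng = np.random.default_rng(0)
    w_hidden = rng.uniform(-0.1, 0.1, size=(64 * 48, 30))
    w_output = rng.uniform(-0.1, 0.1, size=(30, 4))
    _, result = forward(rng.random(64 * 48), w_hidden, w_output)
    queue_size = int(np.argmax(result))       # the recognized queue state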

  11. How does it work • A threshold value and integration of results are applied, and a result picture is created.
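
One possible reading of the threshold-and-integration step; the confidence threshold and the 5-frame window are illustrative values, not figures from the deck:

    from collections import deque
    import numpy as np

    CONFIDENCE_THRESHOLD = 0.6          # assumed minimum output activation to accept a result
    recent_states = deque(maxlen=5)     # assumed integration window: the last 5 recognitions

    def integrate(output: np.ndarray):
        """Accept a recognition only above the threshold, then smooth over recent frames."""
        if output.max() < CONFIDENCE_THRESHOLD:
            return None                               # result too uncertain, skip this frame
        recent_states.append(int(np.argmax(output)))
        # Integrated result: the most frequent state over the window.
        return max(set(recent_states), key=recent_states.count)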

  12. How does it work • The results are uploaded to the server.
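
The upload could be as simple as posting the integrated result and the result picture to the server; the endpoint and payload below are purely illustrative:

    import requests

    def upload_result(queue_size: int, picture_path: str):
        """Publish the recognized queue size and the result picture (illustrative endpoint)."""
        with open(picture_path, "rb") as picture:
            requests.post(
                "http://hub-queue.example/api/result",   # hypothetical service URL
                data={"queue_size": queue_size},
                files={"picture": picture},
                timeout=10,
            )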

  13. Implementation details • All neural network logic was placed in a class library for reusability; this library can be used in other projects • An administrative tool was developed for monitoring the network’s condition and performing extra training when needed

  14. Administrative tool • Creating and training a network • Retraining existing networks • Enlarging the training set • Monitoring that all processes work correctly • Monitoring the current network error and the reliability of the recognition result • Saving and loading weights from a file
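
For the last item, saving and loading weights, a minimal sketch; the actual tool’s file format is not described in the deck, so NumPy’s .npz container is used only for illustration:

    import numpy as np

    def save_weights(path, w_hidden, w_output):
        """Persist both weight matrices so a trained network can be restored later."""
        np.savez(path, w_hidden=w_hidden, w_output=w_output)

    def load_weights(path):
        """Restore previously saved weight matrices."""
        data = np.load(path)
        return data["w_hidden"], data["w_output"]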

  15. Neural Network drawbacks • Training-set creation requires a very careful approach • Each pattern must be representative • The training set must cover all typical situations • Real-world operation requires a large diversity of training data • We should always check what our network has actually learned • The number of input, hidden and output nodes is limited • Training involves heavy computation

  16. Part 2: Explaining the principles of a neural network

  17. How does NN work?

  18. A couple of words about the sigmoid function
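
The sigmoid is σ(x) = 1 / (1 + e^(-x)); it squashes any input into (0, 1), and its derivative can be expressed through the activation itself, which is what makes back propagation cheap to compute:

    import numpy as np

    def sigmoid(x):
        """Squash any real input into (0, 1): large positive -> ~1, large negative -> ~0."""
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_derivative(activation):
        """Derivative written in terms of the activation value, as used in back propagation."""
        return activation * (1.0 - activation)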

  19. Bias node • The slide compares a neuron with no bias and with a bias node
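
The bias node is an extra input fixed at 1, so every neuron gets one additional weight that shifts its activation threshold; a tiny illustration with made-up weights:

    import numpy as np

    x = np.array([0.2, 0.7])    # ordinary inputs
    w = np.array([0.5, -0.3])   # their weights

    no_bias = x @ w                                       # weighted sum can only pivot around 0
    with_bias = np.append(x, 1.0) @ np.append(w, 0.8)     # the bias weight 0.8 shifts the sum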

  20. NN Training: a trial-and-error method • Initial weights have to be small enough • Feed the network a sample data set (the training set) • Get the output values • Use the error (output minus target value) as the criterion of success in the training algorithm • Each change is small, so the number of iterations is large

  21. Algorithm overview • Initialise the network with small random weights • maxWeight < 5.0 / (inputNodesNum * maxInputNodeValue) • Repeat for each input pattern in the input collection: • Present the input pattern to the input layer of the network • Get the output values • Calculate the network’s summary error • Reduce the error by adjusting the NN weights (back propagation): • Propagate an error value back to each hidden neuron, proportional to its contribution to the network’s error • Adjust the weights feeding each hidden neuron to reduce its contribution to the error for this input pattern • Repeat step 2 until the network is suitably trained.
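
A compact sketch of this training loop for one hidden layer; the weight-initialization bound follows the formula on this slide, while the learning rate, layer sizes and stopping criteria are illustrative:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(patterns, targets, hidden_nodes=30, learning_rate=0.1,
              max_error=0.01, max_iterations=10_000):
        """Train a one-hidden-layer network with back propagation (minimal sketch)."""
        input_nodes, output_nodes = patterns.shape[1], targets.shape[1]
        # Small random weights: maxWeight < 5.0 / (inputNodesNum * maxInputNodeValue).
        max_weight = 5.0 / (input_nodes * patterns.max())
        rng = np.random.default_rng(0)
        w_hidden = rng.uniform(-max_weight, max_weight, (input_nodes, hidden_nodes))
        w_output = rng.uniform(-max_weight, max_weight, (hidden_nodes, output_nodes))

        for _ in range(max_iterations):
            total_error = 0.0
            for x, target in zip(patterns, targets):
                # Forward pass.
                hidden = sigmoid(x @ w_hidden)
                output = sigmoid(hidden @ w_output)
                error = output - target
                total_error += float(np.sum(error ** 2))
                # Back propagation: output deltas, then hidden deltas proportional
                # to each hidden neuron's contribution to the error.
                delta_out = error * output * (1.0 - output)
                delta_hid = (delta_out @ w_output.T) * hidden * (1.0 - hidden)
                # Adjust the weights to reduce the error for this pattern.
                w_output -= learning_rate * np.outer(hidden, delta_out)
                w_hidden -= learning_rate * np.outer(x, delta_hid)
            if total_error < max_error:   # network is suitably trained
                break
        return w_hidden, w_output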

  22. Back Propagation • A way to reduce the summary NN error and improve its performance • The “blind paratrooper” method • 3 steps

  23. What can be customized in a NN? • Number of nodes • Acceptable network error • Maximum number of training iterations • Learning rate
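
These parameters could be gathered into a single configuration structure; the default values below are illustrative, not the values used in the Hub project:

    from dataclasses import dataclass

    @dataclass
    class NetworkConfig:
        input_nodes: int = 64 * 48      # number of nodes (input layer)
        hidden_nodes: int = 30          # number of nodes (hidden layer)
        output_nodes: int = 4           # number of nodes (output layer, one per queue state)
        target_error: float = 0.01      # acceptable network error to stop training
        max_iterations: int = 10_000    # maximum number of training iterations
        learning_rate: float = 0.1      # learning rate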

  24. Where NNs have already been used • Image/sound recognition • Stock market • Data classification • Medicine • Credit card fraud detection • Forecast engines • Geo-routing systems • Aviation • NASA • Etc.

  25. Questions
