Mathematical Analysis for Training ANNs Using Basic Learning Algorithms

Author Affiliations

  • Dept. of Computer Science and Engineering, Institute of Technology, Guru Ghasidas Vishwavidyalaya (Central University), Bilaspur, CG, India

Res. J. Computer & IT Sci., Volume 4, Issue 7, Pages 6-13, July (2016)

Abstract

An Artificial Neural Network (ANN) is a branch of computer science concerned with constructing programs that are analogous to the working of the human brain. An ANN has three basic entities: the artificial neuron, the network topology, and the learning rule. In this paper we examine some of the basic learning rules. The pattern in which the units of an ANN are structured depends on the training algorithm used to train the network; the learning rule therefore shapes the design of the ANN. We discuss two classes of learning algorithms, distinguished by the procedure followed during training: supervised and unsupervised learning. The learning rules covered include error-correction learning, Hebbian learning, and competitive learning. The basic structure employed by each learning algorithm is presented, and its characteristic features are derived from a mathematical explanation of its working. Supervised learning, which employs a teacher during training, generally produces accurate results: the teacher, by virtue of experience in the field for which the network is constructed, supplies the desired response and thereby guides the process toward correct and consistent output. Unsupervised learning instead relies on a set of adaptation rules to modify the adaptable parameters of the network; because the system is unaware of the desired response and is guided by a quasi-biological process, such learning is typically less accurate.
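The three weight-update rules named above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's own implementation: the function names, the learning rate `eta`, and the use of Euclidean distance to pick the winner in competitive learning are our assumptions; the update equations themselves are the standard delta rule (Δw = η(d − y)x), Hebb's rule (Δw = η y x), and winner-take-all competitive learning.

```python
import numpy as np

def error_correction_update(w, x, d, eta=0.1):
    # Delta rule (supervised): move weights in proportion to the error (d - y).
    y = np.dot(w, x)                      # actual response of a linear unit
    return w + eta * (d - y) * x

def hebbian_update(w, x, eta=0.1):
    # Hebb's rule (unsupervised): strengthen a weight when input and output
    # are active together.
    y = np.dot(w, x)
    return w + eta * y * x

def competitive_update(W, x, eta=0.1):
    # Winner-take-all (unsupervised): only the unit whose weight vector is
    # closest to x moves, and it moves toward x.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W = W.copy()
    W[winner] += eta * (x - W[winner])
    return W
```

For example, repeated calls to `error_correction_update` with a fixed `(x, d)` pair drive the unit's output `np.dot(w, x)` toward the desired response `d`, which is the sense in which the teacher "guides" supervised training.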
