Acta Scientific Computer Sciences

Research Article | Volume 4, Issue 12

A Multi-hyperplane Separation Method to Train Shallow Classifiers

Ákos Hajnal1,2*

1Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Laboratory of Parallel and Distributed Systems, Hungary
2Óbuda University, John von Neumann Faculty of Informatics, Hungary

*Corresponding Author: Ákos Hajnal, Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Laboratory of Parallel and Distributed Systems, Hungary.

Received: September 23, 2022; Published: November 08, 2022

Abstract

This paper presents a novel approach to training classifiers implemented as shallow neural networks. The proposed solution is based on the original Perceptron algorithm but extends it to the multi-hyperplane case; consequently, it can solve problems that are not linearly separable. Besides its simplicity, an advantage of the method is its tolerance to imbalanced data, which often occurs in practice. The applicability of the method has been demonstrated on several artificial datasets and on real-life datasets.
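The full multi-hyperplane algorithm is developed in the body of the paper and is not reproduced here. As a point of reference, a minimal sketch of the classic mistake-driven Perceptron update on which the method builds might look as follows; this is Python/NumPy, and the function and parameter names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Classic mistake-driven Perceptron (single separating hyperplane).

    X: (n_samples, n_features) feature matrix
    y: labels in {-1, +1}
    Returns the weight vector w and bias b of the learned hyperplane.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            # Mistake-driven rule: update only when a sample is misclassified.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:
            break  # converged: the training set is linearly separated
    return w, b
```

The multi-hyperplane extension described in the abstract generalizes this single-hyperplane rule so that problems that are not linearly separable can also be handled by a shallow network of such units.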

Keywords: Machine Learning; Artificial Neural Networks; Shallow Neural Network; Classification; Perceptron; Multi-Hyperplane Separation; Mistake-Driven Algorithm


Citation

Ákos Hajnal. “A Multi-hyperplane Separation Method to Train Shallow Classifiers”. Acta Scientific Computer Sciences 4.12 (2022): 02-07.

Copyright

© 2022 Ákos Hajnal. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.



