Submitted for publication (preprint)
New bounds on classification error rates for the error-correcting output code (ECOC) approach in machine learning are presented. These bounds decay exponentially in the codeword length and theoretically validate the effectiveness of the ECOC approach. Bounds are derived for two different models: the first under the assumption that all base classifiers are independent, and the second under the assumption that all base classifiers are mutually correlated up to first order. Moreover, we perform ECOC classification on six datasets and compare the resulting error rates with our bounds to experimentally validate our work and show the effect of correlation on classification accuracy.
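For readers unfamiliar with the setup, a minimal sketch of ECOC decoding may help: each class is assigned a binary codeword, each base classifier predicts one bit, and the final label is the class whose codeword is nearest in Hamming distance to the predicted bit vector. The code matrix below is a hypothetical toy example, not taken from the paper.

```python
# Illustrative ECOC decoding sketch (toy example, not the paper's code).
# Hypothetical 4-class code matrix with 6-bit codewords; each column
# corresponds to one base (binary) classifier.
CODE_MATRIX = {
    "A": (0, 0, 0, 1, 1, 1),
    "B": (0, 1, 1, 0, 0, 1),
    "C": (1, 0, 1, 0, 1, 0),
    "D": (1, 1, 0, 1, 0, 0),
}

def hamming(u, v):
    """Number of positions where two bit vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(predicted_bits):
    """Return the class whose codeword is closest in Hamming distance."""
    return min(CODE_MATRIX, key=lambda c: hamming(CODE_MATRIX[c], predicted_bits))

# One base classifier flipped the last bit of class C's codeword,
# yet decoding still recovers "C" (Hamming distance 1 vs. 3, 3, 5).
print(decode((1, 0, 1, 0, 1, 1)))  # -> C
```

Longer codewords tolerate more flipped bits, which is the intuition behind bounds that decay exponentially in codeword length.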
Hieu D. Nguyen, Mohammed Sarosh Khan, Nicholas Kaegi, Shen-Shyang Ho, Jonathan Moore, Logan Borys, & Lucas Lavalva. (2021) Ensemble Learning using Error Correcting Output Codes: New Classification Error Bounds. arXiv:2109.08967 [cs.LG]
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.