C-Loss-Based Doubly Regularized Extreme Learning Machine

Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323:533–6.

Vapnik V, Golowich S, Smola A. Support vector method for function approximation, regression estimation, and signal processing. Proc 9th Int Conf Neural Inf Process Syst. 1996;281–7.

Furfaro R, Barocco R, Linares R, Topputo F, Reddy V, Simo J, et al. Modeling irregular small bodies gravity field via extreme learning machines and Bayesian optimization. Adv Space Res. 2020;67(1):617–38.

Huang GB, Zhu QY, Siew CK. Extreme learning machine: theory and applications. Neurocomputing. 2006;70(1–3):489–501.

Kaleem K, Wu YZ, Adjeisah M. Consonant phoneme based extreme learning machine (ELM) recognition model for foreign accent identification. Proc World Symp Software Eng. 2019;68–72.

Liu X, Huang H, Xiang J. A personalized diagnosis method to detect faults in gears using numerical simulation and extreme learning machine. Knowl Based Syst. 2020;195(1): 105653.

Felix A, Daniela G, Liviu V, Mihaela-Alexandra P. Neural network approaches for children's emotion recognition in intelligent learning applications. Proc 7th Int Conf Education and New Learning Technologies. 2015;3229–39.

Huang GB, Zhou H, Ding X. Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B. 2011;42(2):513–29.

Huang S, Zhao G, Chen M. Tensor extreme learning design via generalized Moore-Penrose inverse and triangular type-2 fuzzy sets. Neural Comput Appl. 2018;31:5641–51.

Bai Z, Huang GB, Wang D. Sparse extreme learning machine for classification. IEEE Trans Cybern. 2014;44(10):1858–70.

Wang Y, Yang L, Yuan C. A robust outlier control framework for classification designed with family of homotopy loss function. Neural Netw. 2019;112:41–53.

Deng WY, Zheng Q, Lin C. Regularized extreme learning machine. Proc IEEE Symp Comput Intell Data Mining. 2009;389–95.

Balasundaram S, Gupta D. 1-Norm extreme learning machine for regression and multiclass classification using Newton method. Neurocomputing. 2014;128:4–14.

De Mol C, De Vito E, Rosasco L. Elastic-net regularization in learning theory. J Complexity. 2009;25(2):201–30.

Luo X, Chang XH, Ban XJ. Regression and classification using extreme learning machine based on L1-norm and L2-norm. Neurocomputing. 2016;174:179–86.

Singh A, Pokharel R, Principe JC. The C-loss function for pattern classification. Pattern Recognit. 2014;47(1):441–53.

Zhao YP, Tan JF, Wang JJ. C-loss based extreme learning machine for estimating power of small-scale turbojet engine. Aerosp Sci Technol. 2019;89(6):407–19.

Jing TT, Xia HF, Ding ZM. Adaptively-accumulated knowledge transfer for partial domain adaptation. Proc 28th ACM Int Conf Multimedia. 2020;1606–14.

Fu YY, Zhang M, Xu X, et al. Partial feature selection and alignment for multi-source domain adaptation. Proc IEEE/CVF Conf Comput Vis Pattern Recognit. 2021;16654–63.

Khalajmehrabadi A, Gatsis N, Pack D. A joint indoor WLAN localization and outlier detection scheme using LASSO and Elastic-Net optimization techniques. IEEE Trans Mob Comput. 2017;16(8):1–1.

Boyd S, Vandenberghe L, Faybusovich L. Convex optimization. IEEE Trans Automat Contr. 2006;51(11):1859.

Huang GB, Wang DH, Lan Y. Extreme learning machines: a survey. Int J Mach Learn Cyb. 2011;2(2):107–22.

Peng HY, Liu CL. Discriminative feature selection via employing smooth and robust hinge loss. IEEE T Neur Net Lear. 2019;99:1–15.

Lei Z, Mammadov MA, Yearwood J. From convex to nonconvex: a loss function analysis for binary classification. Proc 2010 IEEE Int Conf Data Mining Workshops. 2010;1281–8.

Hajiabadi H, Molla D, Monsefi R, et al. Combination of loss functions for deep text classification. Int J Mach Learn Cyb. 2019;11:751–61.

Hajiabadi H, Monsefi R, Yazdi HS. RELF: robust regression extended with ensemble loss function. Appl Intell. 2018;49:473.

Zou H, Hastie T. Addendum: Regularization and variable selection via the elastic net. J R Stat Soc Ser B. 2005;67(5):768.

Golub GH, Van Loan CF. Matrix computations. 3rd ed. Baltimore: Johns Hopkins University Press; 1996.

Dinoj S. Swiss roll datasets. http://people.cs.uchicago.edu/~dinoj/manifold/swissroll.html. Accessed 12 Apr 2021.

UCI machine learning repository. http://archive.ics.uci.edu/ml/datasets.php. Accessed 12 Apr 2021.

Kaggle datasets. https://www.kaggle.com/. Accessed 12 Apr 2021.

Hua XG, Ni YQ, Ko JM, et al. Modeling of temperature-frequency correlation using combined principal component analysis and support vector regression technique. J Comput Civil Eng. 2007;21(2):122–35.

Frost PA, Kailath T. An innovations approach to least-squares estimation, part III: nonlinear estimation in white Gaussian noise. IEEE Trans Automat Contr. 1971;16(3):217–26.

Demšar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006;7:1–30.

Iman RL, Davenport JM. Approximations of the critical region of the Friedman statistic. Commun Stat Theor Methods. 1980;9(6):571–95.

Fei Z, Webb GI, Suraweera P, et al. Subsumption resolution: an efficient and effective technique for semi-naive Bayesian learning. Mach Learn. 2012;87(1):93–125.
