A Novel Sparse Learning Method: Compressible Bayesian Elastic Net Model
@Article{JICS-6-295,
author = {Keyang Cheng and Qirong Mao and Xiaoyang Tan and Yongzhao Zhan},
title = {A Novel Sparse Learning Method: Compressible Bayesian Elastic Net Model},
journal = {Journal of Information and Computing Science},
year = {2024},
volume = {6},
number = {4},
pages = {295--302},
abstract = {In this paper, we study the combination of compression and the Bayesian elastic net. By including a compression operation in the ℓ1 and ℓ2 regularization, the assumption of model sparsity is relaxed to compressibility: model coefficients are compressed before being penalized, and sparsity is achieved in a compressed domain rather than in the original space. We focus on the design of compression operations, by which we can encode various compressibility assumptions and inductive biases. We show that the use of a compression operation provides an opportunity to leverage auxiliary information from various sources. The compressible Bayesian elastic net has two other major advantages. Firstly, as a Bayesian method, the distributional results on the estimates are straightforward, making statistical inference easier. Secondly, it chooses the two penalty parameters simultaneously, avoiding the “double shrinkage problem” of the elastic net method. We conduct extensive experiments on brain-computer interfacing, handwritten character recognition and text classification. Empirical results show clear improvements in prediction performance from including compression in the Bayesian elastic net. We also analyze the learned model coefficients under appropriate compressibility assumptions, which further demonstrates the advantages of learning compressible models instead of sparse models.
},
issn = {1746-7659},
doi = {https://doi.org/},
url = {http://global-sci.org/intro/article_detail/jics/22671.html}
}
TY - JOUR
T1 - A Novel Sparse Learning Method: Compressible Bayesian Elastic Net Model
AU - Keyang Cheng
AU - Qirong Mao
AU - Xiaoyang Tan
AU - Yongzhao Zhan
JO - Journal of Information and Computing Science
VL - 6
IS - 4
SP - 295
EP - 302
PY - 2024
DA - 2024/01
SN - 1746-7659
DO - http://doi.org/
UR - https://global-sci.org/intro/article_detail/jics/22671.html
KW - Sparse Learning
KW - compression operation
KW - Bayesian elastic net
AB - In this paper, we study the combination of compression and the Bayesian elastic net. By including a compression operation in the ℓ1 and ℓ2 regularization, the assumption of model sparsity is relaxed to compressibility: model coefficients are compressed before being penalized, and sparsity is achieved in a compressed domain rather than in the original space. We focus on the design of compression operations, by which we can encode various compressibility assumptions and inductive biases. We show that the use of a compression operation provides an opportunity to leverage auxiliary information from various sources. The compressible Bayesian elastic net has two other major advantages. Firstly, as a Bayesian method, the distributional results on the estimates are straightforward, making statistical inference easier. Secondly, it chooses the two penalty parameters simultaneously, avoiding the “double shrinkage problem” of the elastic net method. We conduct extensive experiments on brain-computer interfacing, handwritten character recognition and text classification. Empirical results show clear improvements in prediction performance from including compression in the Bayesian elastic net. We also analyze the learned model coefficients under appropriate compressibility assumptions, which further demonstrates the advantages of learning compressible models instead of sparse models.
ER -
Keyang Cheng, Qirong Mao, Xiaoyang Tan and Yongzhao Zhan. (2024). A Novel Sparse Learning Method: Compressible Bayesian Elastic Net Model. Journal of Information and Computing Science. 6 (4). 295-302.
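For orientation, the following is a minimal sketch of the kind of objective the abstract describes: an elastic net penalty applied to compressed coefficients. The linear compression operator $\Phi$, the penalty weights $\lambda_1$ and $\lambda_2$, and the squared loss are illustrative assumptions here, not the paper's exact (Bayesian) formulation:

$$
\hat{\beta} \;=\; \arg\min_{\beta}\; \|y - X\beta\|_2^2 \;+\; \lambda_1 \|\Phi\beta\|_1 \;+\; \lambda_2 \|\Phi\beta\|_2^2 .
$$

Here $\Phi$ encodes the compressibility assumption (for example a wavelet or differencing transform), so the ℓ1/ℓ2 shrinkage acts on the compressed coefficients $\Phi\beta$ rather than on $\beta$ directly; taking $\Phi$ to be the identity recovers the ordinary elastic net.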