Mathematical Sciences Lecture Series ("数理讲堂"), 2024, No. 06
Published: 2024-05-14 09:16:47

Topic: Estimation of Linear Functionals in High-Dimensional Linear Models: From Sparsity to Nonsparsity

Time: April 24, 2024, 10:00-11:30

Venue: Tencent Meeting (ID: 438-014-7163)

Host: Prof. Jiang Rong (姜荣)

About the speaker:

Zhao Junlong (赵俊龙) is a professor at the School of Statistics, Beijing Normal University. His research focuses on statistics and machine learning, including high-dimensional data analysis, statistical machine learning, and robust statistics. He has published nearly fifty SCI papers in statistical journals, with some results appearing in leading international statistics journals such as JRSS-B, AOS, JASA, and Biometrika. He has led several projects funded by the National Natural Science Foundation of China (NSFC) and participated in a key NSFC project. He serves as a director or executive director of several academic societies, including the High-Dimensional Data Branch of the Chinese Association for Applied Statistics and the Beijing Big Data Society.

Abstract:

High-dimensional linear models are commonly used in practice. In many applications, one is interested in linear transformations $\beta^\top x$ of the regression coefficients $\beta\in \mathbb{R}^p$, where $x$ is a specific point that is not required to follow the same distribution as the training data. One common approach is the plug-in technique, which first estimates $\beta$ and then plugs the estimator into the linear transformation for prediction. Despite its popularity, estimation of $\beta$ can be difficult in high-dimensional problems. Commonly used assumptions in the literature include that the signal of the coefficients $\beta$ is sparse and that the predictors are weakly correlated. These assumptions, however, may not be easily verified and can be violated in practice. When $\beta$ is non-sparse or the predictors are strongly correlated, estimation of $\beta$ can be very difficult. In this paper, we propose a novel pointwise estimator for linear transformations of $\beta$. This new estimator greatly relaxes the common assumptions for high-dimensional problems and is adaptive to the degree of sparsity of $\beta$ and the strength of correlations among the predictors. In particular, $\beta$ can be sparse or non-sparse, and the predictors can be strongly or weakly correlated. The proposed method is simple to implement. Numerical and theoretical results demonstrate the competitive advantages of the proposed method for a wide range of problems.
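For a concrete picture of the plug-in baseline the abstract contrasts against, here is a minimal sketch in Python: estimate $\beta$ from the training data (a cross-validated Lasso is used here as one common choice), then evaluate $\hat\beta^\top x$ at a query point. The Lasso choice, the synthetic sparse data, and all variable names are assumptions made for illustration; the adaptive pointwise estimator proposed in the talk is not specified in the abstract and is not implemented here.

```python
# Sketch of the plug-in baseline: estimate beta, then form beta_hat^T x.
# Assumes a Lasso estimator and synthetic sparse data for illustration only;
# the talk's adaptive pointwise estimator is not reproduced here.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500                      # high-dimensional setting: p > n
beta = np.zeros(p)
beta[:5] = 1.0                       # a sparse truth, chosen for this example
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

# Step 1: estimate beta from the training data.
beta_hat = LassoCV(cv=5).fit(X, y).coef_

# Step 2: plug the estimate into the linear functional at a new point x,
# which need not follow the training distribution.
x_new = rng.standard_normal(p)
print("plug-in estimate of beta^T x:", x_new @ beta_hat)
print("true value of beta^T x:      ", x_new @ beta)
```

When $\beta$ is non-sparse or the predictors are strongly correlated, the first step of this baseline (estimating $\beta$ well) is exactly what becomes difficult, which is the gap the proposed pointwise estimator is designed to address.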
