Topics of the Conference
The conference will focus on the fundamental mathematical theories and methods that have the potential to bridge the significant gap between the relatively limited theoretical foundations and the revolutionary practical success of Deep Learning (DL) and Large Language Models (LLMs). Topics of the conference include, but are not limited to:
1. Expressive Power of Deep Neural Networks and LLMs, including approximation of continuous functions, memorization of finite datasets, and capacity measures such as VC dimension and Rademacher complexity.
2. Statistical Properties of DL and LLMs, including generalization bounds in various scenarios: algorithm-independent or algorithm-dependent, i.i.d. or out-of-distribution (OOD) data, and overparametrized models.
3. Optimization Methods for Training, including convergence rates, consistency of the learning process, computational complexity, and training of overparametrized models.
4. Theory of LLMs, including scaling laws and the 'emergence' phenomenon.
5. DL with Rich Mathematical Structures, including graph neural networks, geometric learning, and PDE-informed learning.
6. Robustness and Security of DL and LLMs, including the theory of adversarial learning, provable robustness, and the trade-off between accuracy and robustness.
7. AI-driven Computation and Reasoning, including DL-driven PDE solving, optimization, optimal control, and deep reasoning.
Plenary Speakers
Pierre Alquier ESSEC Business School - Asia Pacific
Sanjeev Arora Princeton University
Peter Bartlett University of California, Berkeley
Wei Cai Southern Methodist University
Weinan E Peking University
Jianqing Fan Princeton University
Xianfeng Gu Stony Brook University
Lei Guo AMSS, Chinese Academy of Sciences
Anders C. Hansen University of Cambridge
Gitta Kutyniok Ludwig-Maximilians-Universität München
Zhenguo Li Huawei Noah's Ark Lab
Huazhen Lin Southwest University of Finance and Economics
Zhiquan Luo Chinese University of Hong Kong, Shenzhen
Yi Ma University of Hong Kong
Hao Ni University College London
Weijie Su University of Pennsylvania
Zongben Xu Xi'an Jiaotong University
Ya-Xiang Yuan AMSS, Chinese Academy of Sciences
Invited Speakers
Bin Dong Peking University
Cong Fang Peking University
Dan Hu Shanghai Jiao Tong University
Wenbing Huang Renmin University of China
Yuling Jiao Wuhan University
Jinglai Li University of Birmingham
Ming Li Zhejiang Normal University
Wenda Li University of Edinburgh
Yiming Li Nanyang Technological University
Qian Lin Tsinghua University
Tao Lin Westlake University
Zhouchen Lin Peking University
Qing Ling Sun Yat-sen University
Yong Liu Renmin University of China
Zhengying Liu Huawei Noah's Ark Lab
Luo Luo Fudan University
Shaogao Lv Nanjing Audit University
Qi Meng AMSS, Chinese Academy of Sciences
Deyu Meng Xi'an Jiaotong University
Zuoqiang Shi Tsinghua University
Hang Su Tsinghua University
Yisen Wang Peking University
Xiao Wang Beihang University
Xiaoyu Wang Hong Kong University of Science and Technology
Xian Wei East China Normal University
Zhewei Wei Renmin University of China
Ke Wei Fudan University
Lei Wu Peking University
Tailin Wu Westlake University
Kelin Xia Nanyang Technological University
Hehu Xie AMSS, Chinese Academy of Sciences
Zhiqin Xu Shanghai Jiao Tong University
Zhengfeng Yang East China Normal University
Zhou Yu East China Normal University
Kun Yuan Peking University
Bohua Zhan Huawei 2012 Programming Language Laboratory
Jiaojiao Zhang KTH Royal Institute of Technology
Shihua Zhang AMSS, Chinese Academy of Sciences
Chuan Zhou AMSS, Chinese Academy of Sciences
Tao Zhou AMSS, Chinese Academy of Sciences
The conference is organized and supported by
Hua Loo-Keng Center for Mathematical Sciences
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
https://amss.cas.cn/