Hao Liang

Publications

Bridging Distributional and Risk-Sensitive Reinforcement Learning: Balancing Statistical, Computational, and Risk Considerations [link]
Hao Liang
Presented at ICML 2024 Workshop on FoRLaC

Bridging Distributional and Risk-sensitive Reinforcement Learning with Provable Regret Bounds [link] JMLR
Hao Liang, Zhi-Quan Luo
An earlier version presented at NeurIPS 2021 Workshop on Ecological Theory of RL

An Economic and Low-carbon Dispatch Algorithm for Microgrids with Electric Vehicles PSET 2024
Jiayu Cheng, Hao Liang, Xiaoying Tang, Shuguang Cui
Best oral presentation

Optimistic Thompson Sampling for No-regret Learning in Unknown Games [arXiv]
Yingru Li, Liangqi Liu, Wenqiang Pu, Hao Liang, Zhi-Quan Luo

Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures [AISTATS] AISTATS 2024
Hao Liang, Zhi-Quan Luo
An earlier version presented at ICML 2023 Workshop on New Frontiers in Learning, Control, and Dynamical Systems

A Distribution Optimization Framework for Confidence Bounds of Risk Measures [ICML] ICML 2023
Hao Liang, Zhi-Quan Luo


Working Papers

Is Pure Exploitation Sufficient for Sequential Decision-Making with Exogenous Information?
Hao Liang

Revisiting Minimax Lower Bounds in Unknown Matrix Game
Hao Liang, Yingru Li, Zhi-Quan Luo

A Convergence Analysis of the Categorical Distributional Reinforcement Learning Algorithm
Hao Liang, Zhi-Quan Luo