Hsin-Yu Liu’s Website
About me
I am an accomplished AI/ML researcher specializing in Deep Reinforcement Learning, Deep Learning, and Machine Learning. I have a proven track record of leveraging cutting-edge AI/ML technologies to tackle complex technical and business challenges, delivering innovative and impactful solutions.
I hold a Ph.D. in Computer Engineering from the University of California, San Diego (UCSD), where my research focused on advancing the frontiers of AI and its practical applications.
My PI is Professor Rajesh K. Gupta, and our lab is the Microelectronic Embedded Systems Laboratory (MESL).
Research focus
- Reinforcement Learning: online/offline policy regularization, offline-to-online RL, and transfer RL
Animation: Agarwal, R., Schuurmans, D., & Norouzi, M. (2020). An Optimistic Perspective on Offline Reinforcement Learning. International Conference on Machine Learning (ICML).
Papers
- Policy Regularization in Model-Free Building Control via Comprehensive Approaches from Offline to Online Reinforcement Learning
Ph.D. Dissertation, Jun. 2024
- Developed a novel policy regularization framework for reinforcement learning in HVAC control systems, ensuring safe and efficient operation in real-world settings.
- Released the first open-source building batch reinforcement learning dataset, enabling benchmarking and advancing research in energy-efficient building management.
- Adaptive Policy Regularization for Offline-to-Online Reinforcement Learning in HVAC Control
NeurIPS CCAI & ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys), Nov. 2024
- Automatic policy regularization fine-tuning from offline to online via average Q-value estimators (see the sketch below)
- 40.3% performance improvement over state-of-the-art methods
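For intuition, here is a minimal sketch of how an average-Q-value signal could drive the regularization weight during offline-to-online fine-tuning. The function name, the ratio rule, and the clipping are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def adaptive_reg_weight(q_online_batch, q_offline_batch, base_weight=1.0, eps=1e-8):
    """Scale the policy-regularization weight by a ratio of average Q estimates.

    While the online policy's average Q estimate lags behind the offline
    (behavior) estimate, the weight stays high and fine-tuning remains
    conservative; as the online estimate catches up, the constraint relaxes.
    The ratio rule, clipping, and names are illustrative assumptions.
    """
    avg_online = float(np.mean(q_online_batch))
    avg_offline = float(np.mean(q_offline_batch))
    ratio = avg_offline / (abs(avg_online) + eps)
    # Clip so the weight neither explodes nor goes negative during fine-tuning.
    return base_weight * float(np.clip(ratio, 0.0, 1.0))

# Example: online estimates overtaking the offline baseline shrink the weight.
print(adaptive_reg_weight(q_online_batch=[12.0, 11.5], q_offline_batch=[6.0, 5.5]))
```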
- Rule-based Policy Regularization for Reinforcement Learning-based Building Control
ACM International Conference on Future Energy Systems (e-Energy), Jun. 2023
- Adaptively combines the existing rule-based policy and the RL policy, preferring whichever has the higher estimated value during policy learning; applicable in both online and offline settings (see the sketch below)
- More than 40% increase in average episode reward for both the online and offline approaches
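A toy sketch of the value-based switching idea: trust the learned action only when the critic values it at least as highly as the existing rule-based action. The interfaces and names here are assumptions for illustration, not the paper's implementation:

```python
def select_action(state, q_estimate, rl_policy, rule_policy):
    """Pick whichever candidate action the critic values more.

    q_estimate(state, action) -> scalar Q estimate;
    rl_policy(state) / rule_policy(state) -> candidate actions.
    The learned action is used only when it is valued at least as highly
    as the existing rule-based action.
    """
    a_rl, a_rule = rl_policy(state), rule_policy(state)
    return a_rl if q_estimate(state, a_rl) >= q_estimate(state, a_rule) else a_rule

# Toy usage with stand-in callables.
q = lambda s, a: -abs(a - 0.5 * s)   # critic that prefers actions near 0.5 * state
rl = lambda s: 0.4 * s               # learned RL policy
rule = lambda s: 0.6 * s             # existing rule-based controller
print(select_action(1.0, q, rl, rule))   # both are 0.1 away; ties go to the RL action
```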
- B2RL: An Open-Source Dataset for Building Batch Reinforcement Learning
ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys) - RL Energy Management, Nov. 2022
- Released the first open-source Building Batch RL dataset for benchmarking purposes (an illustrative loading sketch is shown below)
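For offline RL benchmarking, such data is typically consumed as (state, action, reward, next state) tuples. The snippet below is only a hypothetical example with an assumed in-memory schema; the dataset's actual file layout and column names are defined in its release, not here:

```python
import pandas as pd

# Stand-in for the released data: real B2RL columns and files may differ.
df = pd.DataFrame({
    "state_temp": [22.1, 22.4], "state_occup": [1, 0],
    "action": [0.3, 0.1], "reward": [-1.2, -0.4],
    "next_state_temp": [22.4, 22.2], "next_state_occup": [0, 0],
})

state_cols = [c for c in df.columns if c.startswith("state_")]
next_cols = [c for c in df.columns if c.startswith("next_state_")]

# Batch transitions ready to feed an offline RL learner.
transitions = list(zip(df[state_cols].to_numpy(), df["action"].to_numpy(),
                       df["reward"].to_numpy(), df[next_cols].to_numpy()))
print(f"loaded {len(transitions)} transitions")
```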
- Safe HVAC Control via Batch Reinforcement Learning
International Conference on Cyber-Physical Systems (ICCPS), May 2022
- Pioneered the development and deployment of Batch RL in real-world building environments
- Incorporated KL divergence to penalize large policy updates, achieving a 16.7% energy reduction (see the sketch below)
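A minimal PyTorch-style sketch of penalizing large policy updates with a KL term, under assumed discrete actions and an advantage-weighted objective; the coefficient and exact loss are illustrative, not the deployed controller's code:

```python
import torch
import torch.nn.functional as F

def penalized_policy_loss(new_logits, old_logits, actions, advantages, kl_coef=0.1):
    """Actor loss with a KL penalty that discourages large policy updates.

    new_logits / old_logits: [batch, n_actions] logits from the current and
    behavior policies; actions: [batch] taken actions; advantages: [batch].
    """
    new_log_probs = F.log_softmax(new_logits, dim=-1)
    old_probs = F.softmax(old_logits, dim=-1)
    # KL(old || new): kl_div takes log-probs as input and probs as target.
    kl = F.kl_div(new_log_probs, old_probs, reduction="batchmean")
    # Advantage-weighted log-likelihood of the actions actually taken.
    logp_taken = new_log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(advantages * logp_taken).mean() + kl_coef * kl

# Toy call with random tensors.
b, n = 4, 3
loss = penalized_policy_loss(torch.randn(b, n), torch.randn(b, n),
                             torch.randint(0, n, (b,)), torch.randn(b))
print(loss.item())
```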
- Offline Reinforcement Learning with Munchausen Regularization
NeurIPS Offline RL Workshop, Dec. 2021
- Developed RL policy regularization techniques that penalize large policy updates via KL divergence (see the sketch below)
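The Munchausen idea adds the scaled log-probability of the taken action to the TD target, which implicitly regularizes successive policies (a KL/entropy penalty in disguise). The sketch below follows the original Munchausen RL target with common default hyperparameters, not necessarily the workshop paper's exact offline variant:

```python
import torch
import torch.nn.functional as F

def munchausen_target(rewards, dones, actions, q_curr, q_next,
                      alpha=0.9, tau=0.03, gamma=0.99, clip_min=-1.0):
    """Munchausen-style TD target with an implicit KL regularization term.

    q_curr, q_next: [batch, n_actions] Q-values for the current and next
    states; rewards, dones, actions: [batch].
    """
    log_pi_curr = F.log_softmax(q_curr / tau, dim=-1)   # soft-max policy at s_t
    log_pi_next = F.log_softmax(q_next / tau, dim=-1)   # soft-max policy at s_{t+1}
    # Munchausen bonus: clipped, scaled log-probability of the taken action.
    log_pi_a = log_pi_curr.gather(1, actions.unsqueeze(1)).squeeze(1)
    bonus = alpha * tau * log_pi_a.clamp(min=clip_min)
    # Soft value of the next state under the soft-max policy.
    soft_v_next = (log_pi_next.exp() * (q_next - tau * log_pi_next)).sum(dim=-1)
    return rewards + bonus + gamma * (1.0 - dones) * soft_v_next

# Toy call with random tensors.
b, n = 4, 3
tgt = munchausen_target(torch.randn(b), torch.zeros(b), torch.randint(0, n, (b,)),
                        torch.randn(b, n), torch.randn(b, n))
print(tgt.shape)
```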
- METRICS 2.0: A machine-learning based optimization system for IC design
Workshop on Open-Source EDA Technology (WOSET), 2018
- Proposed new EDA metrics for EDA-ML studies, marking the first integration of such metrics
- SVM Learning for GFIS Trimer Health Monitoring in Helium-Neon Ion Beam Microscopy
Advanced Process Conference (APC), 2019
- Developed an SVM image classifier for automated trimer health monitoring with >95% precision (see the sketch below)
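A small scikit-learn sketch of the SVM classification setup; random stand-in features replace the actual GFIS emission-image features, and the binary health labels are assumed for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Stand-in data: in the real system the feature vectors would be extracted
# from GFIS emission images, with labels marking healthy vs. degraded trimers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # hypothetical per-image feature vectors
y = rng.integers(0, 2, size=200)      # assumed binary health labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("precision:", precision_score(y_test, clf.predict(X_test), zero_division=0))
```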
Personal Life
I enjoy:
- Heavy metal music
- Baseball (Go Padres!)
- Hiking
- Exploring nature