
Current Projects

Healthcare Optimization and AI


Artificial intelligence (AI) models are transforming healthcare, offering powerful tools to analyze large amounts of data and improve patient care, research, and operational efficiency. These models use patient records, demographic data, public health data, and financial data to forecast future events, support resource allocation decisions, and guide personalized care. Yet even the most sophisticated health AI models often fail to demonstrate impact when used by providers in complex healthcare systems. In this research, we develop new optimization and AI algorithms to improve operational efficiency and patient care in complex healthcare organizations, as well as health AI methods that are robust and transferable across diverse healthcare settings and patient populations. Research in collaboration with the Laboratory for Transformative Administration (LTA) in the Duke Department of Surgery and Duke AI Health.

Selected Publications:

X. Konti, H. Riess, M. Giannopoulos, Y. Shen, M. J. Pencina, N. J. Economou-Zavlanos, M. M. Zavlanos, "Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare," Proc. 63rd IEEE Conference on Decision and Control (CDC), Milan, Italy, Dec. 2024, accepted.

Y. Shen, P. Xu, and M. M. Zavlanos, "Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits," Transactions on Machine Learning Research (TMLR), Jan. 2024. Featured Certification.

Y. Shen, J. Dunn, and M. M. Zavlanos, "Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health," Proc. 61st IEEE Conference on Decision and Control (CDC), Cancun, Mexico, Dec. 2022.

Robotics and Autonomous Systems Learning and Control


Artificial intelligence (AI) is playing a pivotal role in the research and development of robotics and autonomous systems. By equipping machines with the ability to perceive, learn, and reason, AI enables them to understand and respond to complex situations, acquire new skills from experience, adapt to new environments, and improve their performance over time. In this research, we investigate new methods for safe learning and control of autonomous systems, as well as methods for transfer learning across different environments, tasks, and observation modalities. Research supported by AFOSR.

Selected Publications:

P. Jian, E. Lee, Z. I. Bell, M. M. Zavlanos, and B. Chen, "Perception Stitching: Zero-Shot Perception Encoder Transfer for Visuomotor Robot Policies," under review.

P. Jian, E. Lee, Z. I. Bell, M. M. Zavlanos, and B. Chen, "Policy Stitching: Learning Transferable Robot Policies," Proc. 7th Annual Conference on Robot Learning (CoRL), ser. Proc. of Machine Learning Research (PMLR), J. Tan, M. Toussaint, and K. Darvish, Eds., vol. 229, pp. 3789-3808, Nov. 2023.

P. Vlantis, L. J. Bridgeman, and M. M. Zavlanos, "Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe," Proc. 5th Conference on Learning for Dynamics and Control (L4DC), ser. Proc. of Machine Learning Research (PMLR), N. Matni, M. Morari, and G. J. Pappas, Eds., vol. 211, pp. 954-965, Jun. 2023.

Optimization for Machine Learning and AI


Optimization algorithms are at the core of machine learning and AI. In fact, many machine learning and AI models can be viewed as solutions to appropriate optimization problems. In this research, we develop new optimization algorithms for machine learning and AI that can handle distributed agents and data, distribution shifts in the data, and non-stationary environments, and analyze their convergence and complexity properties. Research supported by NSF and AFOSR.
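To make the distributed setting concrete, the following Python sketch runs standard decentralized gradient descent on a synthetic least-squares problem: each agent keeps its own local data, averages its iterate with its neighbors through a mixing matrix, and takes a local gradient step. This is an illustration of the general idea only, not the specific algorithms in the papers below; the network, data, and step size are placeholders.

import numpy as np

# Decentralized gradient descent (DGD) sketch: n agents cooperatively minimize
# (1/n) * sum_i f_i(x), where f_i is a local least-squares loss known only to
# agent i. All problem data below are synthetic placeholders.
rng = np.random.default_rng(0)
n_agents, dim = 4, 5
x_true = rng.normal(size=dim)

# Private local data (A_i, b_i) held by each agent.
local_data = []
for _ in range(n_agents):
    A = rng.normal(size=(20, dim))
    b = A @ x_true + 0.1 * rng.normal(size=20)
    local_data.append((A, b))

def local_grad(i, x):
    # Gradient of f_i(x) = ||A_i x - b_i||^2 / (2 m_i).
    A, b = local_data[i]
    return A.T @ (A @ x - b) / len(b)

# Doubly stochastic mixing matrix for a 4-agent ring network.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

step = 0.05
X = np.zeros((n_agents, dim))  # row i holds agent i's current iterate
for _ in range(500):
    grads = np.array([local_grad(i, X[i]) for i in range(n_agents)])
    X = W @ X - step * grads   # neighbor averaging plus local gradient step

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("distance to x_true:", np.linalg.norm(X.mean(axis=0) - x_true))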

Selected Publications:

S. Wang, Z. Wang, X. Yi, M. M. Zavlanos, K. H. Johansson, and S. Hirche, "Risk-Averse Learning with Non-Stationary Distributions," under review.

Z. Wang, Y. Shen, M. M. Zavlanos, and K. H. Johansson, "Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport," Proc. 38th Annual Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, Dec. 2024, accepted.

Y. Zhang and M. M. Zavlanos, "Cooperative Multi-Agent Reinforcement Learning with Partial Observations," IEEE Transactions on Automatic Control, vol. 69, no. 2, pp. 968-981, Feb. 2024.

Black-Box Optimization and AI


Zeroth-order (or derivative-free) optimization methods enable the optimization of black-box models that are available only in the form of input-output data and are common in simulation-based optimization, training of deep neural networks, and reinforcement learning. In the absence of an explicit model, exact first- or second-order information (gradients or Hessians) is unavailable and cannot be used for optimization. Zeroth-order methods instead rely on input-output data to construct gradient approximations that can be used as descent directions. In this research, we develop new zeroth-order algorithms for distributed and non-stationary optimization and learning problems with reduced variance and improved complexity. Research supported by NSF and AFOSR.
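As a simple illustration of this idea, the Python sketch below optimizes a black-box objective using the classic two-point randomized gradient estimator, which queries the function at two perturbed points and combines the outputs into a descent direction. It is a generic textbook construction shown for intuition, not the residual-feedback oracle or the distributed and non-stationary algorithms developed in this project; the objective, smoothing radius, and step size are placeholders.

import numpy as np

# Zeroth-order optimization sketch: only input-output queries of f are used.
rng = np.random.default_rng(0)

def f(x):
    # Black-box objective (placeholder); assume no gradient access.
    return np.sum((x - 1.0) ** 2) + 0.5 * np.sum(x[:-1] * x[1:])

def zo_gradient(f, x, delta=1e-2):
    # Two-point estimator: g = d/(2*delta) * (f(x + delta*u) - f(x - delta*u)) * u,
    # with u drawn uniformly from the unit sphere; E[g] approximates grad f(x).
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

x = np.zeros(10)
step = 0.02
for _ in range(3000):
    x -= step * zo_gradient(f, x)  # descent step along the estimated gradient

print("final objective value:", f(x))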

Selected Publications:

Z. Wang, X. Yi, Y. Shen, M. M. Zavlanos, and K. H. Johansson, "Asymmetric Feedback Learning in Online Convex Games," under review.

Y. Zhang, Y. Zhou, K. Ji, and M. M. Zavlanos, "A New One-Point Residual-Feedback Oracle for Black-Box Learning and Control," Automatica, vol. 136, art. no. 110006, Feb. 2022.
