Research

In July 2025, I joined BruinML as an Applied Researcher, advised by William Chang. Our work focuses on bandits and MDPs, but we also explore a range of topics—from theoretical problems in optimization and diffusion to applied challenges in medical imaging and financial modeling.

For course and personal projects, see the Projects page.

* Alphabetical authorship

Selected Publications

  1. Accelerating Low-Frequency Convergence for Limited-Angle DBT via Two-Channel Fidelity in PDHG (CT Meeting 2026)
    Taro Iyadomi, Ricardo Parada, Anna Kim, Lily Jiang, Emil Sidky, William Chang
    We enhance the Chambolle-Pock (PDHG) algorithm with a two-channel fidelity approach for limited-angle digital breast tomosynthesis, achieving faster low-frequency convergence while maintaining reconstruction quality on simulated breast phantoms. Accepted to CT Meeting 2026.
    arXiv (coming soon) | GitHub | PDF

Ongoing

  1. Efficient Thompson Sampling for Graph-Structured Bandits
    Developing an efficient Thompson Sampling algorithm for pure exploration in graph-structured bandits, providing theoretical guarantees and practical implementations for large graphs.

  2. Accelerated JKO Schemes for Training Normalizing Flow Neural Networks
    Developing accelerated JKO and Sobolev gradient ascent methods for optimal transport and generative modeling. Momentum-based updates in Wasserstein space enable scalable computation of Wasserstein barycenters and efficient training of normalizing flows as neural ODEs.

  3. Continuous-Time RL (with Prof. Yuhua Zhu, UCLA)
    Exploring theoretical limits of value estimation in continuous-time, infinite-horizon settings by treating Bellman equations as stochastic differential equations.
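The graph-structured, pure-exploration variant above is ongoing work; as background, here is a minimal sketch of standard Beta-Bernoulli Thompson Sampling on an unstructured multi-armed bandit. The arm means and horizon are illustrative values, not taken from the project.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson Sampling on a standard multi-armed bandit.

    true_means and horizon are illustrative parameters for this sketch.
    Returns total reward collected over the horizon.
    """
    rng = random.Random(seed)
    k = len(true_means)
    # Beta(1, 1) uniform prior on each arm's success probability
    successes = [1] * k
    failures = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean estimate for each arm from its posterior, play the argmax
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Bernoulli reward; update the played arm's posterior
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward

# The posterior sampling should concentrate play on the best arm (mean 0.9).
print(thompson_sampling([0.2, 0.5, 0.9], horizon=2000))
```

Graph structure changes this picture by correlating arms through the graph, letting one pull inform posteriors over neighboring arms; the sketch above ignores that and treats arms independently.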