Masatoshi Uehara
Biography
I am a Ph.D. student in the Computer Science department at Cornell (I am on the 2022-2023 academic job market). Previously, I was a Ph.D. student in the Statistics department at Harvard. I am interested in statistical methodology and learning theory for sequential decision making.
My Google Scholar profile is here. My CV is here. I am from Japan. My name in kanji is 上原 雅俊.
My interests are reinforcement learning, causal inference, and online learning. These days, I am working on applications of RL to medicine and the social sciences.
My recent work is summarized as follows:
Statistically Efficient Offline Policy Evaluation: Doubly robust and semiparametrically efficient estimators (1,2), function approximation with minimax loss (1,2). The slide is here.
Robust Offline RL under insufficient coverage: Model-based pessimistic offline RL methods that work under partial coverage. Recent model-free work is here. Talk is here.
Representation learning in Online RL: Model-based RL in low-rank MDPs. Talk is here.
Statistically Efficient RL in POMDPs: OPE with general function approximation, online RL with general function approximation, and computationally and statistically efficient PAC RL with deterministic transitions (emissions can be stochastic). The slides are here.
Causal inference + RL: settings with unmeasured confounders ([1], [2]) and data combination.
Data-driven inverse problems via machine learning methods: semiparametric IV methods for estimating functionals without identification (and how to perform inference on them), and nonparametric IV methods without identification.
My advisor is Nathan Kallus (Cornell). My committee members are Wen Sun (Cornell), Thorsten Joachims (Cornell), Victor Chernozhukov (MIT), Xiao-Li Meng (Harvard), Nan Jiang (UIUC).
Publications
Red indicates that I am the co-first/corresponding author. Blue indicates alphabetical author ordering, following the convention. The other papers use contribution-based ordering.
(Journal Articles)
Nathan Kallus and Masatoshi Uehara. Efficiently breaking the curse of horizon: Double reinforcement learning in infinite-horizon processes. Operations Research, 2021.
Takeru Matsuda, Masatoshi Uehara, and Aapo Hyvarinen. Information criteria for non-normalized models. Journal of Machine Learning Research, 2021.
Nathan Kallus and Masatoshi Uehara. Double reinforcement learning for efficient off-policy evaluation in Markov decision processes. Journal of Machine Learning Research, 2020. (This is a longer version of Double reinforcement learning for efficient and robust off-policy evaluation.)
(Conference Proceedings)
Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, and Masatoshi Uehara. Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness. COLT, 2023. arXiv preprint arXiv:2302.05404.
Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, and Masatoshi Uehara. Inference on strongly identified functionals of weakly identified functions. COLT, 2023. arXiv preprint arXiv:2208.
Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, and Wen Sun. Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. ICML, 2023. arXiv preprint arXiv:2206.12081.
Wenhao Zhan, Masatoshi Uehara, Wen Sun, and Jason D. Lee. PAC Reinforcement Learning for Predictive State Representations. ICLR, 2023.
Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, and Wen Sun. Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. NeurIPS, 2022. arXiv preprint arXiv:2206.12020.
Chengchun Shi, Masatoshi Uehara, Jiawei Huang, and Nan Jiang. A minimax learning approach to off-policy evaluation in partially observable Markov decision processes. ICML (long presentation), 2022. (Slide)
Xuezhou Zhang, Yuda Song, Masatoshi Uehara, Mengdi Wang, Wen Sun, and Alekh Agarwal. Efficient reinforcement learning in block MDPs: A model-free representation learning approach. ICML, 2022. Presented at the RL Theory Virtual Seminar 2021 by Xuezhou.
Masatoshi Uehara, Xuezhou Zhang, and Wen Sun. Representation learning for online and offline RL in low-rank MDPs. ICLR (Spotlight), 2022. Oral paper in the Ecological Theory of Reinforcement Learning Workshop at NeurIPS. Presented at the RL Theory Virtual Seminar 2021. (Talk is here. Slide.)
Masatoshi Uehara and Wen Sun. Pessimistic model-based offline RL: PAC bounds and posterior sampling under partial coverage. ICLR, 2022. Presented at the RL Theory Virtual Seminar 2021. (Talk is here. Slide.)
Jonathan D. Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, and Wen Sun. Mitigating covariate shift in imitation learning via offline data without great coverage. NeurIPS, 2021.
Nathan Kallus, Yuta Saito, and Masatoshi Uehara. Optimal off-policy evaluation from multiple logging policies. ICML, 2021.
Yichun Hu, Nathan Kallus, and Masatoshi Uehara. Fast rates for the regret of offline reinforcement learning. COLT, 2021. Presented at the RL Theory Virtual Seminar on 2021/11/26 by Yichun. (“Minor Revision” requested from Mathematics of Operations Research)
Masatoshi Uehara, Masahiro Kato, and Shota Yasui. Off-policy evaluation and learning for external validity under a covariate shift. NeurIPS (Spotlight), 2020. (Talk is here)
Nathan Kallus and Masatoshi Uehara. Doubly robust off-policy value and gradient estimation for deterministic policies. NeurIPS, 2020. (Talk is here.)
Masatoshi Uehara, Jiawei Huang, and Nan Jiang. Minimax weight and q-function learning for off-policy evaluation. ICML, 2020.
Nathan Kallus and Masatoshi Uehara. Statistically efficient off-policy policy gradients. ICML, 2020.
Nathan Kallus and Masatoshi Uehara. Double reinforcement learning for efficient and robust off-policy evaluation. ICML, 2020.
Masatoshi Uehara, Takeru Matsuda, and Jae Kwang Kim. Imputation estimators for unnormalized models with missing data. AISTATS, 2020.
Masatoshi Uehara, Takafumi Kanamori, Takashi Takenouchi, and Takeru Matsuda. Unified estimation framework for unnormalized models with statistical efficiency. AISTATS, 2020.
Nathan Kallus and Masatoshi Uehara. Intrinsically efficient, stable, and bounded off-policy evaluation for reinforcement learning. NeurIPS, 2019.
(Unpublished Articles Under Revision)
Masatoshi Uehara, Chengchun Shi, and Nathan Kallus. An overview of off-policy evaluation in reinforcement learning. arXiv preprint arXiv:2212.06355. (“Major Revision” requested from Statistical Science)
Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, and Tengyang Xie. Finite sample analysis of minimax offline reinforcement learning: Completeness, fast rates and first-order efficiency. arXiv preprint arXiv:2102.02981, 2021. (Rejected with an invitation to resubmit from the Annals of Statistics) (Slide)
Nathan Kallus and Masatoshi Uehara. Efficient evaluation of natural stochastic policies in offline reinforcement learning. arXiv preprint arXiv:2006.03886, 2020. (“Minor Revision” requested from Biometrika)
(Working Drafts)
Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Refined Value-Based Offline RL under Realizability and Partial Coverage. arXiv preprint arXiv:2302.02392.
Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, and Wen Sun. Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. arXiv preprint arXiv:2207.13081. (Slide)
Nathan Kallus, Xiaojie Mao, and Masatoshi Uehara. Causal inference under unmeasured confounding with negative controls: A minimax learning approach. arXiv preprint arXiv:2103.14029, 2021.
Nathan Kallus, Xiaojie Mao, and Masatoshi Uehara. Localized debiased machine learning: Efficient estimation of quantile treatment effects, conditional value at risk, and beyond. arXiv preprint arXiv:1912.12945, 2020. Presented at the Online Causal Inference Seminar on 2020/9/15.
Masatoshi Uehara and Jae Kwang Kim. Semiparametric response model with nonignorable nonresponse. arXiv preprint arXiv:1810.12519, 2018.
Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.
Tutorials/Talks
About dynamic treatment regimes
About the deep learning CREST meeting (深層CRESTミーティング)