About

I am a computer scientist and artificial intelligence researcher. In Summer 2024, I will join the Department of Computer Science at the University of Saskatchewan as an Assistant Professor.

My research focuses on the underlying computational principles of intelligent decision making. I am particularly interested in developing new methods and algorithms for building systems that can re-use previously learned knowledge to "think" more efficiently and solve complex decision-making tasks. Most of my work falls within reinforcement learning, machine learning, and sequential decision making.

Prospective students

For my start at the University of Saskatchewan, I am looking for a thesis-based Master's student with strong programming skills and a solid foundation in undergraduate-level mathematics and computer science. If you are hard-working, interested in learning more about machine learning, and eager to develop the next generation of AI algorithms, please consider applying to the Master's in Computer Science program.

You can also reach out to me via email.

Short biography

Lucas Lehnert is a postdoctoral researcher on the FAIR team at Meta. His research focuses on the underlying computational principles of intelligent decision making and on how intelligent agents can generalize and re-use previously learned knowledge. Before joining Meta, he worked at the Mila Quebec AI Institute and the Université de Montréal. Lucas completed his PhD in Computer Science at Brown University in 2021, and before that his MSc and BSc in Computer Science at McGill University. His interdisciplinary work was recognized with an NIH training grant in interactionist cognitive neuroscience and a best student workshop paper award.

Publications

Lucas Lehnert, Sainbayar Sukhbaatar, Paul Mcvay, Michael Rabbat, Yuandong Tian
Beyond A∗: Better Planning with Transformers via Search Dynamics Bootstrapping
arXiv: 2402.14083 [cs.AI], 2024

Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan, Zheqing Zhu, Olivier Delalleau
IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control
arXiv: 2306.00867 [cs.LG], 2023 [to appear at ICRA 2024]

Arnav Kumar Jain, Lucas Lehnert, Irina Rish, Glen Berseth
Maximum State Entropy Exploration using Predecessor and Successor Representations
Advances in Neural Information Processing Systems 36 (NeurIPS 2023) [arXiv] [Code]

Lucas Lehnert, Michael J. Frank, and Michael L. Littman
Reward-predictive clustering
arXiv: 2211.03281 [cs.LG], 2022

Lucas Lehnert
Encoding Reusable Knowledge in State Representations
PhD Dissertation, Brown University, 2021

Lucas Lehnert, Michael L. Littman, and Michael J. Frank
Reward-predictive representations generalize across tasks in reinforcement learning
PLOS Computational Biology, 2020 [Code] [Docker Hub] [bioRxiv]

Lucas Lehnert and Michael L. Littman
Successor Features Combine Elements of Model-Free and Model-Based Reinforcement Learning
Journal of Machine Learning Research (JMLR), 2020 [arXiv]

Lucas Lehnert and Michael L. Littman
Transfer with Model Features in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach workshop at FAIM, Stockholm, Sweden, 2018 [arXiv]

David Abel, Dilip S. Arumugam, Lucas Lehnert, and Michael L. Littman
State Abstractions for Lifelong Reinforcement Learning
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:10-19, 2018 [PDF]

David Abel, Dilip Arumugam, Lucas Lehnert, and Michael L. Littman
Toward Good Abstractions for Lifelong Learning
NIPS workshop on Hierarchical Reinforcement Learning, 2017 [PDF]

Lucas Lehnert, Romain Laroche, and Harm van Seijen
On Value Function Representation of Long Horizon Problems
Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018 [PDF]

Lucas Lehnert, Stefanie Tellex, and Michael L. Littman
Advantages and Limitations of Using Successor Features for Transfer in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach workshop at ICML, Sydney, Australia, 2017 [arXiv]
Best Student Paper Award

Lucas Lehnert and Doina Precup
Using Policy Gradients to Account for Changes in Behavior Policies Under Off-policy Control
The 13th European Workshop on Reinforcement Learning (EWRL 2016) [PDF]

Lucas Lehnert and Doina Precup
Policy Gradient Methods for Off-policy Control
arXiv: 1512.04105 [cs.AI], 2015

Lucas Lehnert
Off-policy control under changing behaviour
Master of Science Thesis, McGill University, 2017 [PDF]

Lucas Lehnert and Doina Precup
Building a Curious Robot for Mapping
Autonomously Learning Robots workshop (ALR 2014), NIPS 2014 [PDF]

Arthur Mensch, Emmanuel Piuze, Lucas Lehnert, Adrianus J. Bakermans, Jon Sporring, Gustav J. Strijkers, and Kaleem Siddiqi
Connection Forms for Beating the Heart
Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges, 8896: 83-92, Springer International Publishing, 2014

Talks

Encoding Reusable Knowledge in State Representations
Invited talk at Mila Tea Talks, Mila - Quebec Artificial Intelligence Institute, Montréal, Canada, 2020 [recording]

Should intelligent agents learn how to behave optimally or learn how to predict future outcomes?
Invited talk at Structure for Efficient Reinforcement Learning (SERL) at RLDM 2019, Montréal, Canada, 2019

Transfer Learning Using Successor State Features
Invited talk at the ICML 2017 Reinforcement Learning Late-Breaking Results event, Sydney, Australia, 2017 [slides]