Lucas Lehnert

PhD Candidate
Computer Science Department
Carney Institute for Brain Science
Brown University

lucas_lehnert at brown dot edu · LinkedIn · GitHub

Since fall 2016, I have been a PhD student working with Michael L. Littman and Michael J. Frank at Brown University. Before joining Brown, I obtained Master's and Bachelor's degrees in computer science from McGill University under the supervision of Doina Precup.

I am interested in developing algorithms that learn to solve a wide variety of problems without having to adapt the algorithm to the task at hand. Most of my work falls into the categories of reinforcement learning, artificial intelligence, and machine learning. During my undergraduate studies, I also worked on projects in computer vision, signal processing, and robotics.

Publications
Lucas Lehnert, Michael L. Littman, and Michael J. Frank
Reward-predictive representations generalize across tasks in reinforcement learning
PLOS Computational Biology, 2020
[Code] [Docker Hub] [bioRxiv]

Lucas Lehnert and Michael L. Littman
Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning
Journal of Machine Learning Research (JMLR), 2020

Lucas Lehnert and Michael L. Littman
Transfer with Model Features in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach workshop at FAIM, Stockholm, Sweden, 2018 [arXiv]

David Abel, Dilip S. Arumugam, Lucas Lehnert, and Michael L. Littman
State Abstractions for Lifelong Reinforcement Learning
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:10-19, 2018 [PDF]

David Abel, Dilip Arumugam, Lucas Lehnert, and Michael L. Littman
Toward Good Abstractions for Lifelong Learning
NIPS workshop on Hierarchical Reinforcement Learning, 2017 [PDF]

Lucas Lehnert, Romain Laroche, and Harm van Seijen
On Value Function Representation of Long Horizon Problems
In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018 [PDF]

Lucas Lehnert, Stefanie Tellex, and Michael L. Littman
Advantages and Limitations of Using Successor Features for Transfer in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach workshop at ICML, Sydney, Australia, 2017 [arXiv]
Best Student Paper Award

Lucas Lehnert and Doina Precup
Using Policy Gradients to Account for Changes in Behavior Policies Under Off-policy Control
In Proceedings of the 13th European Workshop on Reinforcement Learning (EWRL), 2016 [PDF]

Lucas Lehnert and Doina Precup
Policy Gradient Methods for Off-policy Control
arXiv:1512.04105 [cs.AI], December 2015

Lucas Lehnert
Off-policy control under changing behaviour
Master of Science Thesis, McGill University, 2017 [PDF]

Lucas Lehnert and Doina Precup
Building a Curious Robot for Mapping
Autonomously Learning Robots workshop (ALR 2014) at NIPS, 2014 [PDF]

Arthur Mensch, Emmanuel Piuze, Lucas Lehnert, Adrianus J. Bakermans, Jon Sporring, Gustav J. Strijkers, and Kaleem Siddiqi
Connection Forms for Beating the Heart
Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges, 8896: 83-92, Springer International Publishing, 2014

Talks
Lucas Lehnert
Transfer Learning Using Successor State Features
Invited talk at the RL late-breaking results event at ICML, Sydney, Australia, 2017