Lucas Lehnert

PhD Student in Computer Science

lucas_lehnert at brown dot edu

Since Fall 2016, I have been a PhD student at Brown University, advised by Michael Littman. Before joining Brown, I obtained Master's and Bachelor's degrees in Computer Science from McGill University under the supervision of Doina Precup.

I am interested in developing algorithms that can learn to solve a wide variety of problems without the algorithm having to be adapted to the task at hand. Most of my work falls into the areas of reinforcement learning, artificial intelligence, and machine learning. During my undergraduate studies I also worked on projects in computer vision, signal processing, and robotics.

Publications
Lucas Lehnert and Michael L. Littman
Successor Features Support Model-based and Model-free Reinforcement Learning
arXiv preprint arXiv:1901.11437, 2019

Lucas Lehnert and Michael L. Littman
Transfer with Model Features in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach Workshop at FAIM, Stockholm, Sweden, 2018 [arXiv]

David Abel, Dilip S. Arumugam, Lucas Lehnert, and Michael L. Littman
State Abstractions for Lifelong Reinforcement Learning
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:10-19, 2018 [PDF]

Lucas Lehnert, Romain Laroche, and Harm van Seijen
On Value Function Representation of Long Horizon Problems
In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018 [PDF]

David Abel, Dilip Arumugam, Lucas Lehnert, and Michael L. Littman
Toward Good Abstractions for Lifelong Learning
NIPS Workshop on Hierarchical Reinforcement Learning, 2017 [PDF]

Lucas Lehnert, Stefanie Tellex, and Michael L. Littman
Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning
Lifelong Learning: A Reinforcement Learning Approach Workshop at ICML, Sydney, Australia, 2017 [arXiv]
Best Student Paper Award

Lucas Lehnert and Doina Precup
Using Policy Gradients to Account for Changes in Behavior Policies under Off-policy Control
The 13th European Workshop on Reinforcement Learning (EWRL 2016) [PDF]

Lucas Lehnert and Doina Precup
Policy Gradient Methods for Off-policy Control
arXiv preprint arXiv:1512.04105 [cs.AI], 2015

Lucas Lehnert
Off-policy control under changing behaviour
Master of Science thesis, McGill University, 2017 (submitted August 2016) [PDF]

Lucas Lehnert and Doina Precup
Building a Curious Robot for Mapping
Autonomously Learning Robots Workshop (ALR 2014) at NIPS 2014 [PDF]

Arthur Mensch, Emmanuel Piuze, Lucas Lehnert, Adrianus J. Bakermans, Jon Sporring, Gustav J. Strijkers, and Kaleem Siddiqi
Connection Forms for Beating the Heart
Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges, vol. 8896, pp. 83-92, Springer International Publishing, 2014
