About Me

I am currently a 5th-year PhD candidate at Toronto Metropolitan University, under the supervision of Nariman Farsad and Isaac Woungang. Previously, I completed my MSc in Computer Science at Brock University under the supervision of Beatrice Ombuki-Berman, and I received my BSc in Computer Science from Trent University.

Research

My research aims to enable autonomous agents to acquire the ability to accomplish multiple tasks using a single policy. I am interested in deep reinforcement learning, multi-task reinforcement learning, continual/lifelong reinforcement learning, inverse reinforcement learning, and intrinsic motivation for reinforcement learning. In addition to reinforcement learning, I am interested in generative modelling, self-supervised learning, and vision-language models.

Past Experience

I am currently a part-time lecturer in the Computer Science Department at Brock University. Previously, I was an intern at Royal Bank of Canada, where I supported their technical infrastructure using AIOps methods. After completing my MSc, I was the Lead Machine Learning Developer at Castle Ridge Asset Management.

Publications

Overcoming State and Action Space Disparities in Multi-Domain, Multi-Task Reinforcement Learning
Reginald McLean, Kai Yuan, Isaac Woungang, Nariman Farsad, Pablo Samuel Castro
Accepted at Morphology-Aware Policy and Design Learning Workshop @ CoRL 2024
Current multi-task reinforcement learning (MTRL) methods can perform a large number of tasks with a single policy. However, when interacting with a new domain, an MTRL agent must be re-trained due to differences in domain dynamics and structure. Because of this limitation, we are forced to train multiple policies even when tasks share dynamics, which requires more samples and is therefore sample inefficient. In this work, we explore the ability of MTRL agents to learn across domains with differing dynamics by training in multiple domains simultaneously, without fine-tuning extra policies. We find that an MTRL agent trained in multiple domains improves sample efficiency by up to 70% while maintaining its overall success rate.
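The paper's method is not reproduced here, but the core obstacle is easy to illustrate: different domains expose observation and action vectors of different sizes, so a single network cannot consume them directly. Below is a minimal, hypothetical sketch of one common workaround, zero-padding observations to a shared width and appending a one-hot domain identifier; the domain names, dimensions, and network shape are made up for illustration and are not the approach proposed in the paper.

# Hypothetical sketch: feeding observations from domains with different
# state/action sizes into one policy by zero-padding to a shared maximum
# dimension and appending a one-hot domain ID. Illustrative only.
import torch
import torch.nn as nn

DOMAIN_OBS_DIMS = {"manipulation": 39, "locomotion": 17}  # made-up sizes
MAX_OBS_DIM = max(DOMAIN_OBS_DIMS.values())
DOMAIN_IDS = {name: i for i, name in enumerate(DOMAIN_OBS_DIMS)}

def pad_observation(obs: torch.Tensor, domain: str) -> torch.Tensor:
    """Zero-pad obs to MAX_OBS_DIM and append a one-hot domain ID."""
    pad = torch.zeros(MAX_OBS_DIM - obs.shape[-1])
    one_hot = torch.zeros(len(DOMAIN_IDS))
    one_hot[DOMAIN_IDS[domain]] = 1.0
    return torch.cat([obs, pad, one_hot])

# A single policy trunk shared across both domains.
policy = nn.Sequential(
    nn.Linear(MAX_OBS_DIM + len(DOMAIN_IDS), 256),
    nn.ReLU(),
    nn.Linear(256, 8),  # max action dim; unused dims would be masked per domain
)

obs = torch.randn(17)  # a locomotion observation
action = policy(pad_observation(obs, "locomotion"))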
Video Language Critic: Transferable Reward Functions for Language-Conditioned Robotics
Minttu Alakuijala, Reginald McLean, Isaac Woungang, Nariman Farsad, Samuel Kaski, Pekka Marttinen, Kai Yuan
Accepted at Workshop on Language and Robot Learning: Language as an Interface @ CoRL 2024
Natural language is often the easiest and most convenient modality for humans to specify tasks for robots. However, learning to ground language to behavior typically requires impractical amounts of diverse, language-annotated demonstrations collected on each target robot. In this work, we aim to separate the problem of what to accomplish from how to accomplish it, as the former can benefit from substantial amounts of external observation-only data, and only the latter depends on a specific robot embodiment. To this end, we propose Video-Language Critic, a reward model that can be trained on readily available cross-embodiment data using contrastive learning and a temporal ranking objective, and use it to score behavior traces from a separate actor. When trained on Open X-Embodiment data, our reward model enables 2x more sample-efficient policy training on Meta-World tasks than a sparse reward only, despite a significant domain gap. Using in-domain data but in a challenging task generalization setting on Meta-World, we further demonstrate more sample-efficient training than is possible with prior language-conditioned reward models that are either trained with binary classification, use static images, or do not leverage the temporal information present in video data.
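As a rough illustration of the temporal ranking idea mentioned in the abstract (not the paper's actual implementation), the sketch below penalizes a reward model whenever a frame later in a trajectory scores lower against the task caption than an earlier frame. The encoders are stubbed out with random embeddings, and the margin value is an assumption.

# Hedged sketch of a temporal ranking objective: later frames of a
# successful trajectory should score higher against the task caption
# than earlier frames. Encoders and margin are placeholders.
import torch
import torch.nn.functional as F

def temporal_ranking_loss(frame_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          margin: float = 0.1) -> torch.Tensor:
    """frame_emb: (T, D) per-frame video embeddings, in time order.
    text_emb: (D,) embedding of the task description."""
    scores = frame_emb @ text_emb             # (T,) similarity per frame
    i, j = torch.triu_indices(len(scores), len(scores), offset=1)  # all i < j
    # hinge: the score at later frame j should exceed that at earlier frame i
    return F.relu(margin + scores[i] - scores[j]).mean()

# Usage with stand-in embeddings (a real model would encode video and text):
frames = F.normalize(torch.randn(16, 512), dim=-1)  # 16 frames, 512-d
text = F.normalize(torch.randn(512), dim=-1)
loss = temporal_ranking_loss(frames, text)

In practice a loss like this would be combined with a contrastive video-text matching term, as the abstract describes, so the model both matches captions to the right videos and orders frames by task progress.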
Swarm Based Algorithms for Neural Network Training
Reginald McLean, Beatrice Ombuki-Berman, Andries P. Engelbrecht
Accepted at 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
The purpose of this paper is to compare the abilities and deficiencies of various swarm-based algorithms for training artificial neural networks. The paper uses seven algorithms, seven regression problems, sixteen classification problems, and four bounded activation functions to compare the algorithms with respect to loss, accuracy, hidden unit saturation, and overfitting. Particle swarm optimization was found to be the top algorithm for regression problems based on loss, while the firefly algorithm was the top algorithm for classification problems when examining accuracy and loss. The ant colony optimization and artificial bee colony algorithms caused the least hidden unit saturation, and the bacterial foraging optimization algorithm produced the least overfitting.
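For readers unfamiliar with swarm-based training, the sketch below shows the general recipe on a toy problem: flatten a small network's weights into a particle position and let global-best particle swarm optimization minimize the loss, with no gradients involved. The network size, swarm hyperparameters, and data are illustrative assumptions, not the paper's experimental setup.

# Minimal sketch: training a tiny 1-8-1 tanh network with global-best
# PSO on a toy regression task. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(3 * X)                                    # toy regression target

N_HIDDEN = 8
DIM = 1 * N_HIDDEN + N_HIDDEN + N_HIDDEN * 1 + 1     # total weight count

def loss(w: np.ndarray) -> float:
    """Mean squared error of a 1-8-1 tanh network with flat weights w."""
    W1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    W2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Standard PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(0, 1, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"final MSE: {pbest_val.min():.4f}")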

reginald k mclean at gmail dot com

Department of Computer Science
Toronto Metropolitan University
Toronto, Ontario
Canada