This exciting project aims to create a robot learning framework that integrates the broad linguistic capabilities of Large Language Models (LLMs) with robot perceptual information (e.g., visual and/or haptic perception) for language learning, covering word grounding and bottom-up sentence formation. This would enable robots to develop language and understand human communication across contexts. More details about the topic, the research questions, and how to apply are available at:
PhD Studentship: Cross-Modal Language Learning with Large Language Models through Human-Robot Interaction (jobs.ac.uk)
Dr. Amir Aly
Lecturer in Artificial Intelligence and Robotics / AI Consultant
Programme Manager of Artificial Intelligence
Centre for Robotics and Neural Systems (CRNS)
School of Engineering, Computing, and Mathematics
Room A307 Portland Square, Drake Circus, PL4 8AA
University of Plymouth, UK