Prospective: Robotics and Embedded Software

Program Requirements

All EECS MEng students should expect to complete four (4) technical courses within the EECS department at the graduate level, the Fung Institute's engineering leadership curriculum, and a capstone project hosted by the EECS department. You must select a project from the list below.

2020-2021 Capstone Projects*

For the Master of Engineering capstone projects in Electrical Engineering and Computer Science (EECS), our department believes that students will have a significantly better experience if each project is followed closely by an EECS professor throughout the academic year. To ensure this, we have asked the faculty in each area in which the Master of Engineering is offered to formulate one or more project ideas from which incoming students will choose.

*We will continue to update this list as more proposals are received from our faculty for the upcoming year.

Project 1

Title - MRI-Compatible Robotic System (advisor Prof. Chunlei Liu)

Description - The goal of this project is to build and test a robotic system that operates inside an MRI scanner. The robot is designed to maneuver an object inside the bore of a 3T MRI scanner under specific spatial and safety constraints. As such, the robot must be made with materials and components that are compatible with the challenging electromagnetic environment of an MRI scanner.

Projects 2 & 3

Title - Vision Correcting Display (advisor Prof. Brian Barsky)

Description - Vision problems such as near-sightedness and far-sightedness, among others, are due to optical aberrations in the human eye. These conditions are prevalent, and the number of people who have them is growing rapidly. Correcting optical aberrations in the human eye is traditionally done optically using eyeglasses, contact lenses, or refractive surgery; these are sometimes inconvenient or not available to everyone. Furthermore, higher-order aberrations are not correctable with eyeglasses.

This research is investigating a novel, computation-based approach: an aberration-correcting light field display. By incorporating the person’s own optical aberrations into the computation, content shown on the display is modified so that the viewer will see the display in sharp focus without corrective eyewear. Our research involves the analysis of image formation models; through the retinal light field projection, it is possible to compensate for the optical blurring of the target image by prefiltering it with the inverse blur. As part of this project, we are building a light field display prototype that supports our desired inverse light field prefiltering.

We are working toward extending this capability to correct for higher-order aberrations. This latter aspect is particularly exciting because it would enable people who cannot see displays in sharp focus even with eyeglasses to do so without any corrective eyewear. This is a broad project that incorporates many different aspects, so students with a variety of backgrounds are welcome, and they are free to choose the part that interests them most.
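To give a feel for the prefiltering idea, here is a deliberately simplified sketch: if the eye's blur is modeled as convolution with a known point spread function (PSF), a Wiener-style regularized inverse applied to the image before display makes the blurred result approximate the sharp target. This assumes a 2D grayscale image, a shift-invariant PSF, and circular convolution; it is an illustration of the general principle, not the project's actual light field algorithm, and the function names are hypothetical.

```python
import numpy as np

def wiener_prefilter(target, psf, k=1e-3):
    """Prefilter `target` with a regularized inverse of the blur `psf`,
    so that viewing the prefiltered image through that blur approximates
    the target. `psf` is a centered kernel the same shape as `target`."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    T = np.fft.fft2(target)
    # Wiener-style inverse: conj(H) / (|H|^2 + k) avoids dividing by near-zeros.
    P = T * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(P))

def blur(img, psf):
    """Simulate the eye's optical blur as circular convolution with the PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Toy demo: a Gaussian PSF and a random test image. The blurred version of
# the prefiltered image should match the target better than a directly
# blurred target does.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

rng = np.random.default_rng(0)
target = rng.random((n, n))

perceived = blur(wiener_prefilter(target, psf), psf)
baseline = blur(target, psf)
```

The regularization constant `k` trades off sharpness against ringing and dynamic-range blowup; the real difficulty in the display setting is that the inverse amplifies exactly the frequencies the blur destroys, which is one motivation for moving to light field prefiltering rather than plain 2D deconvolution.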


Title - Assistive Technology for Navigation, Selection, Pointing, and Clicking in a Mouse-free Environment by Capturing Hand Movements (advisor Prof. Brian Barsky)

Description - This project concerns assistive technology that enables users with fine motor control difficulties to navigate, select, point, and click without physically manipulating a mouse. The idea is to use a camera to capture the user’s hand movements. Many individuals do not have the ability to control the pointer easily by moving a physical mouse. Unfortunately, there are no viable alternatives, because the mouse has become an essential input device on all modern computers. Inability to control the mouse can be caused by impaired sensation resulting from a wide variety of conditions and illnesses. This project aims to help those with impaired sensation by developing a computer-vision-based input system with a camera as its input device. The system under development comprises three modules: detection, tracking, and response. (1) The detection stage extracts the hand and recognizes its gesture, which can then be mapped to a control action; for example, a specific movement could correspond to a click of the mouse. (2) The hand is followed by a tracker, and its movement is passed through an anti-shake filter to produce more stable motion. (3) In the response stage, the granularity (e.g., dots per inch) of the cursor is adjusted according to the speed of the user’s hand movement.
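As a rough illustration of stages (2) and (3), one could smooth the tracked hand position with an exponential moving average (a simple anti-shake filter) and scale cursor motion by a speed-dependent gain, so slow hand movement yields fine cursor steps. This is a hedged sketch under assumed parameters, not the project's actual tracker or response design; the class name and all values are invented for illustration.

```python
import math

class CursorMapper:
    """Sketch of anti-shake smoothing plus speed-dependent cursor gain.
    All parameter names and values are illustrative assumptions."""

    def __init__(self, alpha=0.3, slow_gain=0.5, fast_gain=2.0, speed_ref=50.0):
        self.alpha = alpha            # smoothing factor: lower = steadier but laggier
        self.slow_gain = slow_gain    # cursor gain for slow hands (fine control)
        self.fast_gain = fast_gain    # gain approached at high hand speed
        self.speed_ref = speed_ref    # hand speed (px/frame) where gain ramps up
        self.smoothed = None

    def update(self, hand_xy):
        """Feed one raw hand position (x, y); return the cursor delta to apply."""
        if self.smoothed is None:
            self.smoothed = hand_xy
            return (0.0, 0.0)
        prev = self.smoothed
        # Stage (2) anti-shake: exponential moving average of the raw position.
        sx = prev[0] + self.alpha * (hand_xy[0] - prev[0])
        sy = prev[1] + self.alpha * (hand_xy[1] - prev[1])
        self.smoothed = (sx, sy)
        dx, dy = sx - prev[0], sy - prev[1]
        # Stage (3) response: gain interpolates between slow_gain and fast_gain
        # with hand speed, so slow movements map to fine cursor steps.
        speed = math.hypot(dx, dy)
        t = min(speed / self.speed_ref, 1.0)
        gain = self.slow_gain + t * (self.fast_gain - self.slow_gain)
        return (dx * gain, dy * gain)
```

In a real pipeline, `update` would be called once per camera frame with the detected hand centroid, and the returned delta would move the on-screen cursor; the detection stage (hand extraction and gesture recognition) sits upstream of this class.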