Intelligent Personal Emergency Response (PERS) and Fall Detection System
Keywords: Fall detection, aging-in-place, safety, older adults, aging, smart homes.
Overview of Research
Falls in the home are the most common cause of injury among older adults and the most expensive category of injury for the Canadian healthcare system, costing over $6.2 billion in total in 2004 alone. The most prevalent cause of fall-related injuries was falls that occurred on the same level (e.g., slipping, stumbling, or tripping while walking across a floor), followed by falls on stairs.
In an attempt to improve safety in the home, there have been several attempts to create a system that allows older adults to call for help in emergency situations. The most commonly adopted device is a wearable panic button, which connects the occupant to a live operator via speakerphone when it is pressed (e.g., Lifeline). Such a device is, however, inherently flawed, as it requires the user to remember to wear it, be willing to wear it, and be physically and mentally capable of activating it after an injury has occurred. These limitations are exacerbated if the occupant has a cognitive or health impairment, such as Alzheimer's disease or a stroke.
In response to these limitations, our research team has developed a hands-free automated emergency response system. As shown in Figure 1, the system operates through one or more Ceiling Mounted Units and a Central Control Unit. Each Ceiling Mounted Unit contains a vision sensor (i.e., a non-recording video camera), a microphone, speakers, a processor, and a smoke detector. Multiple Ceiling Mounted Units may be networked together to ensure that all desired areas of a home can be monitored. One Central Control Unit is installed in each home and is responsible for coordinating the Ceiling Mounted Units and relaying communications to the outside world. The system operates entirely autonomously, requiring no input from the user to detect or react to a fall. Using computer vision techniques, the system tracks a person as they move about the home (Figure 2). Measures such as the dimensions of the tracked person's silhouette and shadows are fed into a machine learning classifier that detects whether there is an acute (emergency) event, such as a fall (Figure 3). The images in Figures 2 and 3 are for demonstrative purposes only; in the working version of the system, no images are recorded or viewed, ensuring that the occupants' privacy is maintained.
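The idea of deriving shape measures from a tracked silhouette and thresholding or classifying them can be sketched in a few lines. The following is only an illustrative toy, not the project's actual tracker or trained classifier: it computes bounding-box features from a binary silhouette mask and uses a crude aspect-ratio rule (a person lying down tends to be wider than tall) in place of the real machine learning model.

```python
import numpy as np

def silhouette_features(mask):
    """Compute simple shape features from a binary silhouette mask.

    mask: 2-D boolean array where True marks pixels belonging to the
    tracked person's silhouette.
    Returns (height, width, aspect_ratio) of the bounding box.
    """
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return 0, 0, 0.0
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    height = bottom - top + 1
    width = right - left + 1
    return height, width, height / width

def is_fall(mask, ratio_threshold=1.0):
    """Flag a possible fall when the silhouette is wider than it is
    tall -- a crude stand-in for the system's trained classifier."""
    h, w, ratio = silhouette_features(mask)
    return w > 0 and ratio < ratio_threshold

# Upright person: tall, narrow silhouette (8 rows x 2 columns).
standing = np.zeros((12, 12), dtype=bool)
standing[2:10, 5:7] = True

# Fallen person: short, wide silhouette (2 rows x 8 columns).
fallen = np.zeros((12, 12), dtype=bool)
fallen[8:10, 2:10] = True
```

In the real system these hand-picked features and the fixed threshold would be replaced by the richer silhouette and shadow measurements described above, fed into a trained classifier.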
When an acute event is detected, the closest Ceiling Mounted Unit uses speech recognition technology to have a dialogue with the person in distress to determine whether they would like assistance and, if so, what type. The dialogue has been developed to quickly assess the severity of the situation by asking a series of simple 'yes' or 'no' questions. The user is able to call 911, speak with an operator, or place a call to a neighbour or family member on a predefined custom call list. If the user does not respond or the system does not understand the user's responses (e.g., due to severe injury, stroke, or unconsciousness), then the system automatically contacts an emergency response centre so that appropriate action can be taken.
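The escalation logic just described can be sketched as a small decision procedure. The question wording, escalation order, and action names below are illustrative assumptions, not the project's actual dialogue script; the callback stands in for the speech recognizer.

```python
def run_dialogue(ask):
    """Walk through a simplified yes/no dialogue after a suspected fall.

    `ask` is a callback that poses a question and returns "yes", "no",
    or None (no response, or speech not understood).
    """
    answer = ask("Do you need help?")
    if answer is None:
        # No response or unintelligible speech: assume the worst and
        # contact the emergency response centre automatically.
        return "call_emergency_centre"
    if answer == "no":
        return "no_action"
    answer = ask("Should I call 911?")
    if answer is None:
        return "call_emergency_centre"
    if answer == "yes":
        return "call_911"
    # Otherwise fall back to the predefined custom call list.
    return "call_contact_list"
```

For example, a user who answers "yes" then "no" would be connected to someone on their custom call list, while silence at any point escalates directly to the emergency response centre.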
Through the selection of the type of aid they would like to receive, the user remains fully in control of the situation and his/her health, promoting feelings of dignity, autonomy, and independence without compromising safety. Similarly, through the use of a passive (computer vision based) system, the system is able to operate effectively regardless of the cognitive or physical abilities of the user, ensuring that the safety of the user is increased while not adding any burden or inconvenience.
Current work on the project involves employing wide-angle lenses to increase the area of coverage, improving robustness with respect to lighting changes, and applying state-of-the-art statistical analysis and machine learning techniques to reduce the burden of collecting and labelling training data during the system development process.
Figure 2: Tracking results for a person walking. (a) original webcam image and (b) extracted silhouette of person (light blue) and their shadow (dark blue).
Figure 3: Tracking results for a person after a detected fall. (a) original webcam image and (b) extracted silhouette of person (light blue) and their cast shadow (dark blue).
University of Toronto Connaught Start-up
Canadian Institutes of Health Research (CIHR) NET Grant
Alex Mihailidis, University of Toronto
Yani Ioannou, TRI
Babak Taati, University of Toronto
Jasper Snoek, University of Toronto
Jennifer Boger, University of Toronto