I’m a second-year Ph.D. student in the Computer Science and Engineering Department. I work on assistive technology, specifically wearable and smartphone-based assistive technologies that improve an individual’s health and well-being.
Health management technology is advancing rapidly to promote individual and community well-being. In addition to smartphones, we use various types of wearables to track health and well-being. However, wearables come with the burden of wearing and charging them, as well as their cost, which ultimately limits their large-scale reach. Compared to wearables, smartphones have much wider global coverage, and it will take time for wearables to become available at a comparable scale. Moreover, because of the additional burden wearables impose, people often stop using them after a few days, unlike smartphone-based health and well-being management technology.
Sleep is one of the major factors affecting our mood, health, and well-being, as well as our productivity. Nowadays, wearables are quite good at determining sleep duration. However, due to their burden, cost, and limited reach, they are still less popular as assistive technology than smartphone-based alternatives. Therefore, it is preferable to detect sleep duration using smartphones.
Smartphones are equipped with a varied set of sensors and capabilities. Additional sources of relevant data often help to better understand and detect human behaviors and activities, such as walking and sleeping. However, some of this data is sensitive and raises privacy concerns, e.g., location data and voice data. Therefore, a smarter approach to sleep detection from phone sensors can be hierarchical, i.e., a trade-off between the types of sensory information users are willing to provide and the sleep detection accuracy they can get from the machine learning model running on their smartphone.
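To make the hierarchical idea concrete, here is a minimal Python sketch of one way such a tiered setup could work: the system picks the richest sleep-detection model whose required sensors the user has consented to share. The sensor names, tier groupings, and model labels are illustrative assumptions, not a description of an actual implemented system.

```python
# Hypothetical tiers of sensor sets, ordered from most to least privacy-sensitive.
# More sensors would presumably allow a more accurate model; these groupings
# are placeholders for illustration only.
LOW_PRIVACY = {"screen_state", "charging_state"}             # coarse usage signals
MEDIUM_PRIVACY = LOW_PRIVACY | {"accelerometer", "ambient_light"}
HIGH_PRIVACY = MEDIUM_PRIVACY | {"location", "microphone"}   # sensitive signals

# Richest tier first: fall back to simpler models when consent is withheld.
TIERS = [
    ("full-sensor model", HIGH_PRIVACY),
    ("motion+light model", MEDIUM_PRIVACY),
    ("phone-usage model", LOW_PRIVACY),
]

def select_model(granted_sensors: set) -> str:
    """Return the richest model whose required sensors were all granted."""
    for name, required in TIERS:
        if required <= granted_sensors:  # all required sensors are granted
            return name
    return "no model available"

# A user who shares motion and light data but withholds location and audio
# would be served the middle tier.
print(select_model({"screen_state", "charging_state",
                    "accelerometer", "ambient_light"}))
# -> motion+light model
```

The point of the hierarchy is that users who withhold sensitive sensors still get some service, just with presumably lower accuracy; the actual accuracy at each tier would have to be measured empirically.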