Research Projects

I am a Human-Computer Interaction (HCI) and Accessibility researcher focused on enhancing quality of life for people who are blind or have low vision. Within quality of life, my two main thrusts are enhancing exercise and art exploration. I am also interested in interdisciplinary HCI research.

Exercise Technologies for People who are Blind or have Low Vision

The majority of my research focuses on developing new software for mainstream technologies to help people who are blind have more independent access to exercise. For example, we are developing a smartphone application that delivers navigation feedback to people who are visually impaired on 400-meter jogging tracks. This is follow-up research to a paper I presented at ASSETS 2018, in which we explored the benefits and drawbacks of aural and haptic feedback while walking around a jogging track.
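To illustrate the kind of navigation feedback involved, a minimal sketch might map the runner's lateral offset from the lane center to a directional cue. The dead zone and cue names below are assumptions for illustration, not the application's actual design:

```python
# Illustrative sketch only: map a runner's lateral offset from the
# lane center to a directional cue. The dead-zone width and cue
# strings are hypothetical, not values from the actual application.

def cue_for_offset(offset_m: float, dead_zone_m: float = 0.3) -> str:
    """Negative offset = drifting left; positive = drifting right."""
    if abs(offset_m) <= dead_zone_m:
        return "none"  # on course: no feedback needed
    return "steer right" if offset_m < 0 else "steer left"
```

A delivery layer could then render "steer right" either as speech (aural feedback) or as a vibration pattern (haptic feedback).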

Eyes-Free Yoga

Download Eyes-Free Yoga! Number of downloads: 263

I designed and developed Eyes-Free Yoga on the Microsoft Kinect. It emulates a yoga instructor, teaches six yoga poses, and provides custom auditory-only feedback. After the game guides a player into a pose, they receive custom feedback to improve it. I collaborated with ten yoga instructors throughout the design, development, and evaluation, and ran a study with 16 people who are blind or have low vision. I presented this work at the Grace Hopper Conference and published a paper at ASSETS 2013. In April 2017, I published a journal article in TACCESS about this work and an 8-week deployment study with four people using the full version of Eyes-Free Yoga.
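As a rough illustration of pose feedback (not the actual implementation), a sketch could compute a joint angle from skeleton points and turn it into a spoken correction. The target angle, tolerance, and phrasing here are all assumptions:

```python
# Hypothetical sketch of auditory pose feedback: compute the angle at
# a joint from 2-D skeleton points, then pick a speak-ready message.
# The target angle, tolerance, and wording are illustrative only.

import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2-D skeleton points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def correction(angle, target=180.0, tolerance=15.0):
    """Hypothetical feedback for one joint: praise or a correction."""
    if abs(angle - target) <= tolerance:
        return "Good, hold this pose."
    return "Straighten your arm a little more."
```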

Art Exploration for People who are Blind or have Low Vision

I collaborated with Meredith Ringel Morris and Neel Joshi at Microsoft Research to develop the concept of a proxemic audio interface, where the level of detail in an audio presentation increases as a person moves closer to an object of interest. However, audio interfaces differ from visual ones in that the person needs extra detailed information at the beginning of the experience to set context. We conducted a lab study with 13 participants with visual impairments, and collaborated with Keith Salmon and Daniel Thornton to create an installation that was shown in Seattle and the United Kingdom. I will be presenting this work at Ubicomp 2017!
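A minimal sketch of the proxemic idea follows, with made-up distance thresholds and descriptions; the study's actual tiers and wording are not reproduced here:

```python
# Hypothetical sketch of a proxemic audio interface: detail increases
# as the listener approaches the artwork. Thresholds, tier names, and
# descriptions are illustrative assumptions.

DETAIL_TIERS = [
    (3.0, "overview"),      # farthest: title and one-sentence summary
    (1.5, "composition"),   # mid-range: layout and major elements
]

DESCRIPTIONS = {
    "overview": "A mountain landscape painting, oil on canvas.",
    "composition": "A snowy peak fills the upper half; a lake below.",
    "detail": "Brushstrokes in the water suggest wind from the east.",
}

def description_for(distance_m: float, first_visit: bool) -> str:
    """Pick the description tier for the listener's current distance.

    On a first visit the overview plays regardless of distance,
    reflecting the finding that audio interfaces need extra context
    at the start of the experience.
    """
    if first_visit:
        return DESCRIPTIONS["overview"]
    for threshold, tier in DETAIL_TIERS:
        if distance_m >= threshold:
            return DESCRIPTIONS[tier]
    return DESCRIPTIONS["detail"]  # closest range: finest detail
```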

Other Interdisciplinary HCI Research

ShoulderCam: A system for clinicians to measure shoulder range of motion

I worked with Frederick A. Matsen III, M.D. to develop software for the Microsoft Kinect to help clinicians measure shoulder range of motion before and after surgery to assess the patient's progress. I worked closely with stakeholders in the hospital system to develop and iterate on the system. We published a journal article in the Journal of Shoulder and Elbow Surgery showing that the Kinect is a viable substitute for a goniometer in the clinic, and a paper at PervasiveHealth 2016 showing that most clinicians preferred the Kinect system over a goniometer.

Incloodle: Evaluating an Interactive Application for Young Children with Mixed Abilities

I worked with Kiley Sobel on her project to develop an application that supports neurodiverse and neurotypical children in playing with one another. We explored the use of technology enforcement (e.g., requiring the camera to detect two faces before taking a picture) and characters during a lab study with pairs of children. We published this paper at CHI 2016.
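The technology-enforcement idea can be sketched as a shutter gate that stays disabled until enough faces are in frame. The face detector is stubbed out here; a real app would use a vision library, and the names below are assumptions:

```python
# Hypothetical sketch of Incloodle-style "technology enforcement":
# the shutter is enabled only when two faces are detected, so both
# children must be in frame together. Face detection is stubbed out.

def detect_faces(frame) -> int:
    """Stub: the frame is modeled as a list of face labels."""
    return len(frame)

def can_take_picture(frame, required_faces: int = 2) -> bool:
    """Enforce togetherness: allow capture only with enough faces."""
    return detect_faces(frame) >= required_faces
```

With one child in frame the shutter stays disabled; once both children appear, the picture can be taken.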

A User Powered American Sign Language Dictionary

I worked with Danielle Bragg on her project to develop an American Sign Language (ASL) to English dictionary. When students who are learning ASL see a sign they do not understand, it is not trivial to search for the English equivalent. As a result, we developed a system that lets people search using the attributes of ASL grammar: handshape, location, orientation, movement, and facial expressions. Additionally, the system tolerates mistakes during a search, which are likely when a person has to recall a sign from memory. We published a paper at CSCW 2015.
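To illustrate attribute-based, mistake-tolerant search (a sketch only, with a toy two-entry dictionary and made-up attribute values, not the system's actual data or ranking), entries can be ranked by how many attributes match, so a query with one misremembered attribute still surfaces the right sign near the top:

```python
# Hypothetical sketch of attribute-based ASL-to-English search.
# Entries and attribute values are invented for illustration.

ATTRS = ("handshape", "location", "orientation", "movement", "expression")

DICTIONARY = {
    "MOTHER": {"handshape": "5", "location": "chin",
               "orientation": "left", "movement": "tap",
               "expression": "neutral"},
    "FATHER": {"handshape": "5", "location": "forehead",
               "orientation": "left", "movement": "tap",
               "expression": "neutral"},
}

def search(query):
    """Rank entries by number of matching attributes (best first),
    so a partially wrong query still ranks the intended sign highly."""
    def score(name):
        entry = DICTIONARY[name]
        return sum(1 for a in ATTRS if query.get(a) == entry[a])
    return sorted(DICTIONARY, key=score, reverse=True)
```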

MinEMail: SMS Alert System for Managing Critical Emails

I worked with Joshua Hailpern in the Social Computing Research Group at HP Labs on the design and implementation of an SMS notification system for critical emails that might have been missed or forgotten. We motivated our project with a 777-person survey about SMS and email within the enterprise. We followed with an experience sampling study of over 3,000 emails to determine what makes a message critical, and when and how the email would be addressed. We used this information to develop MinEMail, which we evaluated in a two-week ecologically valid study within the enterprise. I presented this work at CHI 2014.

PVT Touch for Mobile Devices

I worked with Matthew Kay on the implementation of the Psychomotor Vigilance Task (PVT) on an Android phone, which is used to assess the effects of sleep deprivation. After a stimulus is displayed, a participant touches the screen and their reaction time is recorded. My role was to instrument the software and remove latency from the program to avoid skewing reaction times. I have been involved in one conference proceeding.
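The latency concern can be sketched as follows; the display-latency constant is an assumed figure for illustration, not a value from the project:

```python
# Illustrative sketch of instrumenting a PVT trial so program latency
# does not skew the measured reaction time. DISPLAY_LATENCY_MS is a
# hypothetical constant, not a figure from the actual project.

DISPLAY_LATENCY_MS = 16.0  # assumed time for the stimulus to reach the screen

def reaction_time_ms(onset_ns: int, touch_ns: int) -> float:
    """Raw reaction time between two monotonic timestamps, in ms."""
    return (touch_ns - onset_ns) / 1_000_000

def corrected_reaction_time_ms(onset_ns: int, touch_ns: int) -> float:
    """Subtract the known display latency so it does not inflate RT."""
    return reaction_time_ms(onset_ns, touch_ns) - DISPLAY_LATENCY_MS
```

Timestamps would come from a monotonic clock (e.g. `time.monotonic_ns()` in Python) so that wall-clock adjustments cannot distort the measurement.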

Accessible Passcodes for Blind Users

I worked with Shiri Azenkot on a project that explored the privacy and security issues of smartphone use by people who are blind or have low vision. We followed up with an accessible authentication technique, PassChords, which enabled blind users to enter a password faster than the traditional passcode method. I have been involved in one conference proceeding.

Gender HCI

(wiki article)

During my undergraduate career, I worked with Margaret Burnett, Valentina Grigoreanu, Jill Cao, Todd Kulesza, Christopher Bogart, Laura Beckwith, Scott Fleming, Susan Wiedenbeck, Thomas Park, and Derek Inman at the Oregon State University EUSES Consortium (End Users Shaping Effective Software) on a project that explored the gender divide in the use of Microsoft Excel. The long-term goal of this project is not a "pink" or a "blue" version, but a "purple" version that caters to both genders. I have been involved in four conference proceedings and two articles.

Improving Debugging Tasks

During my undergraduate career, I worked with Margaret Burnett, Rachel Bellamy, Joseph Lawrance, and Christopher Bogart at the Oregon State University EUSES Consortium on how information foraging theory can help with debugging tasks. When people debug, they can use bug reports to find scent, which can lead them to the bug in the source code. I have been involved in two conference proceedings, one article, and one IBM Research report.

Contextual Desktop Search

During the summer of 2008, I worked with Magdalena Balazinska and Evan Welbourne at the University of Washington Computer Science and Engineering (CSE) department on improving Google Desktop by using contextual cues, not just content. If someone misplaced a file on their computer, or had not touched it in a long time, my application gave them a new way to search for it. Questions my desktop application can answer include "What Word document was I working on at last week's group meeting?" and "What email did I send after running into Bob yesterday in the hallway?" My work over the summer has been addressed in an IEEE Internet Computing article.
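The contextual-cue idea can be sketched as an access log that tags files with the event active when they were touched, so a later query filters by context rather than content. The schema, event names, and file names below are invented for illustration:

```python
# Hypothetical sketch of contextual desktop search: each file access
# is tagged with a contextual event (a meeting, an encounter), and
# queries match the event text. All data here is illustrative.

from dataclasses import dataclass

@dataclass
class Access:
    path: str
    event: str  # contextual cue active when the file was touched

LOG = [
    Access("report.docx", "group meeting 2008-07-21"),
    Access("budget.xlsx", "working alone in the office"),
    Access("reply-to-bob.eml", "ran into Bob in the hallway"),
]

def search_by_context(keyword):
    """Return paths of files accessed during matching contexts."""
    return [a.path for a in LOG if keyword.lower() in a.event.lower()]
```

A query like `search_by_context("group meeting")` answers "What document was I working on at last week's group meeting?" without knowing anything about the file's contents.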

Rapid Prototyping of Physical Interfaces

During the summer of 2009, I worked with Scott Hudson and Jennifer Mankoff at the Carnegie Mellon University Human-Computer Interaction Institute to make rapid prototyping of physical user interfaces easier for do-it-yourself designers by allowing them to use simple household items. Using the circuit board I designed, discrete input devices can be made from thumbtacks, and continuous devices from paper painted with conductive wire glue and two paperclips.