If you are a Harvard undergraduate student interested in doing a research project (or a senior thesis) in Human-Computer Interaction (including topics like adaptive user interfaces, crowd-powered systems, accessibility, creativity, and more), please talk to me. I have a number of project ideas that you can contribute to, or you can propose your own. But take a look at the themes that run through my research to see what kinds of topics it would make sense for us to pursue together.
In general, the best time to join a research group is at the end of your sophomore year or at the beginning of your junior year. By that time, you should have enough technical background to contribute to a project, and still enough time left at Harvard to see the fruits of your labor. Ideal candidates would have taken at least one of CS 179, CS 171, CS 181, or CS 182. If you are a junior or senior and you are serious about pursuing research in HCI, I encourage you to take CS 279, a graduate class that will introduce you to the current research topics and the main research methods in HCI. The final project in CS 279 is often a great first step toward your own independent research project.
Below you can see examples of projects led by undergraduates (some were done as senior theses, others just for fun) and projects where undergrads made significant contributions:
Ingenium: Improving Engagement and Accuracy with the Visualization of Latin for Language Learning
Learners commonly make errors in reading Latin because they do not fully understand the impact of Latin's grammatical structure--its morphology and syntax--on a sentence's meaning. Synthesizing instructional methods used for Latin and for artificial programming languages, Ingenium visualizes the logical structure of grammar by turning each word into a puzzle block, whose shape and color reflect the word's morphological forms and roles. Watch the video to see how it works.
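The encoding can be illustrated with a minimal sketch; the specific colors, shapes, and mappings below are illustrative stand-ins, not Ingenium's actual visual vocabulary:

```python
# Illustrative only: map a Latin word's morphology to puzzle-block visuals.
CASE_COLOR = {"nominative": "gold", "accusative": "red", "ablative": "green"}
ROLE_SHAPE = {"subject": "tab_left", "object": "notch_right", "modifier": "round"}

def puzzle_block(word, case, role):
    """Return a block spec whose color and shape expose the word's grammar."""
    return {
        "word": word,
        "color": CASE_COLOR.get(case, "gray"),
        "shape": ROLE_SHAPE.get(role, "plain"),
    }
```

Because the grammatical role is carried by the block's geometry, two blocks only "fit" when the morphology licenses the connection, which is the intuition the visualization builds on.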
Sharon Zhou, Ivy J. Livingston, Mark Schiefsky, Stuart M. Shieber, and Krzysztof Z. Gajos. Ingenium: Engaging Novice Students with Latin Grammar. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pages 944-956, New York, NY, USA, 2016. ACM.
[Abstract, BibTeX, Video, etc.]
Learnersourcing: Leveraging Crowds of Learners to Improve the Experience of Learning from Videos
Rich knowledge about the content of educational videos can be used to enable more effective and more enjoyable learning experiences. We are developing tools that leverage crowds of learners to collect rich metadata about educational videos as a byproduct of the learners' natural interactions with the videos. We are also developing tools and techniques that use this metadata to improve the learning experience for others.
Sarah Weir, Juho Kim, Krzysztof Z. Gajos, and Robert C. Miller. Learnersourcing Subgoal Labels for How-to Videos. In Proceedings of CSCW'15, 2015.
[Abstract, BibTeX, etc.]
Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen (Daniel) Li, Krzysztof Z. Gajos, and Robert C. Miller. Data-Driven Interaction Techniques for Improving Navigation of Educational Videos. In Proceedings of UIST'14, 2014. To appear.
[Abstract, BibTeX, Video, etc.]
Juho Kim, Phu Nguyen, Sarah Weir, Philip J. Guo, Robert C. Miller, and Krzysztof Z. Gajos. Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos. In Proceedings of CHI 2014, 2014. To appear.
Honorable Mention
[Abstract, BibTeX, etc.]
Juho Kim, Shang-Wen (Daniel) Li, Carrie J. Cai, Krzysztof Z. Gajos, and Robert C. Miller. Leveraging Video Interaction Data and Content Analysis to Improve Video Learning. In Proceedings of the CHI 2014 Learning Innovation at Scale workshop, 2014.
[Abstract, BibTeX, etc.]
Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, and Robert C. Miller. Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos. In Proceedings of Learning at Scale 2014, 2014. To appear.
[Abstract, BibTeX, etc.]
Juho Kim, Robert C. Miller, and Krzysztof Z. Gajos. Learnersourcing subgoal labeling to support learning from how-to videos. In CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13, pages 685-690, New York, NY, USA, 2013. ACM.
[Abstract, BibTeX, etc.]
Adaptive Click and Cross: Adapting to Both Abilities and Task to Improve Performance of Users With Impaired Dexterity
Adaptive Click-and-Cross is an interaction technique for computer users with impaired dexterity. It combines three "adaptive" approaches that have appeared separately in previous literature: adapting the user's abilities to the interface (i.e., modifying the way the cursor works), adapting the user interface to the user's abilities (i.e., enlarging items), and adapting the user interface to the user's task (i.e., moving frequently or recently used items to a convenient location). Combining the three adaptations minimizes each approach's shortcomings: the technique selectively enlarges items predicted to be useful to the user, while a modified cursor enables access to the items that remain small.
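As a rough illustration of how the three adaptations can fit together, here is a minimal sketch; the item model, the greedy budgeted enlargement, and all names are hypothetical simplifications, not the published implementation:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Item:
    name: str
    width_px: int         # current rendered width
    predicted_use: float  # 0..1, e.g. from recency/frequency statistics

def adapt_layout(items, min_width_px=48, budget_px=60):
    """Enlarge the items most likely to be used, within a screen-space budget.

    Items left small remain reachable through the modified cursor, so the
    enlargement never makes any target inaccessible.
    """
    enlarged, remaining = [], list(items)
    # Spend the extra-space budget greedily on the most promising items.
    for item in sorted(items, key=lambda i: i.predicted_use, reverse=True):
        cost = max(0, min_width_px - item.width_px)
        if 0 < cost <= budget_px:
            budget_px -= cost
            enlarged.append(replace(item, width_px=min_width_px))
            remaining.remove(item)
    return enlarged, remaining  # `remaining` relies on the cursor technique

toolbar = [Item("Save", 20, 0.9), Item("Open", 20, 0.7), Item("Kern", 20, 0.05)]
big, small = adapt_layout(toolbar)
```

The key design point survives even in this toy form: prediction errors are not catastrophic, because a mispredicted (small) item is still clickable via the modified cursor.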
Louis Li and Krzysztof Z. Gajos. Adaptive Click-and-cross: Adapting to Both Abilities and Task Improves Performance of Users with Impaired Dexterity. In Proceedings of the 19th International Conference on Intelligent User Interfaces, IUI '14, pages 299–304, New York, NY, USA, 2014. ACM.
[Abstract, BibTeX, etc.]
Louis Li. Adaptive Click-and-cross: An Interaction Technique for Users with Impaired Dexterity. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '13, pages 79:1-79:2, New York, NY, USA, 2013. ACM.
[Abstract, BibTeX, etc.]
Curio: a platform for crowdsourcing research tasks in sciences and humanities
Curio is a platform for crowdsourcing research tasks in the sciences and humanities. It is designed to let researchers create and launch a new crowdsourcing project within minutes, and to monitor and control the crowdsourcing process with minimal effort. With Curio, we are exploring a new model of citizen science that significantly lowers the barrier to entry for scientists, developing new interfaces and algorithms for supporting mixed-expertise crowdsourcing, and investigating a variety of human computation questions related to task decomposition, incentive design, and quality control.
Edith Law, Conner Dalton, Nick Merrill, Albert Young, and Krzysztof Z. Gajos. Curio: A Platform for Supporting Mixed-Expertise Crowdsourcing. In Proceedings of HCOMP 2013. AAAI Press, 2013. To appear.
[Abstract, BibTeX, etc.]
InProv: a Filesystem Provenance Visualization Tool
InProv is a filesystem provenance visualization tool that displays provenance data in an interactive radial tree layout. The tool also uses a time-based hierarchical node grouping method that we developed for filesystem provenance data to match users' mental models and make data exploration more intuitive. In an experiment comparing InProv to a visualization based on a node-link representation, participants using InProv made more accurate assessments of provenance and found InProv to require less mental effort, less physical activity, and less work, and to be less stressful to use.
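The general idea of time-based grouping can be sketched as follows; this toy version uses fixed time windows, whereas InProv's actual method is hierarchical and tailored to provenance data:

```python
def group_by_time(events, interval_s=60):
    """Bucket (timestamp_s, node_id) provenance events into fixed windows.

    Nodes active in the same window collapse into one visual group, which
    is the clutter-reducing idea behind InProv's radial layout (here in a
    deliberately simplified, non-hierarchical form).
    """
    buckets = {}
    for ts, node in events:
        buckets.setdefault(int(ts // interval_s), []).append(node)
    return [nodes for _, nodes in sorted(buckets.items())]
```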
Michelle A. Borkin, Chelsea S. Yeh, Madelaine Boyd, Peter Macko, Krzysztof Z. Gajos, Margo Seltzer, and Hanspeter Pfister. Evaluation of filesystem provenance visualization tools. IEEE Transactions on Visualization and Computer Graphics, 19(12):2476-2485, 2013.
[Abstract, BibTeX, etc.]
Predicting Users' First Impressions of Website Aesthetics
Users make lasting judgments about a website's appeal within a split second of seeing it for the first time. This first impression is influential enough to later affect their opinion of a site's usability and trustworthiness. In this project, we aim to automatically adapt website aesthetics to users' various preferences in order to improve this first impression. As a first step, we are working on predicting what people find appealing, and how this is influenced by their demographic backgrounds.
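As a toy example of the kind of low-level image statistic such predictions can build on, here is a colorfulness score in the style of Hasler and Süsstrunk, computed over raw RGB pixels; this particular metric is a common choice in the literature, not necessarily the exact feature set used in our papers:

```python
from math import sqrt
from statistics import mean, pstdev

def colorfulness(pixels):
    """Hasler-Susstrunk style colorfulness for a list of (r, g, b) pixels.

    Combines the spread and the magnitude of two opponent color axes;
    grayscale images score 0, saturated multi-hue images score high.
    """
    rg = [r - g for r, g, b in pixels]            # red-green opponent axis
    yb = [(r + g) / 2 - b for r, g, b in pixels]  # yellow-blue opponent axis
    return (sqrt(pstdev(rg) ** 2 + pstdev(yb) ** 2)
            + 0.3 * sqrt(mean(rg) ** 2 + mean(yb) ** 2))
```

In practice such scores would be computed over website screenshots and combined with perceived-complexity features and demographic information in a learned model.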
Katharina Reinecke and Krzysztof Z. Gajos. Quantifying Visual Preferences Around the World. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, pages 11-20, New York, NY, USA, 2014. ACM.
[Abstract, BibTeX, etc.]
Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 2049-2058, New York, NY, USA, 2013. ACM.
Honorable Mention
[Abstract, BibTeX, Data, etc.]
LabintheWild
LabintheWild is a platform for conducting large-scale behavioral experiments with unpaid online volunteers. LabintheWild helps make empirical research in Human-Computer Interaction more reliable (by making it possible to recruit many more participants than would be possible in conventional laboratory studies) and more generalizable (by enabling access to very diverse groups of participants).
LabintheWild experiments typically attract thousands or tens of thousands of participants (with two studies reaching more than 250,000 people). LabintheWild's volunteer participants have also been shown to provide more reliable data and exert themselves more than participants recruited via paid platforms (like Amazon Mechanical Turk). A key characteristic of LabintheWild is its incentive structure: Instead of money, participants are rewarded with information about their performance and the ability to compare themselves to others. This design choice engages curiosity and enables social comparison---both of which motivate participants.
LabintheWild is co-directed by Profs. Katharina Reinecke and Krzysztof Gajos.
Here's the original LabintheWild paper, which demonstrates that data obtained on LabintheWild are as reliable as data captured in traditional experiments:
Katharina Reinecke and Krzysztof Z. Gajos. LabintheWild: Conducting Large-Scale Online Experiments With Uncompensated Samples. In Proceedings of CSCW'15, 2015.
Honorable Mention
[Abstract, BibTeX, etc.]
Here are some papers that relied on data collected on LabintheWild:
Bernd Huber and Krzysztof Z. Gajos. Conducting online virtual environment experiments with uncompensated, unsupervised samples. PLOS ONE, 15(1):1–17, January 2020.
[Abstract, BibTeX, Data, etc.]
Krzysztof Z. Gajos, Katharina Reinecke, Mary Donovan, Christopher D. Stephen, Albert Y. Hung, Jeremy D. Schmahmann, and Anoopum S. Gupta. Computer Mouse Use Captures Ataxia and Parkinsonism, Enabling Accurate Measurement and Detection. Movement Disorders, 35:354–358, February 2020.
[Abstract, BibTeX, etc.]
Qisheng Li, Krzysztof Z. Gajos, and Katharina Reinecke. Volunteer-Based Online Studies With Older Adults and People with Disabilities. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '18, pages 229–241, New York, NY, USA, 2018. ACM.
[Abstract, BibTeX, etc.]
Marissa Burgermaster, Krzysztof Z. Gajos, Patricia Davidson, and Lena Mamykina. The Role of Explanations in Casual Observational Learning about Nutrition. In Proceedings of CHI'17, 2017. To appear.
[Abstract, BibTeX, etc.]
Krzysztof Z. Gajos and Krysta Chauncey. The Influence of Personality Traits and Cognitive Load on the Use of Adaptive User Interfaces. In Proceedings of ACM IUI'17, 2017. To appear.
[Abstract, BibTeX, etc.]
Bernd Huber, Katharina Reinecke, and Krzysztof Z. Gajos. The Effect of Performance Feedback on Social Media Sharing at Volunteer-Based Online Experiment Platforms. In Proceedings of CHI'17, 2017. To appear.
[Abstract, BibTeX, Data, etc.]
Katharina Reinecke and Krzysztof Z. Gajos. Quantifying Visual Preferences Around the World. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, pages 11-20, New York, NY, USA, 2014. ACM.
[Abstract, BibTeX, etc.]
Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 2049-2058, New York, NY, USA, 2013. ACM.
Honorable Mention
[Abstract, BibTeX, Data, etc.]
Accurate Measurements of Pointing Performance from In Situ Observations
We present a method for obtaining lab-quality measurements of pointing performance from unobtrusive observations of natural in situ interactions. Specifically, we have developed a set of user-independent classifiers for discriminating between deliberate, targeted mouse pointer movements and movements affected by extraneous factors. Our results show that, on four distinct metrics, the data collected in situ and filtered with our classifiers closely match the results obtained from a formal experiment.
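The flavor of the approach can be sketched as follows; the features and the hand-tuned thresholds are illustrative stand-ins for the trained, user-independent classifiers:

```python
import math

def movement_features(trajectory):
    """Summarize a cursor path given as (t_seconds, x, y) samples."""
    speeds, direction_changes, prev_angle = [], 0, None
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        dt = t1 - t0
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0)
        angle = math.atan2(y1 - y0, x1 - x0)
        if prev_angle is not None:
            turn = abs(angle - prev_angle)
            turn = min(turn, 2 * math.pi - turn)  # handle wrap at +/- pi
            if turn > math.pi / 2:
                direction_changes += 1
        prev_angle = angle
    return {
        "peak_speed": max(speeds, default=0.0),
        "direction_changes": direction_changes,
        "duration": trajectory[-1][0] - trajectory[0][0],
    }

def looks_deliberate(features):
    # Toy thresholds standing in for the trained classifiers: a deliberate,
    # targeted movement tends to be short and directionally consistent.
    return features["direction_changes"] <= 2 and features["duration"] < 3.0
```

Movements that pass the filter can then be analyzed with standard pointing metrics, while meandering or interrupted movements are discarded.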
Krzysztof Gajos, Katharina Reinecke, and Charles Herrmann. Accurate measurements of pointing performance from in situ observations. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, CHI '12, pages 3157-3166, New York, NY, USA, 2012. ACM.
[Abstract, BibTeX, Authorizer, Data and Source Code, etc.]
PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs
PlateMate allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information is considered a prerequisite to successful change of eating habits, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. To make PlateMate possible, we developed the Management framework for crowdsourcing complex tasks, which supports PlateMate's decomposition of the nutrition analysis workflow. Two evaluations show that the PlateMate system is nearly as accurate as a trained dietitian and easier for most users to use than traditional self-reporting, while remaining robust for general use across a wide variety of meal types.
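The decomposition idea can be sketched minimally as follows; the stage functions and the median-based aggregation are illustrative simplifications of PlateMate's actual Mechanical Turk workflow:

```python
from statistics import median

def analyze_photo(photo, tag_fn, identify_fn, measure_fn, redundancy=3):
    """Toy three-stage crowdsourcing pipeline for one meal photo.

    Each *_fn stands in for a batch of Mechanical Turk HITs; redundant
    measurement answers are combined with a median to damp individual
    workers' estimation errors.
    """
    foods = tag_fn(photo)                      # stage 1: which foods are visible?
    matched = [identify_fn(f) for f in foods]  # stage 2: match to a nutrition DB
    return {                                   # stage 3: estimate portions
        food: median(measure_fn(food) for _ in range(redundancy))
        for food in matched
    }
```

Splitting the analysis into narrow stages is what lets untrained workers contribute: no single worker needs the expertise of a dietitian, yet the aggregate output approaches one.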
Jon Noronha, Eric Hysen, Haoqi Zhang, and Krzysztof Z. Gajos. PlateMate: Crowdsourcing Nutrition Analysis from Food Photographs. In Proceedings of the 24th annual ACM symposium on User interface software and technology, UIST '11, pages 1-12, New York, NY, USA, 2011. ACM.
[Abstract, BibTeX, Authorizer, Data, etc.]
PETALS Project -- A Visual Decision Support Tool For Landmine Detection
Landmines remain in conflict areas for decades after the end of hostilities. Their suspected presence renders vast tracts of land unusable for development and agriculture, causing significant psychological and economic damage. Landmine removal is a slow and dangerous process. Compounding the difficulty, modern landmines use minimal amounts of metallic content, making them very hard to detect and to distinguish from other metallic debris (such as bullet shells, wires, etc.) frequently present in post-combat areas. Recent research has demonstrated that the accuracy of landmine detection can be improved if deminers try to mentally represent the shape of the area where the metal detector's response gets triggered: despite similar amounts of metallic content, mines and clutter result in response areas of different shapes. Building on these findings, we have created a visual decision support tool that presents the deminer with an explicit visualization of the shapes of these response areas. The results of our study demonstrate that this tool significantly improves novice deminers' detection rates and localization accuracy.
Lahiru Jayatilaka, David M. Sengeh, Charles Herrmann, Luca Bertuccelli, Dimitrios Antos, Barbara J. Grosz, and Krzysztof Z. Gajos. PETALS: Improving Learning of Expert Skill in Humanitarian Demining. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, COMPASS '18, pages 33:1–33:11, New York, NY, USA, 2018. ACM. Best Paper Award
[Abstract, BibTeX, Slides, etc.]
Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. Evaluating a Pattern-Based Visual Support Approach for Humanitarian Landmine Clearance. In CHI '11: Proceedings of the annual SIGCHI conference on Human factors in computing systems, New York, NY, USA, 2011. ACM.
[Abstract, BibTeX, Authorizer, etc.]
Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. PETALS: a visual interface for landmine detection. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology, UIST '10, pages 427-428, New York, NY, USA, 2010. ACM.
[Abstract, BibTeX, Authorizer, etc.]
Automatic Task Design on Amazon Mechanical Turk
A central challenge in human computation is in understanding how to design task environments that effectively attract participants and coordinate the problem solving process. We consider a common problem that requesters face on Amazon Mechanical Turk: how should a task be designed so as to induce good output from workers? In posting a task, a requester decides how to break down the task into unit tasks, how much to pay for each unit task, and how many workers to assign to a unit task. These design decisions affect the rate at which workers complete unit tasks, as well as the quality of the work that results. Using image labeling as an example task, we consider the problem of designing the task to maximize the number of quality tags received within given time and budget constraints. We consider two different measures of work quality, and construct models for predicting the rate and quality of work based on observations of output to various designs. Preliminary results show that simple models can accurately predict the quality of output per unit task, but are less accurate in predicting the rate at which unit tasks complete. At a fixed rate of pay, our models generate different designs depending on the quality metric, and optimized designs obtain significantly more quality tags than baseline comparisons.
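The optimization loop can be sketched as follows; the two predictive models here are toy stand-ins for models fit to observed worker output, and the candidate design grid is arbitrary:

```python
from itertools import product

# Toy stand-ins for the learned models (real models are fit to observations).
def predicted_rate(pay_cents):
    return 10 + 5 * pay_cents  # unit tasks completed per hour

def predicted_quality(pay_cents, workers):
    # Redundancy helps a lot, pay helps a little (illustrative assumption).
    return 1 - (0.5 ** workers) * (0.9 ** pay_cents)

def best_design(budget_cents, hours):
    """Enumerate candidate designs; maximize expected quality tags."""
    best = None
    for pay, workers in product([1, 2, 5], [1, 3, 5]):
        # Unit tasks are limited by both the budget and the completion rate.
        n_tasks = min(budget_cents // (pay * workers),
                      predicted_rate(pay) * hours // workers)
        expected = n_tasks * predicted_quality(pay, workers)
        if best is None or expected > best[0]:
            best = (expected, pay, workers)
    return best
```

Even this crude search exhibits the trade-off described above: higher pay and more redundancy raise per-task quality but buy fewer unit tasks under a fixed budget, so the optimal design depends on the quality metric and the constraints.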
Eric Huang, Haoqi Zhang, David C. Parkes, Krzysztof Z. Gajos, and Yiling Chen. Toward automatic task design: a progress report. In Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP '10, pages 77-85, New York, NY, USA, 2010. ACM.
[Abstract, BibTeX, Authorizer, etc.]