Chris S. Crawford

Associate Professor
University of Alabama
Department of Computer Science

CV

About Me

I am an Associate Professor of Computer Science at the University of Alabama. I direct the Human-Technology Interaction Lab. My work focuses on Brain-Computer Interfaces (BCI) and Human-Robot Interaction (HRI). My goal is to leverage novel neurophysiological sensing technologies, software engineering, and robotics to create tools and applications that support the exploration of Brain-Robot Interaction (BRI). I use a combination of Human-Computer Interaction (HCI) and BCI research to investigate how users interact with systems capable of adapting to their cognitive state.


Past Research & Projects


Block-Based Interactive EEG Software

This research investigates ways to combine Visual Programming Languages (VPLs) with neurophysiological measurements of electroencephalography (EEG) signals acquired with a Brain-Computer Interface (BCI). These signals can be used to infer cognitive and affective states such as fatigue, cognitive workload, engagement, attention, and frustration. This work features a block-based programming environment capable of analyzing near real-time EEG data, which enables users to quickly develop neurofeedback applications. See 'Brain-Computer Interface for Novice Programmers' (SIGCSE '18) for more information.
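To make this concrete, below is a minimal sketch (in Python, not the lab's actual software) of the kind of computation such a block-based environment could generate behind the scenes: estimating an engagement index from EEG band power in a one-second window. The 128 Hz sample rate and the simulated signal are assumptions; a real application would read windows from a headset stream.

import numpy as np

FS = 128  # assumed sample rate in Hz; consumer EEG headsets are often near this

def band_power(window, fs, lo, hi):
    # Average spectral power of the window between lo and hi Hz (plain FFT).
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def engagement_index(window, fs=FS):
    # Classic beta / (alpha + theta) ratio, often used as an engagement estimate.
    theta = band_power(window, fs, 4, 8)
    alpha = band_power(window, fs, 8, 13)
    beta = band_power(window, fs, 13, 30)
    return beta / (alpha + theta)

# Simulated one-second window standing in for live headset samples.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(FS)
print(f"engagement index: {engagement_index(window):.3f}")

In the block-based environment, steps like these (band power, ratio, feedback) would correspond to draggable blocks rather than hand-written code.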

Dr. Crawford helps a student seated at a computer control a drone while another student inspects a drone on the floor

Brain-Drone Race

The Brain-Drone Race is a competition that tests competitors' cognitive ability and mental endurance. During this event, competitors must out-focus their opponents in a drone drag race fueled by electrical signals emitted from the brain. On April 16, 2016, 16 participants competed using Emotiv Insight headsets and DJI Phantom 2 drones. Although others had previously demonstrated drone control via EEG, this was the first public demonstration of a competitive Brain-Drone event. For more information visit www.braindronerace.com.
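The control idea can be sketched as a simple loop: a scalar focus score from the headset is thresholded into a forward-velocity command for the drone. The interfaces below are hypothetical stand-ins, not the Emotiv or DJI SDKs used at the event.

import random
import time

FOCUS_THRESHOLD = 0.6  # assumed per-competitor calibration point
MAX_SPEED = 1.0        # normalized forward speed

def read_focus():
    # Stand-in for a headset's attention metric in [0, 1].
    return random.random()

def focus_to_speed(focus):
    # Below the threshold the drone hovers; above it, speed scales linearly.
    if focus < FOCUS_THRESHOLD:
        return 0.0
    return MAX_SPEED * (focus - FOCUS_THRESHOLD) / (1.0 - FOCUS_THRESHOLD)

for _ in range(5):
    focus = read_focus()
    print(f"focus={focus:.2f} -> forward speed {focus_to_speed(focus):.2f}")
    time.sleep(0.1)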

Dr. Crawford sticks out his tongue; a second image to the right shows a computer display detecting the tongue protrusion

Perceptual Computing: Tongue Protrusion Detection

This work investigates a method of detecting tongue protrusion gestures using the tongue's color and texture characteristics. By taking advantage of recent advances in computer vision, real-time tongue gesture detection is possible with video streams from a standard web camera. Tongue gesture detection can supplement user interaction and enable more immersive experiences in applications such as games and video chat. It could also aid communication for mobility-impaired users. See 'Using Cr-Y Components to Detect Tongue Protrusion Gestures', presented at the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), for a description of a process that uses YCbCr color space manipulation and a support vector machine to detect left, right, and downward tongue protrusions in real time.
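A rough Python illustration of the color-space step, assuming OpenCV: isolating the Cr (red-difference) component relative to luma helps separate tongue pixels from surrounding skin and lips. The synthetic frame, region of interest, and histogram features below are stand-ins for the trained SVM pipeline described in the paper.

import cv2
import numpy as np

def tongue_features(frame_bgr, roi):
    # Cr-minus-Y histogram features for a mouth region of interest.
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
    y_ch, cr, _cb = cv2.split(ycrcb)
    cr_minus_y = cr.astype(np.int16) - y_ch.astype(np.int16)
    hist, _ = np.histogram(cr_minus_y, bins=32, range=(-255, 255))
    return hist / max(hist.sum(), 1)  # normalized feature vector

# Synthetic frame standing in for a webcam capture: skin-toned background
# with a reddish blob where a protruding tongue would appear.
frame = np.full((240, 320, 3), (180, 160, 200), dtype=np.uint8)
frame[150:200, 140:180] = (60, 60, 220)

features = tongue_features(frame, roi=(120, 130, 80, 80))
print("feature vector:", np.round(features, 3))
# In the full system these features would feed an SVM trained on examples
# of left, right, and downward protrusions (e.g., sklearn.svm.SVC).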


Televoting: Voting for Deployed Military Personnel

Many members of the armed services are stationed overseas during elections and are therefore unable to cast their ballots in person. Although the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) gives service members located overseas the right to mail in absentee ballots, these ballots often go uncounted because of shipping issues. This research investigates Televoting, an approach to Internet voting (E-Voting) modeled after telemedicine systems that utilizes video communication technology. Televoting attempts to address security issues that have plagued previous E-Voting platforms by producing a paper ballot instead of storing votes on a server. See 'Televoting: Secure, Overseas Voting' for a discussion of the system design and the voting process users experience when using Televoting.
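The central design point, producing a paper ballot rather than storing votes server-side, can be illustrated with a toy session flow. Everything below is a hypothetical Python sketch, not the actual Televoting implementation.

from dataclasses import dataclass

@dataclass
class Selection:
    race: str
    choice: str

def print_paper_ballot(selections):
    # Stand-in for the ballot printer on the receiving end.
    print("--- PAPER BALLOT ---")
    for s in selections:
        print(f"{s.race}: {s.choice}")
    print("--------------------")

def televoting_session(identity_verified, selections):
    # Identity is confirmed by an election official over the video link
    # before any selections are accepted.
    if not identity_verified:
        raise PermissionError("voter identity not verified")
    # The session ends by printing a paper ballot; selections are never
    # written to a database or other persistent storage.
    print_paper_ballot(selections)

televoting_session(True, [Selection("U.S. Senate", "Candidate A")])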

Screenshot of app showing an aerial satellite view with tree and car detection

Multi-robot Surveillance Systems

Unmanned robotic systems are used in the military for surveillance and reconnaissance missions. However, current systems pair one or more operators with a single robot. In addition, human-in-the-loop systems with an autonomy component introduce challenges, because operators tend to intervene more frequently when the autonomy does not behave as they expect. This research investigates the impact spatial and temporal cues have on operators' trust in human-multi-robot systems. See 'Affecting operator trust in intelligent multirobot surveillance systems' for a discussion of the system design.


Additional Media


News

Alumni Spotlight: Chris Crawford

By: University of Florida

Chris Crawford, Ph.D., may have taken a different path in life had a storm not fried his computer. “It was literally a lightning strike that propelled me into computer science,” said Dr. Crawford (HCC Ph.D. ’17), an assistant professor at the University of Alabama Department of Computer Science... Read More