Abstract
Pupil dilation is an effective indicator of cognitive and affective processes. Although several eyetracker systems on the market can provide effective solutions for pupil dilation measurement, there is a lack of tools for processing and analyzing the data provided by these systems. For this reason, we developed CHAP: open-source software written in MATLAB. This software provides a user-friendly graphical user interface for processing and analyzing pupillometry data. Our software creates uniform conventions for the preprocessing and analysis of pupillometry data and provides a quick and easy-to-use tool for researchers interested in pupillometry. To download CHAP or join our mailing list, please visit CHAP’s website: http://in.bgu.ac.il/en/Labs/CNL/chap.
Pupillometry (measurement of changes in pupil size) is an effective indicator of various perceptual, cognitive and affective processes (for reviews, see Mathôt, 2018; van der Wel & van Steenbergen, 2018). Pupil size is used in studies examining attentional processes (Breeden, Siegle, Norr, Gordon, & Vaidya, 2017; Geva, Zivan, Warsha, & Olchik, 2013; van Steenbergen & Band, 2013), mental workload (Beatty, 1982; Hess & Polt, 1964; Kahneman & Beatty, 1966; Klingner, Tversky, & Hanrahan, 2011), surprise (Braem, Coenen, Bombeke, van Bochove, & Notebaert, 2015; Preuschoff, ’t Hart, & Einhäuser, 2011), memory (Beatty & Kahneman, 1966; Goldinger & Papesh, 2012; Otero, Weekes, & Hutton, 2011), and decision making (Einhäuser, Koch, & Carter, 2010; Murphy, Vandekerckhove, & Nieuwenhuis, 2014a). Pupil size also reflects high-order visual processing (Einhäuser, Stout, Koch, & Carter, 2008; Kloosterman et al., 2015; Kuchinke, Võ, Hofmann, & Jacobs, 2007; Naber & Nakayama, 2013), and movement preparation (Wang, Brien, & Munoz, 2015; Wang, McInnis, Brien, Pari, & Munoz, 2016). Moreover, pupil size is commonly used as an indicator of arousal and affective processing (Bradley, Miccoli, Escrig, & Lang, 2008; Cohen, Moyal, & Henik, 2015; Hess & Polt, 1960; Partala & Surakka, 2003; Siegle, Steinhauer, Carter, Ramel, & Thase, 2003; Unsworth & Robison, 2017; van Steenbergen, Band, & Hommel, 2011). Pupillometry (mainly pupil light reflex) is also frequently used in ophthalmology studies assessing the physiology of pupil dynamics, as well as changes to pupil size due to diseases such as multiple sclerosis and diabetes (Barbur, Harlow, & Sahraie, 2007; Binda et al., 2017; Feigl et al., 2012; Kostic et al., 2016; McDougal & Gamlin, 2010; Phillips, Szabadi, & Bradshaw, 2001; Wilhelm, Wilhelm, Moro, & Barbur, 2002).
Researchers who are interested in pupillometry use eyetracking devices (e.g., EyeLink, Tobii, Ergoneers, and Mangold), which provide information on eye movements as well as pupil size. Although there are open-source tools for processing the eye movement data recorded by these devices (Li, 2017; Sogo, 2013; Zhegallo & Marmalyuk, 2015), there is a lack of tools for processing and analyzing pupillometry data.
Some of the current eyetracker companies offer basic tools for processing pupillometry data (e.g., Data Viewer by SR Research, Tobii Pro Lab by Tobii, etc.), but these tools follow different conventions and include only basic processing of the data. For example, these tools do not correct for blinks or allow for a time-course visualization of the data. Although there have been some attempts to provide guidelines and scripts for the preprocessing of pupil size data (Kret & Sjak-Shie, 2018; Mathôt, Fabius, Van Heusden, & Van der Stigchel, 2018), these works do not offer researchers tools for advanced processing and analyses.
For this reason, we developed CHAP (Cohen and Hershman Analysis Pupil), open-source software written in MATLAB. This software provides a user-friendly graphical user interface (GUI) for processing and analyzing pupillometry data. The software receives input of a standard data file from common eyetrackers such as EyeLink (.edf file), Tobii, Applied Science Laboratories (ASL), or Eye Tribe (.csv file), and provides both preprocessing and analysis of the data. Our software aims to create uniform conventions for the preprocessing and analysis of pupillometry data, and to offer a quick and easy-to-implement tool for researchers interested in pupillometry.
CHAP’s input
CHAP receives input of a standard data file from various eyetrackers. It receives input from EyeLink (.edf file), Eye Tribe (both .txt and .csv files; see Appendix A for an example), and Tobii and ASL eyetrackers (.csv files; see Appendixes A and B for examples). CHAP also supports data files from other eyetracker devices, as long as they have a specific format (.dat files; see Appendix B for an example). These input files (.edf, .csv, .txt, or .dat files) include data on pupil size, as well as information the user entered when programming the experiment (e.g., trial number, condition). Various software programs, such as E-Prime (application software package; Psychology Software Tools, 2001), MATLAB, and Experiment Builder (SR Research program provided by EyeLink), can be used to build pupillometry experiments and their use depends on the eyetracker and on specific task requirements. For processing EyeLink data files (.edf), CHAP uses the EDFMEX software (Beta Version 0.9; SR Research, Ontario, Canada) and the EDFMAT software (Version 1.6; developed by Adrian Etter and Marc Biedermann, University of Zurich), which read the EyeLink data files and transform them into a MATLAB structure using the EyeLink edf application programming interface (API). For processing Eye Tribe data files (i.e., .txt files resulting from the streamed data of the Eye Tribe interface), Python should be installed. Other output files, such as the files created by PyGaze (Dalmaijer, Mathôt, & Van der Stigchel, 2014) or PsychoPy (Peirce, 2007), can be converted to both .dat and .csv files (see Appendix B). For processing input files in both a .dat and a .csv format, CHAP does not require any additional packages.
Running CHAP
After installing CHAP (see Appendix C for the installation instructions), the user should type chap in MATLAB’s command window or run the main file chap.m from the software directory. Then, in order to process and analyze a new data file, the user should click on the New project button in the main GUI (see Fig. 1). Next the user will be asked to choose a data file, as follows: For EyeLink data, the file extension should be .edf (the standard output file of EyeLink; e.g., sub01.edf); for Eye Tribe data, the file extension should be .txt (the standard output file of Eye Tribe; e.g., sub01.txt); for ASL data, the file extension should be .asl (a .csv file renamed with the .asl extension; e.g., sub01.asl); and for Tobii data, the extension should be .tbi (.csv file renamed with the .tbi extension; e.g., sub01.tbi).
Loading, cleaning, and compressing files
After CHAP loads the data file and converts it to a .mat file, it keeps relevant measures (pupil size and the x- and y-positions of the pupil) and removes irrelevant information saved by the eyetracking system, such as indications of fixations and saccades. After this cleaning, CHAP processes the data (see below) and creates a file with the extension .chp (e.g., sub01.chp). This CHAP file can later be loaded by clicking on the Open existing project button in the main GUI (Fig. 1). This compression of the data, which results in a .chp file that is approximately ten times smaller than the original data file, allows for fast and efficient processing of the data.
Trials, variables, and events
Pupil size is usually recorded in experiments that consist of different trial types. CHAP uses user-defined information to split the data into trials, to analyze different events within a trial (e.g., pupil size during stimulus presentation) or to compare different conditions (which we call variables; e.g., congruent vs. incongruent Stroop stimuli).
To detect trial onset, CHAP uses EyeLink standards. Namely, CHAP looks for logs with a string in the format TRIALID [trial id]. This log is created by EyeLink when Experiment Builder is used for programming the task. In the case of other software (MATLAB, E-Prime), this log should be defined by the user (see Appendix A for examples). For trial offset, CHAP uses logs with a string that is either TRIAL_RESULT 0 (the EyeLink standard format) or TRIAL_END (see Appendix A for examples). If the user did not send a log that defines a trial’s offset, CHAP will use the onset of the subsequent trial as the offset of the current trial.
CHAP also detects user-defined variables and events (up to 64 of each). These variables and events are defined by the researcher when building the experiment and should be logged into the data file. Variables should indicate conditions (e.g., congruency in a Stroop task, valence in a study assessing emotional processing), as well as other information about the trial (e.g., block number, accuracy). Events should indicate the relevant factors that occur during a trial (e.g., fixation, stimulus, inter-trial interval).
CHAP recognizes the variables using logs with text strings that appear in the data file, according to EyeLink standards (!V trial_var [variable name] [variable value]). For example, a possible string could be !V trial_var congruency congruent. In this example, the variable name is congruency and the value is congruent. CHAP also supports other types of variable formats by reading an external .csv file created by the user (see Appendix D). This file should include the variables for each trial (i.e., trial IDs, variable names, and variable values). This external file is especially helpful when the values of some of the variables are unknown during data acquisition. This can happen, for example, in a Stroop task in which participants provide a vocal response that is coded as correct or incorrect only at the end of the experiment. Another example is a memory experiment in which it is unknown during the encoding phase whether the stimulus will be remembered or forgotten. Therefore, CHAP allows the user to add information about trial accuracy or any other relevant information that does not appear in the data file, using an external .csv file. To use this option, the researcher should create a .csv file for each participant. This .csv file should have the same name as the main data file, with the addition of the dedicated suffix _vars (e.g., sub01_vars.csv), and should be located in the same folder as the main data file.
CHAP recognizes events using logs that consist of the string !E trial_event_var [event name]. For example, a possible string that indicates the appearance of a stimulus could be !E trial_event_var stimulus_onset. In addition, in contrast to the straightforward approach of logging events at the moment they occur, CHAP also supports the following pattern (which is used in EyeLink): [milliseconds before the trial’s ending] [event name]. This means that instead of logging the event’s information following the event’s onset, the user can send the relevant log at the end of the trial. This approach is useful when using third-party software, such as E-Prime. Similar to supporting the addition of external variables, CHAP also supports the addition of external events. By using dedicated .csv files, the user can specify events that were not logged into the data file during recording (see Appendix D). These files should be selected manually by the user during the data file loading.
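The log conventions above can be illustrated with a short parser sketch. The Python snippet below is not CHAP’s code; the input format (a list of timestamp–message pairs) and all names are assumptions for illustration only:

```python
def parse_logs(logs):
    """Split (timestamp, message) logs into trials with variables and events.

    Follows the EyeLink-style conventions described above:
      TRIALID <id>                  -> trial onset
      TRIAL_RESULT 0 / TRIAL_END   -> trial offset
      !V trial_var <name> <value>  -> condition variable
      !E trial_event_var <name>    -> event marker
    """
    trials, current = [], None
    for ts, msg in logs:
        if msg.startswith("TRIALID"):
            if current is not None:  # no offset log: next onset closes the trial
                current["offset"] = ts
                trials.append(current)
            current = {"id": msg.split()[1], "onset": ts,
                       "variables": {}, "events": [], "offset": None}
        elif current is None:
            continue  # message outside any trial
        elif msg.startswith("TRIAL_RESULT 0") or msg.startswith("TRIAL_END"):
            current["offset"] = ts
            trials.append(current)
            current = None
        elif msg.startswith("!V trial_var"):
            _, _, name, value = msg.split(maxsplit=3)
            current["variables"][name] = value
        elif msg.startswith("!E trial_event_var"):
            current["events"].append((msg.split()[2], ts))
    return trials

# Hypothetical log stream for one congruent Stroop trial
logs = [(1000, "TRIALID 1"),
        (1001, "!V trial_var congruency congruent"),
        (1500, "!E trial_event_var stimulus_onset"),
        (3000, "TRIAL_RESULT 0")]
trials = parse_logs(logs)
```

Note that the delayed-logging pattern ([milliseconds before the trial’s ending] [event name]) would need an extra back-dating step not shown here.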
Data preprocessing
After CHAP reads the data and converts it to a .chp file, pupil data (pupil size in pixels or any other units) is preprocessed. Preprocessing includes:
1.
Exclusion of outlier samples (Akdoğan, Balcı, & van Rijn, 2016; Cohen et al., 2015; Wainstein et al., 2017). CHAP converts pupil size data to z score values based on the mean and standard deviation (SD) calculated for each trial separately. Z score values above or below a user-defined threshold, as entered in the Z Outliers text field in the Data Pre-processing GUI (see Fig. 2), will be converted to missing (not a number: NaN) values.
2.
Blink correction. CHAP searches for blinks (or any other missing values) in the data and reconstructs the pupil size for each trial from the relevant samples using either linear interpolation (Bradley et al., 2008; Cohen et al., 2015; Siegle et al., 2003; Steinhauer, Condray, & Kasparek, 2000) or cubic-spline interpolationFootnote 1 (Mathôt, 2013; Mathôt et al., 2018; Smallwood et al., 2011; Wainstein et al., 2017). The user can select the interpolation method from a dropdown list that appears in the Data Pre-processing GUI (see Fig. 2). To find blink onset and offset, CHAP uses a novel algorithm that we have recently developed (Hershman, Henik, & Cohen, 2018), which takes into account the change in pupil size between subsequent samples. The user can see the blink correction by choosing the View trials checkbox in the Data Pre-processing GUI (see Fig. 2). Accurate correction of blinks is highly important, because it prevents artifacts in the data (Hershman et al., 2018).
3.
Exclusion of outlier trials (Blaser, Eglington, Carter, & Kaldy, 2014; Cohen et al., 2015; Hershman & Henik, 2019; Nyström, Andersson, Holmqvist, & van de Weijer, 2013; Schmidtke, 2014; Wainstein et al., 2017). CHAP removes trials on the basis of a user-defined percentage of missing pupil observations within a trial (Missing values (%) text field in the Data Pre-processing GUI; see Fig. 2). Missing observations occur during blinks, head movements, or inability of the eyetracking system to track the pupil. Trials that contain a high percentage of missing samples should not be used in the analysis (Nyström et al., 2013). We recommend excluding trials with more than 20% missing samples.
4.
Exclusion of participants. CHAP excludes participants who do not meet a user-defined criterion for the minimum number of valid trials per condition (Min. trials text field in the Data Pre-processing GUI; see Fig. 2). If a participant does not reach the user-defined value (e.g., due to low tracking quality the participant has fewer than ten valid congruent trials), an error message will appear and the participant will not be included in the analysis.
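The per-trial preprocessing steps above can be sketched as follows (Python; illustrative only — parameter names, thresholds, and the exact order of operations are assumptions, not CHAP’s actual implementation, and CHAP’s blink detection uses the Hershman et al., 2018, algorithm rather than plain NaN detection):

```python
import numpy as np

def preprocess_trial(pupil, z_thresh=2.5, max_missing=0.20):
    """Sketch of per-trial preprocessing: (1) z-score outlier samples
    become missing, (2) the trial is rejected if too many samples are
    missing, (3) remaining gaps are filled by linear interpolation."""
    pupil = np.asarray(pupil, dtype=float)
    # 1. convert to z scores within the trial; mark outliers as missing
    z = (pupil - np.nanmean(pupil)) / np.nanstd(pupil)
    pupil[np.abs(z) > z_thresh] = np.nan
    # 2. exclude trials with too high a percentage of missing samples
    missing = np.isnan(pupil)
    if missing.mean() > max_missing:
        return None  # trial excluded from analysis
    # 3. linear interpolation across the missing stretches (blinks)
    idx = np.arange(len(pupil))
    pupil[missing] = np.interp(idx[missing], idx[~missing], pupil[~missing])
    return pupil
```

A cubic-spline variant would swap `np.interp` for `scipy.interpolate.CubicSpline` fitted on the valid samples.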
Selecting variables and events
The Data Pre-processing GUI presents lists of the variables and events that were logged during the experiment (see Fig. 2). The user can choose what variables and events he or she wants to analyze. After selecting the parameters for excluding trials and participants and the desired variables and events, the user should click the Continue button (see the screenshot in Fig. 2). Then, the Condition-selecting GUI will be opened. This GUI allows the user to select the desired conditions (up to seven) from a list. The list includes all the possible combinations, depending on the selected variables (see the example in Fig. 3).
Time-course visualization
After the user selects the relevant variables and events, CHAP presents data for each selected condition (mean pupil size across all valid trials) and indicates the selected events on a dedicated graph (see Fig. 4). By default, the y-axis represents mean pupil size (in the units given in the data file) as a function of time (in milliseconds). Each curve represents a condition (the selected conditions appear in the legend), and the vertical lines represent the mean onset time of each selected event (as selected in the Data Pre-processing GUI). The title (in this case, 20170625a) represents the participant’s ID, as defined by the name of the file (in this case, 20170625a.txt).
For each of the selected conditions, CHAP presents the number of trials that were used to generate the mean, as well as the number of trials that were defined as outliers and therefore were removed from analysis (see Fig. 4). CHAP also presents the mean onset of the selected events relative to the beginning of the trial, as well as the mean offset of each condition.
Advanced processing options
In addition to basic preprocessing of the data (Fig. 5a), CHAP supports several advanced analysis features (Fig. 5b–g). These features can be selected by the user from the Advanced Processing Options and Time-Course Visualization GUI (see the configuration section in Fig. 4) and are outlined below.
Bins
Down-sampling (reducing the sampling rate) is performed by averaging adjacent samples of an overall X (user-defined) milliseconds. Specifically, CHAP divides the data into chunks that include samples with a total duration of X ms each. Then, CHAP calculates the average value for each of these bins (see Fig. 5b for an example). The onset of each bin is defined as the time of the first sample (see Fig. 6 for an example). By using this feature, the user can aggregate the data for analysis or for smoothing purposes (Diede & Bugg, 2017; Kang & Wheatley, 2015; Snell, Mathôt, Mirault, & Grainger, 2018). This option could also be useful in the case of comparing data collected using different eyetrackers or different sampling rates (Titz, Scholz, & Sedlmeier, 2018), or to correlate between pupil size and another measure, such as blood-oxygen-level dependent (BOLD) activity (Murphy, O’Connell, O’Sullivan, Robertson, & Balsters, 2014b; Yellin, Berkovich-Ohana, & Malach, 2015).
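This binning procedure can be sketched as follows (Python; illustrative, assuming a fixed sampling rate — not CHAP’s code):

```python
import numpy as np

def downsample(samples, sample_rate_hz, bin_ms):
    """Average adjacent samples into bins of bin_ms milliseconds.

    Each bin's onset is the time of its first sample; a trailing
    partial bin is dropped for simplicity."""
    samples = np.asarray(samples, dtype=float)
    per_bin = int(round(bin_ms * sample_rate_hz / 1000))  # samples per bin
    n_bins = len(samples) // per_bin
    binned = samples[:n_bins * per_bin].reshape(n_bins, per_bin)
    onsets_ms = np.arange(n_bins) * per_bin * 1000 / sample_rate_hz
    return onsets_ms, binned.mean(axis=1)
```

For example, at 1000 Hz with 4-ms bins, eight samples collapse into two bin means with onsets at 0 and 4 ms.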
Time window
In most psychological experiments, each trial includes several events (e.g., fixation cross, target, intertrial interval). CHAP allows the user to select a specific time window (from one event to another) for analysis (Fig. 5c). When the user uses this feature, the data are aligned to the onset of the first selected event (Kloosterman et al., 2015; Snell et al., 2018; Turi, Burr, & Binda, 2018). This alignment of the data is especially important when the event does not start at a fixed time (e.g., a stimulus appears after 2 s in some trials, and after 2.5 s in other trials). This can happen when stimulus onset is determined by the participant response time (which varies from trial to trial) or when a stimulus appears after a variable inter-stimulus interval (ISI).
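A minimal sketch of this event alignment, assuming per-trial event onsets in milliseconds and a fixed sampling rate (Python; illustrative, not CHAP’s implementation):

```python
import numpy as np

def align_to_event(trials, event_onsets_ms, window_ms, sample_rate_hz=1000):
    """Realign each trial's trace so that time 0 is the event onset.

    trials: list of 1-D pupil traces; event_onsets_ms: event time per trial.
    Returns an array of shape (n_trials, window_samples); trials too short
    for the full window are padded with NaN."""
    per_ms = sample_rate_hz / 1000
    win = int(window_ms * per_ms)
    aligned = np.full((len(trials), win), np.nan)
    for i, (trace, onset) in enumerate(zip(trials, event_onsets_ms)):
        start = int(onset * per_ms)  # sample index of the event onset
        chunk = np.asarray(trace, dtype=float)[start:start + win]
        aligned[i, :len(chunk)] = chunk
    return aligned
```

This is what makes trials with variable ISIs comparable: a stimulus appearing at 2 s in one trial and 2.5 s in another lands at the same column in the aligned array.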
Relative change
Instead of using the units recorded by the eyetracker device (e.g., number of pixels, in the case of an EyeLink device), the user can convert the data to a relative change score (Mathôt et al., 2018). The relative change indicates either the difference from baseline (i.e., subtractive baseline correction; Binda & Murray, 2015; Binda, Pereverzeva, & Murray, 2014; Steinhauer, Siegle, Condray, & Pless, 2004), relative = value − baseline (Fig. 5d), or the percent change from baseline (divisive baseline correction; Binda et al., 2014; Kloosterman et al., 2015; Mathôt, van der Linden, Grainger, & Vitu, 2013; Snell et al., 2018), relative = [(value − baseline) / baseline] × 100 (Fig. 5f).
The baseline is defined by the user as the average X ms prior to onset of the first event (Binda et al., 2014; Mathôt et al., 2013; Snell et al., 2018).
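The two baseline corrections can be sketched as follows (Python; illustrative — the baseline here is simply the first X ms of the trace, standing in for the X ms preceding the first event):

```python
import numpy as np

def baseline_correct(trace, baseline_ms, sample_rate_hz=1000,
                     method="subtractive"):
    """Subtractive or divisive baseline correction of a pupil trace."""
    trace = np.asarray(trace, dtype=float)
    n = int(baseline_ms * sample_rate_hz / 1000)   # baseline window length
    baseline = np.nanmean(trace[:n])
    if method == "subtractive":
        return trace - baseline                    # value - baseline
    return (trace - baseline) / baseline * 100     # percent change from baseline
```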
Conversion to Z-scores/millimeters
Most eyetrackers provide pupil data in arbitrary units. These units (or values) are dependent on the eyetracker definitions during recording, the position of the camera, and the distance between the camera and the participant, as well as other parameters, such as the lighting conditions of the room. Normalization of the data (converting the data to comparable values, such as z-score values instead of arbitrary units) makes it possible to analyze and compare participants/experiments whose data were recorded with different conditions.
If the user chooses to convert arbitrary units to z scores (Cohen et al., 2015; Einhäuser et al., 2008; Kang & Wheatley, 2015; Wainstein et al., 2017), CHAP will calculate the z-score values on the basis of the entire time course of the participant (i.e., the mean and SD will be calculated on all valid trials for that participant; Fig. 5e). Alternatively, the user can choose to convert arbitrary units to millimeters (Binda & Murray, 2015; Binda et al., 2014; Blaser et al., 2014; Siegle, Steinhauer, & Thase, 2004) by recording an artificial pupil of fixed size and providing CHAP with its size in both millimeters and arbitrary units. Pupil size is recorded as either a diameter or an area measurement, and CHAP recognizes the correct measurement automatically once the data are loaded.
CHAP uses the following equations to get the size in millimeters. In the case of diameter,

pupil_diameter[mm] = pupil_diameter[arbitrary_units] × ratio,

where ratio is the ratio between the artificial pupil diameter (AP) in millimeters and in pixels, described by

ratio = AP_diameter[mm] / AP_diameter[arbitrary_units].

In the case of area,

pupil_diameter[mm] = pupil_diameter[arbitrary_units] × ratio,

where pupil_diameter[arbitrary_units] is described by

pupil_diameter[arbitrary_units] = 2 × √(pupil_area[arbitrary_units] / π),

and ratio is the ratio between the artificial pupil diameter (AP) in millimeters and in pixels, described by

ratio = AP_diameter[mm] / (2 × √(AP_area[arbitrary_units] / π)).
For conversion of the data to millimeters, the user should add a .csv file for each participant, with the same name as the main data file and the addition of the suffix _mm (e.g., sub01_mm.csv). This file should be located in the same folder as the main data file (see Appendix E).
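The conversion equations can be sketched as follows (Python; illustrative, with ap_mm and ap_units denoting the artificial pupil’s recorded size in millimeters and in arbitrary units, respectively — diameter units in the diameter case, area units in the area case):

```python
import math

def to_millimeters(pupil, ap_mm, ap_units, measure="diameter"):
    """Convert an arbitrary-unit pupil value to millimeters using an
    artificial pupil of known size, per the equations above."""
    if measure == "diameter":
        ratio = ap_mm / ap_units           # AP diameter: mm per arbitrary unit
        return pupil * ratio
    # area case: recover diameter in arbitrary units from the recorded area
    pupil_diameter_units = 2 * math.sqrt(pupil / math.pi)
    ratio = ap_mm / (2 * math.sqrt(ap_units / math.pi))
    return pupil_diameter_units * ratio
```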
Scattering
The user can add scattering around the mean for each curve. The scattering is either the standard error or the 95% confidence interval (Fig. 5g).
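A minimal sketch of these scatter bands (Python; illustrative — the 95% interval here uses a normal approximation, which may differ from CHAP’s exact computation):

```python
import numpy as np

def scatter_bands(trials, ci95=False):
    """Mean curve with shading bounds: standard error or ~95% CI.

    trials: array of shape (n_trials, n_samples)."""
    trials = np.asarray(trials, dtype=float)
    mean = trials.mean(axis=0)
    sem = trials.std(axis=0, ddof=1) / np.sqrt(trials.shape[0])
    half = 1.96 * sem if ci95 else sem   # normal-approximation half-width
    return mean - half, mean, mean + half
```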
Output files
CHAP saves four types of output files. The presented figure can be saved by the user as a .png (standard image file) or as a .fig (standard MATLAB figure file) by clicking the Save data button in the Advanced Processing Options and Time-Course Visualization GUI (Fig. 4). In addition, the processed data can be saved as a .csv or a MATLAB file that contains all the data concerning the relevant variables and events for each participant (by clicking the Save data button in the Advanced Processing Options and Time-Course Visualization GUI; see Fig. 4).
The .csv and .mat files include average pupil size for each condition across all valid trials (see Appendix F). These data could be used later for analysis by external code/third-party software. Other information that is saved by CHAP includes participant name (as defined by the data file name), trial number, value for each defined variable, and event onset relative to the beginning of the selected time window (see Appendix F). In addition to mean pupil size, CHAP also provides information about the mean pupil size in the first and last bins and the minimum and maximum pupil size values during the selected time window. CHAP also provides information about the number of eye blinks and their durations. This information could be useful in studies that are interested in blinks, such as studies assessing startle response (e.g., Graham, 1975) or blink length and eye-blink rate (Monster, Chan, & O’Connor, 1978), measures known to be correlated with dopamine level (Jongkees & Colzato, 2016; Karson, 1983).
Group analysis
Once users process one data file, they can run the same configuration for multiple data files (e.g., different participants). This option is available in the Advanced Processing Options and Time-Course Visualization GUI (see Fig. 4) by clicking on the Group analysis button. The user should select the folder that contains the data files and choose a name for the analysis. Then, output files will be created for each data file and for the mean of all participants (see Fig. 5h for the means across 19 participants). In addition to these files, CHAP also provides .csv files that include information about the number of trials that were used in the analysis (valid_trials.csv), as well as the number of excluded trials (outliers.csv), for each condition for each participant. Moreover, CHAP provides .csv files that include information about the mean onset of each event for each trial for each participant (time-course_data.csv and trials_data.csv; see Appendixes F & G).
Parameters for statistical analysis
CHAP provides commonly used parameters for analysis, such as the mean (Binda et al., 2014; de Gee, Knapen, & Donner, 2014; Hemmati, 2017; Laeng, Ørbo, Holmlund, & Miozzo, 2011), peak amplitude (Binda & Murray, 2015; Hemmati, 2017; Hess & Polt, 1964; Schmidtke, 2014), peak latency (Hess & Polt, 1964; Koelewijn, de Kluiver, Shinn-Cunningham, Zekveld, & Kramer, 2015; Laeng et al., 2011; Schmidtke, 2014), dip amplitude (Bitsios, Szabadi, & Bradshaw, 2004; Henderson, Bradley, & Lang, 2014; Steinhauer et al., 2000), and dip latency (Lanting, Strijers, Bos, Faes, & Heimans, 1991; Shah, Kurup, Ralay Ranaivo, Mets-Halgrimson, & Mets, 2018; Steinhauer et al., 2000). These parameters can be selected for a specific time window or for the entire trial. By selecting the desired time window (From and To fields in the Statistical Analysis GUI; Fig. 7), these parameters will be calculated on the basis of the data within the selected time window (see the table in Fig. 7).
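These parameters can be sketched as simple reductions over a windowed trace (Python; illustrative — the function and key names are hypothetical, not CHAP’s output fields):

```python
import numpy as np

def trial_parameters(trace, times_ms):
    """Common pupillometry parameters for a (windowed) pupil trace:
    mean, peak/dip amplitude, and peak/dip latency."""
    trace = np.asarray(trace, dtype=float)
    peak, dip = int(np.nanargmax(trace)), int(np.nanargmin(trace))
    return {"mean": float(np.nanmean(trace)),
            "peak_amplitude": float(trace[peak]),
            "peak_latency_ms": times_ms[peak],
            "dip_amplitude": float(trace[dip]),
            "dip_latency_ms": times_ms[dip]}
```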
Statistical analysis
CHAP provides two approaches to statistical analysis. CHAP has the option to run a repeated measures analysis of variance (ANOVA). The output of this analysis includes the F value (including the degrees of freedom), p value, ηp2, and MSE. In addition to the main effect, CHAP will run a series of orthogonal contrasts and will present them in the same table with the F value, p value, and MSE.
In addition to this parametric approach, CHAP also supports a Bayesian approach. Specifically, CHAP can run a Bayesian paired-sample t test with a Cauchy prior width of r = .707 for effect size based on the alternative hypothesis (Rouder, Morey, Speckman, & Province, 2012). In this analysis, the null hypothesis means that there is no difference between the tested conditions. The Bayesian analysis provides a Bayes factor (Kass & Raftery, 1995) for each condition. This Bayes factor quantifies the evidence in favor of the alternative over the null hypothesis. Similar to the output of the parametric analysis, by selecting the Bayesian approach (Statistical approach dropdown in the Statistical Analysis GUI; Fig. 7), CHAP will run and present the output for the Bayesian analysis. The output for each condition includes the t value, BF10 (evidence in favor of the alternative hypothesis), BF01 (evidence in favor of the null hypothesis), Cohen’s d, and the number of observations.
Temporal analysis
CHAP also supports temporal analysis of the data (Binda et al., 2014; Einhäuser et al., 2008; Mathôt et al., 2013; Siegle et al., 2003). By using temporal analysis, the user can investigate the temporal differences between conditions across the time course or in a selected time window.
Similar to the analysis described above, CHAP supports both Bayesian and classical approaches (selected from the Statistical approach dropdown in the Statistical Analysis GUI; see Fig. 7). Specifically, CHAP runs a series of paired-sample t tests between each two conditions over the entire selected time course (Fig. 7), as defined in the Advanced Processing Options and Time-Course Visualization GUI.
The output provided by CHAP (presented in a figure and also saved as a .csv file for each comparison) is either a Holm–Bonferroni-corrected p value as a function of time (by using the classical approach) or Bayes factors (both BF10 and BF01) as a function of time (by using the Bayesian approach). In addition to the descriptive information about the Bayes factor as a function of time (using the Bayesian approach), CHAP also provides inference information about the differences and similarities between conditions.
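The classical variant of this temporal analysis can be sketched as follows (Python with NumPy/SciPy; illustrative — CHAP’s exact correction procedure may differ in detail):

```python
import numpy as np
from scipy import stats

def temporal_ttest(cond_a, cond_b):
    """Paired t test at every time point, Holm-Bonferroni corrected.

    cond_a, cond_b: arrays of shape (n_participants, n_samples).
    Returns the corrected p value for each sample."""
    cond_a = np.asarray(cond_a, dtype=float)
    cond_b = np.asarray(cond_b, dtype=float)
    t, p = stats.ttest_rel(cond_a, cond_b, axis=0)   # one test per time point
    # Holm-Bonferroni: sort raw p values, scale the i-th smallest by
    # (m - i), enforce monotonicity, then restore the original order
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * (m - np.arange(m))
    adj_sorted = np.minimum(np.maximum.accumulate(scaled), 1.0)
    adj = np.empty(m)
    adj[order] = adj_sorted
    return adj
```

The Bayesian variant would replace the t tests with per-sample Bayes factors (e.g., a JZS paired t test), which is omitted here.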
Discussion
In this article, we have presented CHAP, open-source software written in MATLAB for processing and analyzing pupillometry data. CHAP provides a user-friendly tool for researchers who are interested in preprocessing and analysis of pupillometry data. CHAP supports data from a wide range of eyetracker devices and includes what we believe to be the most up-to-date processing and analysis options. By using well-established approaches, CHAP makes working with pupillometry data easier and provides efficient solutions for preprocessing, analysis, and visualization of the data. Moreover, by introducing CHAP we propose uniform conventions for the preprocessing and analysis of pupillometry data. CHAP’s user interface makes it an easy-to-implement tool for both experienced MATLAB users and researchers who do not have any experience with programming. CHAP can be easily installed on Windows, Linux, and Mac operating systems (see Appendix C for instructions), and experienced users can also modify the code if they wish (since the MATLAB script is available after installation).
Several challenges in the analysis of pupil data should be addressed in future work. First, because pupil size is influenced by gaze position (Brisson et al., 2013; Gagl, Hawelka, & Hutzler, 2011), these data should be used to exclude trials with large eye movements or to correct for pupil size. We plan to develop a correction for measurement of pupil changes during eye movements. Such corrections will be incorporated in future CHAP versions. Second, currently CHAP supports only one-way ANOVA and t tests. In the future, we plan to add more options for data analyses. In addition, CHAP currently supports only a within-group analysis. We plan to add options for the analysis of between-group effects. Finally, we plan to create a Python version of CHAP so that it can be used by users who do not have MATLAB.
To summarize, this article has presented CHAP, a tool for processing and analyzing pupillometry data. CHAP is already being used by several groups, and it is our hope that more researchers will use it. To download CHAP or join our mailing list, please visit CHAP’s website: http://in.bgu.ac.il/en/Labs/CNL/chap.
Author note
This work was supported by funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC Grant Agreement no. 295644. We thank Desiree Meloul for helpful comments and useful input on this article. We also thank Yoav Kessler and Joseph Tzelgov for their advice regarding statistical analysis, and Stuart Steinhauer for his helpful input on pupil parameters. In addition, we thank Sam Hutton for his help with the adaptation of CHAP to EyeLink data, Reem Mulla for her help with the adaptation of CHAP to ASL data, Leah Fostick for her help with the adaptation of CHAP to Tobii data, and Nicole Amichetti for her help with the adaptation of CHAP for Mac users. Finally, we thank all present CHAP users for their helpful feedback and advice, especially Dina Devyatko and Dalit Milshtein, who also provided feedback on this article.
Notes
Cubic-spline interpolation is not recommended when the data are relatively noisy or when the sampling rate is relatively low (at or below 60 Hz).
References
Akdoğan, B., Balcı, F., & van Rijn, H. (2016). Temporal expectation indexed by pupillary response. Timing & Time Perception, 4, 354–370. doi:https://doi.org/10.1163/22134468-00002075
Barbur, J. L., Harlow, A. J., & Sahraie, A. (2007). Pupillary responses to stimulus structure, colour and movement. Ophthalmic and Physiological Optics, 12, 137–141. doi:https://doi.org/10.1111/j.1475-1313.1992.tb00276.x
Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91, 276–292. doi:https://doi.org/10.1037/0033-2909.91.2.276
Beatty, J., & Kahneman, D. (1966). Pupillary changes in two memory tasks. Psychonomic Science, 5, 371–372. doi:https://doi.org/10.3758/BF03328444
Binda, P., & Murray, S. O. (2015). Spatial attention increases the pupillary response to light changes. Journal of Vision, 15(2), 1. doi:https://doi.org/10.1167/15.2.1
Binda, P., Pereverzeva, M., & Murray, S. O. (2014). Pupil size reflects the focus of feature-based attention. Journal of Neurophysiology, 112, 3046–3052. doi:https://doi.org/10.1152/jn.00502.2014
Binda, P., Straßer, T., Stingl, K., Richter, P., Peters, T., Wilhelm, H., . . . Kelbsch, C. (2017). Pupil response components: attention-light interaction in patients with Parinaud’s syndrome. Scientific Reports, 7, 10283. doi:https://doi.org/10.1038/s41598-017-10816-x
Bitsios, P., Szabadi, E., & Bradshaw, C. M. (2004). The fear-inhibited light reflex: Importance of the anticipation of an aversive event. International Journal of Psychophysiology, 52, 87–95. doi:https://doi.org/10.1016/J.IJPSYCHO.2003.12.006
Blaser, E., Eglington, L., Carter, A. S., & Kaldy, Z. (2014). Pupillometry reveals a mechanism for the autism spectrum disorder (ASD) advantage in visual tasks. Scientific Reports, 4, 4301. doi:https://doi.org/10.1038/srep04301
Bradley, M. M., Miccoli, L., Escrig, M. A., & Lang, P. J. (2008). The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology, 45, 602–607. doi:https://doi.org/10.1111/j.1469-8986.2008.00654.x
Braem, S., Coenen, E., Bombeke, K., van Bochove, M. E., & Notebaert, W. (2015). Open your eyes for prediction errors. Cognitive, Affective, & Behavioral Neuroscience, 15, 374–380. doi:https://doi.org/10.3758/s13415-014-0333-4
Breeden, A. L., Siegle, G. J., Norr, M. E., Gordon, E. M., & Vaidya, C. J. (2017). Coupling between spontaneous pupillary fluctuations and brain activity relates to inattentiveness. European Journal of Neuroscience, 45, 260–266. doi:https://doi.org/10.1111/ejn.13424
Brisson, J., Mainville, M., Mailloux, D., Beaulieu, C., Serres, J., & Sirois, S. (2013). Pupil diameter measurement errors as a function of gaze direction in corneal reflection eyetrackers. Behavior Research Methods, 45, 1322–1331. doi:https://doi.org/10.3758/s13428-013-0327-0
Cohen, N., Moyal, N., & Henik, A. (2015). Executive control suppresses pupillary responses to aversive stimuli. Biological Psychology, 112, 1–11. doi:https://doi.org/10.1016/j.biopsycho.2015.09.006
Dalmaijer, E. S., Mathôt, S., & Van der Stigchel, S. (2014). PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behavior Research Methods, 46, 913–921. doi:https://doi.org/10.3758/s13428-013-0422-2
de Gee, J. W., Knapen, T., & Donner, T. H. (2014). Decision-related pupil dilation reflects upcoming choice and individual bias. Proceedings of the National Academy of Sciences, 111, E618–E625. doi:https://doi.org/10.1073/pnas.1317557111
Diede, N. T., & Bugg, J. M. (2017). Cognitive effort is modulated outside of the explicit awareness of conflict frequency: Evidence from pupillometry. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 824–835. doi:https://doi.org/10.1037/xlm0000349
Einhäuser, W., Koch, C., & Carter, O. L. (2010). Pupil dilation betrays the timing of decisions. Frontiers in Human Neuroscience, 4, 18. doi:https://doi.org/10.3389/fnhum.2010.00018
Einhäuser, W., Stout, J., Koch, C., & Carter, O. (2008). Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry. Proceedings of the National Academy of Sciences, 105, 1704–1709. doi:https://doi.org/10.1073/pnas.0707727105
Feigl, B., Zele, A. J., Fader, S. M., Howes, A. N., Hughes, C. E., Jones, K. A., & Jones, R. (2012). The post-illumination pupil response of melanopsin-expressing intrinsically photosensitive retinal ganglion cells in diabetes. Acta Ophthalmologica, 90, e230–e234. doi:https://doi.org/10.1111/j.1755-3768.2011.02226.x
Gagl, B., Hawelka, S., & Hutzler, F. (2011). Systematic influence of gaze position on pupil size measurement: Analysis and correction. Behavior Research Methods, 43, 1171–1181. doi:https://doi.org/10.3758/s13428-011-0109-5
Geva, R., Zivan, M., Warsha, A., & Olchik, D. (2013). Alerting, orienting or executive attention networks: Differential patterns of pupil dilations. Frontiers in Behavioral Neuroscience, 7, 145. doi:https://doi.org/10.3389/fnbeh.2013.00145
Goldinger, S. D., & Papesh, M. H. (2012). Pupil dilation reflects the creation and retrieval of memories. Current Directions in Psychological Science, 21, 90–95. doi:https://doi.org/10.1177/0963721412436811
Graham, F. K. (1975). Presidential Address, 1974: The more or less startling effects of weak prestimulation. Psychophysiology, 12, 238–248. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/1153628
Hemmati, M. (2017). A study on the visual illusion effects on the pupillary aperture (Doctoral dissertation). Retrieved from http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8900434&fileOId=8900438
Henderson, R. R., Bradley, M. M., & Lang, P. J. (2014). Modulation of the initial light reflex during affective picture viewing. Psychophysiology, 51, 815–818. doi:https://doi.org/10.1111/psyp.12236
Hershman, R., Henik, A., & Cohen, N. (2018). A novel blink detection method based on pupillometry noise. Behavior Research Methods, 50, 107–114. doi:https://doi.org/10.3758/s13428-017-1008-1
Hershman, R., & Henik, A. (2019). Dissociation between reaction time and pupil dilation in the Stroop task. Journal of Experimental Psychology: Learning, Memory, and Cognition. https://doi.org/10.1037/xlm0000690
Hess, E. H., & Polt, J. M. (1960). Pupil size as related to interest value of visual stimuli. Science, 132, 349–350. doi:https://doi.org/10.1126/science.132.3423.349
Hess, E. H., & Polt, J. M. (1964). Pupil size in relation to mental activity during simple problem-solving. Science, 143, 1190–1192. doi:https://doi.org/10.1126/science.143.3611.1190
Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford, UK: Oxford University Press, Clarendon Press.
Jongkees, B. J., & Colzato, L. S. (2016). Spontaneous eye blink rate as predictor of dopamine-related cognitive function—A review. Neuroscience & Biobehavioral Reviews, 71, 58–82. doi:https://doi.org/10.1016/j.neubiorev.2016.08.020
Kahneman, D., & Beatty, J. (1966). Pupil diameter and load on memory. Science, 154, 1583–1585. doi:https://doi.org/10.1126/science.154.3756
Kang, O., & Wheatley, T. (2015). Pupil dilation patterns reflect the contents of consciousness. Consciousness and Cognition, 35, 128–135. doi:https://doi.org/10.1016/j.concog.2015.05.001
Karson, C. N. (1983). Spontaneous eye-blink rates and dopaminergic systems. Brain, 106, 643–653. doi:https://doi.org/10.1093/brain/106.3.643
Kass, R. E., & Raftery, A. E. (1995). Bayes Factors. Journal of the American Statistical Association, 90, 773–795. doi:https://doi.org/10.1080/01621459.1995.10476572
Klingner, J., Tversky, B., & Hanrahan, P. (2011). Effects of visual and verbal presentation on cognitive load in vigilance, memory, and arithmetic tasks. Psychophysiology, 48, 323–332. doi:https://doi.org/10.1111/j.1469-8986.2010.01069.x
Kloosterman, N. A., Meindertsma, T., van Loon, A. M., Lamme, V. A. F., Bonneh, Y. S., & Donner, T. H. (2015). Pupil size tracks perceptual content and surprise. European Journal of Neuroscience, 41, 1068–1078. doi:https://doi.org/10.1111/ejn.12859
Koelewijn, T., de Kluiver, H., Shinn-Cunningham, B. G., Zekveld, A. A., & Kramer, S. E. (2015). The pupil response reveals increased listening effort when it is difficult to focus attention. Hearing Research, 323, 81–90. doi:https://doi.org/10.1016/J.HEARES.2015.02.004
Kostic, C., Crippa, S. V., Martin, C., Kardon, R. H., Biel, M., Arsenijevic, Y., & Kawasaki, A. (2016). Determination of rod and cone influence to the early and late dynamic of the pupillary light response. Investigative Opthalmology and Visual Science, 57, 2501–2508. doi:https://doi.org/10.1167/iovs.16-19150
Kret, M. E., & Sjak-Shie, E. E. (2018). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 1–7. doi:https://doi.org/10.3758/s13428-018-1075-y
Kuchinke, L., Võ, M. L.-H., Hofmann, M., & Jacobs, A. M. (2007). Pupillary responses during lexical decisions vary with word frequency but not emotional valence. International Journal of Psychophysiology, 65, 132–140. doi:https://doi.org/10.1016/J.IJPSYCHO.2007.04.004
Laeng, B., Ørbo, M., Holmlund, T., & Miozzo, M. (2011). Pupillary Stroop effects. Cognitive Processing, 12, 13–21. doi:https://doi.org/10.1007/s10339-010-0370-z
Lanting, P., Strijers, R. L. M., Bos, J. E., Faes, T. J. C., & Heimans, J. J. (1991). The cause of increased pupillary light reflex latencies in diabetic patients: the relationship between pupillary light reflex and visual evoked potential latencies. Electroencephalography and Clinical Neurophysiology, 78, 111–115. doi:https://doi.org/10.1016/0013-4694(91)90110-P
Li, C. (2017). Extracting and visualizing data from mobile and static eye trackers in R and Matlab (Doctoral dissertation). Retrieved from https://digitalcommons.usu.edu/etd/6880
Mathôt, S. (2013). A simple way to reconstruct pupil size during eye blinks (Unpublished manuscript). doi:https://doi.org/10.6084/m9.figshare.688001
Mathôt, S. (2018). Pupillometry: Psychology, physiology, and function. Journal of Cognition, 1, 16. doi:https://doi.org/10.5334/joc.18
Mathôt, S., Fabius, J., Van Heusden, E., & Van der Stigchel, S. (2018). Safe and sensible preprocessing and baseline correction of pupil-size data. Behavior Research Methods, 50, 94–106. doi:https://doi.org/10.3758/s13428-017-1007-2
Mathôt, S., van der Linden, L., Grainger, J., & Vitu, F. (2013). The pupillary light response reveals the focus of covert visual attention. PLoS ONE, 8, e78168. doi:https://doi.org/10.1371/journal.pone.0078168
McDougal, D. H., & Gamlin, P. D. (2010). The influence of intrinsically-photosensitive retinal ganglion cells on the spectral sensitivity and response dynamics of the human pupillary light reflex. Vision Research, 50, 72–87. doi:https://doi.org/10.1016/J.VISRES.2009.10.012
Monster, A. W., Chan, H. C., & O’Connor, D. (1978). Long-term trends in human eye blink rate. Biotelemetry and Patient Monitoring, 5, 206–222. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/754827
Murphy, P. R., O’Connell, R. G., O’Sullivan, M., Robertson, I. H., & Balsters, J. H. (2014a). Pupil diameter covaries with BOLD activity in human locus coeruleus. Human Brain Mapping, 35, 4140–4154. doi:https://doi.org/10.1002/hbm.22466
Murphy, P. R., Vandekerckhove, J., & Nieuwenhuis, S. (2014b). Pupil-linked arousal determines variability in perceptual decision making. PLoS Computational Biology, 10, e1003854. doi:https://doi.org/10.1371/journal.pcbi.1003854
Naber, M., & Nakayama, K. (2013). Pupil responses to high-level image content. Journal of Vision, 13(6), 7. doi:https://doi.org/10.1167/13.6.7
Nyström, M., Andersson, R., Holmqvist, K., & van de Weijer, J. (2013). The influence of calibration method and eye physiology on eyetracking data quality. Behavior Research Methods, 45, 272–288. doi:https://doi.org/10.3758/s13428-012-0247-4
Otero, S. C., Weekes, B. S., & Hutton, S. B. (2011). Pupil size changes during recognition memory. Psychophysiology, 48, 1346–1353. doi:https://doi.org/10.1111/j.1469-8986.2011.01217.x
Partala, T., & Surakka, V. (2003). Pupil size variation as an indication of affective processing. International Journal of Human–Computer Studies, 59, 185–198. doi:https://doi.org/10.1016/S1071-5819(03)00017-X
Peirce, J. W. (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162, 8–13. doi:https://doi.org/10.1016/J.JNEUMETH.2006.11.017
Phillips, M. A., Szabadi, E., & Bradshaw, C. M. (2001). Comparison of the effects of clonidine and yohimbine on pupillary diameter at different illumination levels. British Journal of Clinical Pharmacology, 50, 65–68. doi:https://doi.org/10.1046/j.1365-2125.2000.00225.x
Preuschoff, K., ‘t Hart, B. M., & Einhäuser, W. (2011). Pupil dilation signals surprise: Evidence for noradrenaline’s role in decision making. Frontiers in Neuroscience, 5, 115. doi:https://doi.org/10.3389/fnins.2011.00115
Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56, 356–374. doi:https://doi.org/10.1016/J.JMP.2012.08.001
Schmidtke, J. (2014). Second language experience modulates word retrieval effort in bilinguals: Evidence from pupillometry. Frontiers in Psychology, 5, 137. doi:https://doi.org/10.3389/fpsyg.2014.00137
Shah, S. S., Kurup, S. P., Ralay Ranaivo, H., Mets-Halgrimson, R. B., & Mets, M. B. (2018). Pupillary manifestations of Marfan syndrome: From the Marfan Eye Consortium of Chicago. Ophthalmic Genetics, 39, 297–299. doi:https://doi.org/10.1080/13816810.2018.1424207
Siegle, G. J., Steinhauer, S. R., Carter, C. S., Ramel, W., & Thase, M. E. (2003). Do the seconds turn into hours? Relationships between sustained pupil dilation in response to emotional information and self-reported rumination. Cognitive Therapy and Research, 27, 365–382. doi:https://doi.org/10.1023/A:1023974602357
Siegle, G. J., Steinhauer, S. R., & Thase, M. E. (2004). Pupillary assessment and computational modeling of the Stroop task in depression. International Journal of Psychophysiology, 52, 63–76. doi:https://doi.org/10.1016/J.IJPSYCHO.2003.12.010
Smallwood, J., Brown, K. S., Tipper, C., Giesbrecht, B., Franklin, M. S., Mrazek, M. D., . . . Schooler, J. W. (2011). Pupillometric evidence for the decoupling of attention from perceptual input during offline thought. PLoS ONE, 6, e18298. doi:https://doi.org/10.1371/journal.pone.0018298
Snell, J., Mathôt, S., Mirault, J., & Grainger, J. (2018). Parallel graded attention in reading: A pupillometric study. Scientific Reports, 8, 3743. doi:https://doi.org/10.1038/s41598-018-22138-7
Sogo, H. (2013). GazeParser: An open-source and multiplatform library for low-cost eye tracking and analysis. Behavior Research Methods, 45, 684–695. doi:https://doi.org/10.3758/s13428-012-0286-x
Steinhauer, S. R., Condray, R., & Kasparek, A. (2000). Cognitive modulation of midbrain function: Task-induced reduction of the pupillary light reflex. International Journal of Psychophysiology, 39, 21–30. doi:https://doi.org/10.1016/S0167-8760(00)00119-7
Steinhauer, S. R., Siegle, G. J., Condray, R., & Pless, M. (2004). Sympathetic and parasympathetic innervation of pupillary dilation during sustained processing. International Journal of Psychophysiology, 52, 77–86. doi:https://doi.org/10.1016/J.IJPSYCHO.2003.12.005
Titz, J., Scholz, A., & Sedlmeier, P. (2018). Comparing eye trackers by correlating their eye-metric data. Behavior Research Methods, 50, 1853–1863. doi:https://doi.org/10.3758/s13428-017-0954-y
Turi, M., Burr, D. C., & Binda, P. (2018). Pupillometry reveals perceptual differences that are tightly linked to autistic traits in typical adults. eLife, 7, e32399. doi:https://doi.org/10.7554/eLife.32399
Unsworth, N., & Robison, M. K. (2017). The importance of arousal for variation in working memory capacity and attention control: A latent variable pupillometry study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43, 1962–1987. doi:https://doi.org/10.1037/xlm0000421
van der Wel, P., & van Steenbergen, H. (2018). Pupil dilation as an index of effort in cognitive control tasks: A review. Psychonomic Bulletin & Review, 25, 2005–2015. doi:https://doi.org/10.3758/s13423-018-1432-y
van Steenbergen, H., & Band, G. P. H. (2013). Pupil dilation in the Simon task as a marker of conflict processing. Frontiers in Human Neuroscience, 7, 215. doi:https://doi.org/10.3389/fnhum.2013.00215
van Steenbergen, H., Band, G. P. H., & Hommel, B. (2011). Threat but not arousal narrows attention: Evidence from pupil dilation and saccade control. Frontiers in Psychology, 2, 281. doi:https://doi.org/10.3389/fpsyg.2011.00281
Wainstein, G., Rojas-Líbano, D., Crossley, N. A., Carrasco, X., Aboitiz, F., & Ossandón, T. (2017). Pupil size tracks attentional performance in attention-deficit/hyperactivity disorder. Scientific Reports, 7, 8228. doi:https://doi.org/10.1038/s41598-017-08246-w
Wang, C.-A., Brien, D. C., & Munoz, D. P. (2015). Pupil size reveals preparatory processes in the generation of pro-saccades and anti-saccades. European Journal of Neuroscience, 41, 1102–1110. doi:https://doi.org/10.1111/ejn.12883
Wang, C.-A., McInnis, H., Brien, D. C., Pari, G., & Munoz, D. P. (2016). Disruption of pupil size modulation correlates with voluntary motor preparation deficits in Parkinson’s disease. Neuropsychologia, 80, 176–184. doi:https://doi.org/10.1016/j.neuropsychologia.2015.11.019
Wilhelm, B. J., Wilhelm, H., Moro, S., & Barbur, J. L. (2002). Pupil response components: Studies in patients with Parinaud’s syndrome. Brain, 125, 2296–2307.
Yellin, D., Berkovich-Ohana, A., & Malach, R. (2015). Coupling between pupil fluctuations and resting-state fMRI uncovers a slow build-up of antagonistic responses in the human cortex. NeuroImage, 106, 414–427. doi:https://doi.org/10.1016/J.NEUROIMAGE.2014.11.034
Zhegallo, A. V., & Marmalyuk, P. A. (2015). ETRAN—R extension package for eye tracking results analysis. Perception, 44, 1129–1135. doi:https://doi.org/10.1177/0301006615594944
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Eye Tribe and Tobii files
CHAP accepts a standard data file from Tobii or the Eye Tribe. In addition to the standard data file, CHAP requires a .csv log file created by the user. This .csv file (see the screenshots in Fig. 8) includes data about the trials' onsets and offsets, as well as about the variables' values and the events' onsets. The log file should have the same name as the main data file, with the addition of the suffix _events (e.g., sub01_events.csv), and it should be located in the same folder as the main data file.
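The structure of such a log file can be sketched in a few lines of Python. The column names and message strings below are illustrative assumptions only; the exact layout should follow the screenshots in Fig. 8:

```python
import csv

# Hypothetical _events log for participant sub01; column names and
# message strings here are assumptions for illustration
rows = [
    ["timestamp", "message"],
    [1000, "TRIAL_START"],            # trial onset
    [1250, "stimulus_onset"],         # a user-defined event onset
    [1250, "congruency: congruent"],  # a variable value
    [3000, "TRIAL_END"],              # trial offset
]
with open("sub01_events.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```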
Appendix B: ASL and other device files
For ASL, CHAP needs both a .asl file and a .csv file. The .asl file (see the screenshot in Fig. 9a) should be created from the standard ASL output file, and the .csv file is an external log file created by the user. This .csv log file (see the screenshot in Fig. 9b) includes data about the trials' onsets and offsets, as well as about the variables' values and the events' onsets. The log file should have the same name as the main data file, with the addition of the suffix _events (e.g., sub01_events.csv), and it should be located in the same folder as the main data file.
CHAP can also read input from other eyetrackers, as long as the user supplies files in a specific format. The user should provide two files: a .dat file (a .csv file renamed to this extension; e.g., sub01.dat) and a .csv file. The .dat file includes basic information about the pupil (size and gaze position), and the .csv file includes information about the events and variables in the experiment (see the screenshots in Fig. 9c and d). The .csv file should have the same name as the .dat file, with the addition of the suffix _events (e.g., sub01_events.csv), and it should be located in the same folder as the .dat file.
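Preparing the renamed .dat file can be sketched as follows; the column names are assumptions for illustration, and the required format is documented on CHAP's website:

```python
import csv
import os

# Hypothetical sample file for a generic eyetracker; the column names
# below are assumptions, not CHAP's documented format
with open("sub01.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "pupil_size", "gaze_x", "gaze_y"])
    writer.writerow([0, 3.1, 512, 384])

# CHAP expects the renamed .dat extension for generic-device data
os.replace("sub01.csv", "sub01.dat")
```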
Appendix C: CHAP’s installation
The installation of CHAP involves three main steps:

1. Download CHAP from https://github.com/ronenno1/chap/archive/master.zip
2. Extract the ZIP file.
3. Run the CHAP_installation.m file (this adds CHAP to the MATLAB path list).

When working with EyeLink data files:

- Users running the 32-bit version of Windows should go to CHAP's folder and manually copy the edf2mat/edfapi.dll file to C:\Windows\System32.
- Users running the 64-bit version of Windows should go to CHAP's folder and manually copy the edf2mat/edfapi64.dll file to C:\Windows\System32.
- Mac users should go to CHAP's folder, manually unzip the edf2mat/edfapi.framework.zip file, and copy edfapi.framework to /Library/Frameworks.
More information about converting EyeLink data files into MATLAB can be found here: https://github.com/uzh/edf-converter.
When working with Eye Tribe data files (.txt files produced by the default Eye Tribe interface), Python should be installed.
Appendix D: External variables and events
CHAP supports the addition of external variables using a .csv file created by the user (see the screenshot in Fig. 10a). This file should include the variable data for each trial (i.e., trial ID, variable name, and variable value). Each participant should have a .csv file with the same name as the main data file, with the addition of the suffix _vars (e.g., sub01_vars.csv). This file should be located in the same folder as the main data file. CHAP also supports the addition of external events: by using dedicated .csv files, the user can specify events that were not logged in the data file during recording (see the screenshot in Fig. 10b).
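A _vars file following the trial ID / variable name / variable value structure described above can be sketched as follows; the exact header names are assumptions, and the real layout appears in Fig. 10a:

```python
import csv

# Hypothetical _vars file for participant sub01: one row per
# trial-by-variable combination (header names are assumptions)
rows = [
    ["trial_id", "variable_name", "variable_value"],
    [1, "congruency", "congruent"],
    [2, "congruency", "incongruent"],
]
with open("sub01_vars.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```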
Appendix E: Converting to millimeters
Pupil data can be converted to millimeters by using a dedicated file (Fig. 11). This file includes the conversion data for each participant and should have the same name as the main data file, with the addition of the suffix _mm (e.g., sub01_mm.csv). It should be located in the same folder as the main data file.
Appendix F: Output data
The output data from CHAP include the average pupil size in each bin, across trials, for each condition (see the screenshot in Fig. 12). These data can be useful for advanced analyses using external code or third-party software. CHAP also provides information for each trial (see the screenshot in Fig. 13), including the variables, events, pupil size, and eye blinks.
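As an example of such a downstream analysis, the sketch below averages hypothetical bin-level output across bins per condition; the condition names and values are invented for illustration, and the real column layout appears in Fig. 12:

```python
# Hypothetical bin-averaged output (one entry per condition, one value
# per bin), standing in for the .csv file CHAP exports
bins = {
    "congruent":   [3.10, 3.15, 3.22, 3.30],
    "incongruent": [3.12, 3.20, 3.35, 3.41],
}

# Example downstream analysis: mean pupil size per condition across bins
grand_means = {cond: sum(v) / len(v) for cond, v in bins.items()}
```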
Appendix G: Group analysis
When running a group analysis, CHAP creates a .csv file that includes the number of valid trials and the number of excluded trials for each condition for each participant (Fig. 14a). In addition, CHAP provides a .csv file that includes the mean event onsets for each trial for each participant (Fig. 14b).
Cite this article
Hershman, R., Henik, A. & Cohen, N. CHAP: Open-source software for processing and analyzing pupillometry data. Behav Res 51, 1059–1074 (2019). https://doi.org/10.3758/s13428-018-01190-1