Our recent study demonstrated that the timing of display changes in online experiments can be made more precise by taking monitors' refresh cycles into account (using RAF loops). For the present follow-up study, we seek upcoming regular online response-time
experiments whose primary goal is to measure common psychological
phenomena (e.g., a Stroop effect), so that we can assess the extent of timing
improvement in the case of real experimental results (e.g., a larger overall Stroop effect due to lower within-test variance). In each contributing experiment, we will implement two different timing mechanisms: (a) the inferior
one (with no RAF) that ignores the refresh cycle, and (b) the superior new one (with RAF) that takes
refresh cycles into account and hence provides more precise results. For each display
change in each experiment, the underlying code will store both timings, so that the experiment leaders can use the superior timing data for the primary purpose of their study (e.g., estimating the Stroop effect or its
modulations under given conditions), and may simply disregard the other timing data. For
our project, however, the aim is solely the meta-analytical comparison of the two timing mechanisms; it does not concern the studies' original purposes. Each contributing study
will thereby serve two purposes at the same time, with very little extra effort (see below).
Any response-time experiment (requiring fast key- or button-press responses, typically below one second) is suitable for us where there is a clear expectation of a substantial difference between the response times to any two stimulus types (e.g., between compatible and incompatible stimuli in the Stroop task – here we keep using the Stroop task as an archetypal behavioral response-time task example, but it could be anything else). It does not matter whether this expected effect is the main effect of interest. For example, the experimenters may be interested in modulating the Stroop effect (which may or may not prove successful), but for us it is sufficient to have the Stroop effect itself, which can then be used for the comparison of the two timing mechanisms. (Note that we want to assess true effects, and therefore we cannot include studies that in the end find no significant difference between any of the response types.)
The experiment should be of substantial scale – for example, studies with fewer than 50 expected participants are generally not suitable.
What you would need to do is implement the two timing
measurements: both source code and instructions for it can be found here
(but we can still help with it). For the "inferior" timing, the timestamp should simply be taken
outside (before or after) the RAF call, hence ignoring the refresh cycle. Both
timings should be saved for
each relevant display change. In the end, you should use the superior
timing measurement for the original purposes of your study, while on our end we will use the data from your experiment solely to compare the two timing mechanisms.
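As a minimal sketch of what storing the two timings could look like (the function and variable names below are our own illustration, not the project's actual code): for one display change, one timestamp is taken outside the requestAnimationFrame (RAF) call, ignoring the refresh cycle ("inferior"), and one is taken inside the RAF callback, aligned with the next repaint ("superior").

```javascript
// Hypothetical helper: records both timestamps for a single display change.
// "raf" and "now" are passed in so the same logic works in a browser
// (requestAnimationFrame, performance.now) or in a test with fakes.
function recordDisplayChange(raf, now, applyChange) {
  const record = {};
  record.timeNoRAF = now(); // "inferior": taken immediately, outside RAF
  raf((frameTime) => {
    applyChange();              // e.g., swap the stimulus into view
    record.timeRAF = frameTime; // "superior": aligned with the repaint
  });
  return record;
}

// In a browser, this would be called as:
//   recordDisplayChange(requestAnimationFrame,
//                       () => performance.now(),
//                       drawStimulus);
```

Both values would then be written into the trial's data row, so that either timing can later be used for RT calculation.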
The data and information we need
At the end of the data collection for your experiment, we would ask for the following.
All relevant data collected during the experiment. (If you prefer, you can exclude all data or information not related to display timing – see below for what is relevant.)
A very short description of the chief effect, e.g.: "The RT difference between compatible and incompatible stimuli in a regular Stroop task."
The names of the relevant RT columns: (a) display time with RAF, (b) display time with no RAF, and (c) external (keyboard/mouse) response time.
The name of the column that distinguishes the two stimulus types to be compared. For example: a "stim_type" column with "compatible" and "incompatible" values.
Column names and other information necessary for filtering data, if applicable (e.g., to exclude practice data or incorrect responses).
We would also welcome, but do not strictly need, from each test: browser brand and version, OS brand and version, location (the country in which the participant took the test), and the date of starting and/or finishing the task.
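To illustrate the data we need, a single trial row might look like the following (all column names here are hypothetical examples of ours; your own names can differ – you simply tell us which is which), along with how the per-trial RT would be derived under each timing mechanism:

```javascript
// Hypothetical trial row; every column name is illustrative only.
const exampleTrial = {
  stim_type: "incompatible", // distinguishes the compared stimulus types
  disp_raf: 15033,           // (a) display time with RAF (ms)
  disp_no_raf: 15027,        // (b) display time with no RAF (ms)
  response_time: 15619,      // (c) keyboard/mouse response time (ms)
  phase: "main",             // e.g., to filter out practice trials
  correct: true              // e.g., to exclude incorrect responses
};

// Per-trial RT under each timing mechanism:
const rtRAF = exampleTrial.response_time - exampleTrial.disp_raf;      // 586
const rtNoRAF = exampleTrial.response_time - exampleTrial.disp_no_raf; // 592
```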
For your contribution, you would be offered co-authorship on our eventual paper. Your study will also be cited (and very briefly described, as provided by the authors).
The project is organized by the iScience team at the University of Konstanz (head of group: Prof. Dr. Ulf-Dietrich Reips), in collaboration with the Department of
Methodology and Statistics at Tilburg University (Dr. Bennett Kleinberg).
If you are interested in contributing or have any questions, please contact Gáspár Lukács.