Tutorials

On Sunday, 25 August, tutorials will take place at the Sir Duncan Rice Library on the University of Aberdeen campus (towards the north end of the city). There will be six sessions in total, running in three time slots with two tutorials in parallel in each slot. Since two tutorials always run in parallel, you can sign up for a maximum of three sessions.
Below, you will find all the tutorials along with a short abstract. We will run a sign-up process for the tutorials at the beginning of July.

Tutorial sessions 9:30am-11:30am (at full capacity)

A) Frequency-tagging in visual attention research (at full capacity)
Organisers: Nika Adamian & Søren Andersen (Liverpool John Moores University, UK & University of Southern Denmark, Denmark)

Steady-state visual evoked potentials (SSVEPs) are a powerful tool for investigating visual processing in multi-element displays. ‘Frequency-tagging’ individual elements at specific flicker frequencies allows for unambiguous quantification of the visual processing of each constituent part of the stimulation and enables high-SNR measurements of attentional allocation. This tutorial will cover the basics of frequency-tagging in visual attention research. We will explain how the experimental paradigm and the analysis go hand in hand by walking through the entire process, from creating stimuli to EEG data analysis and quantification of attention effects. Attendees are encouraged to bring their own laptops to follow the MATLAB coding parts of the tutorial.
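
For a flavour of the core idea (the tutorial itself works in MATLAB), here is a minimal NumPy sketch: two elements flicker at hypothetical tag frequencies, and the amplitude spectrum of a simulated EEG channel is read out at those frequencies. All frequencies, durations and amplitudes below are illustrative assumptions, not values used in the tutorial.

```python
# Minimal frequency-tagging sketch: SSVEP amplitude at each tag frequency
# indexes processing of the corresponding stimulus element.
import numpy as np

fs = 500.0                      # sampling rate (Hz), assumed
duration = 10.0                 # trial length (s), assumed
t = np.arange(0, duration, 1 / fs)
f_left, f_right = 8.57, 12.0    # hypothetical tag frequencies for two elements

# Simulated single-channel EEG: the attended (left) element evokes a larger response.
eeg = (1.0 * np.sin(2 * np.pi * f_left * t)
       + 0.4 * np.sin(2 * np.pi * f_right * t)
       + np.random.randn(t.size))          # broadband noise

# Amplitude spectrum via FFT; read out the SSVEP amplitude at each tag frequency.
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("left", f_left), ("right", f_right)]:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f:.2f} Hz ({label} element): {amp:.2f}")
```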

B) Creating perception studies to run in lab and online using PsychoPy and Pavlovia (at full capacity)
Organisers: Rebecca Hirst & Kimberly Dundas (University of Nottingham, UK & Open Science Tools Ltd)

PsychoPy is an open-source Python library for creating flexible experiments to study behaviour, perception and psychophysics (Peirce et al., 2019). In this workshop we will walk through the basics of creating an experiment using PsychoPy’s Builder interface, and show how it can be extended with Python code snippets. We will then demonstrate how experiments can be ported to JavaScript (via PsychoPy’s sister library, PsychoJS) for online implementation, and how they can be hosted via Pavlovia.org, a secure GitLab-based server that supports Git features such as version control and collaborative projects. We will discuss the specific considerations involved in running visual psychophysics experiments online, including screen scaling, viewing-distance checks and remote gamma correction. In this workshop we hope to share tips for creating perception-based studies to run in the lab and online using PsychoPy and Pavlovia.
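
As a taster of the kind of code-level control the workshop builds on, here is a minimal PsychoPy sketch of a single trial written directly in Python (the workshop itself starts from the Builder interface). Window settings, stimulus parameters and response keys are illustrative assumptions rather than anything prescribed by the tutorial.

```python
# A single-trial sketch using PsychoPy: draw a grating, flip, wait for a key.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), units="deg", monitor="testMonitor", fullscr=False)
grating = visual.GratingStim(win, tex="sin", sf=2, size=4, ori=45)

grating.draw()
win.flip()                                                   # present the stimulus
keys = event.waitKeys(keyList=["left", "right", "escape"])   # collect a response
print("Response:", keys)

win.close()
core.quit()
```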

Tutorial sessions 12:30pm-2:30pm

C) Creating reproducible multi-element displays for your perception research using OCTA
Organiser: Eline Van Geert (KU Leuven, Belgium)

The Order & Complexity Toolbox for Aesthetics (OCTA; Van Geert, Bossens, & Wagemans, 2023) is an open-source Python toolbox and point-and-click online application to create multi-element displays, with tools to manipulate regularity (order) and variety (complexity) along multiple element features (e.g., shape, size, color, orientation). OCTA also allows images, complex shapes, or dynamic feature changes (e.g., in color or orientation) to be included in the stimuli. By attending this tutorial, you will become familiar with both the basic and more advanced functionalities of OCTA. The tutorial will also introduce how these OCTA stimuli can be used in different types of online experiments.
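
The sketch below is not OCTA’s own API; it is just a rough matplotlib illustration of the kind of manipulation the toolbox makes systematic: a grid of elements whose orientation feature is either varied regularly (order) or sampled randomly (complexity). Grid size and feature values are arbitrary assumptions.

```python
# Two multi-element displays: regular vs. random orientation across a grid.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_rows, n_cols = 5, 5
fig, axes = plt.subplots(1, 2, figsize=(8, 4))

for ax, mode in zip(axes, ["ordered", "random"]):
    for r in range(n_rows):
        for c in range(n_cols):
            # Orientation feature: regular alternation vs. random variety.
            ori = (r + c) % 2 * 90 if mode == "ordered" else rng.uniform(0, 180)
            dx, dy = 0.3 * np.cos(np.radians(ori)), 0.3 * np.sin(np.radians(ori))
            ax.plot([c - dx, c + dx], [r - dy, r + dy], color="black", lw=3)
    ax.set_title(mode)
    ax.set_aspect("equal")
    ax.axis("off")

plt.show()
```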

D) Visual attention in the wild: A hands-on tutorial in deep learning-powered wearable eye tracking (at full capacity)
Organiser: Pupil Labs

In this tutorial, participants will explore visual attention and search behaviour using Neon, a wearable eye tracker. They will receive training on wearable eye-tracking technology, with a focus on how deep learning enables calibration-free and robust gaze estimation. They will then engage in a practical exercise involving a visual search and navigation paradigm in a real-world setting. Participants will learn how to analyse the data they recorded, including extracting fixation and saccade metrics, to gain insights into visual behaviour (participants should bring their own laptops for this part of the tutorial). This hands-on experience will equip participants with practical skills and ideas for incorporating wearable eye tracking into their own research.
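
As a hint of what the analysis step can involve, the sketch below applies a simple velocity-threshold pass to simulated gaze samples to separate saccadic from fixation samples. It does not use Pupil Labs’ own tooling, and the column names, sampling rate and 30 deg/s threshold are assumptions for illustration only.

```python
# Velocity-threshold classification of gaze samples into saccadic vs. fixation.
import numpy as np
import pandas as pd

# Hypothetical gaze recording: timestamps (s) and gaze position in degrees.
gaze = pd.DataFrame({
    "t": np.arange(0, 2, 0.005),
    "x": np.concatenate([np.full(200, 0.0), np.full(200, 5.0)]) + np.random.randn(400) * 0.1,
    "y": np.random.randn(400) * 0.1,
})

dt = np.diff(gaze["t"])
velocity = np.hypot(np.diff(gaze["x"]), np.diff(gaze["y"])) / dt   # deg/s
is_saccade = velocity > 30.0                                        # assumed threshold

print("Samples flagged as saccadic:", int(is_saccade.sum()))
print("Approx. time spent fixating (s):", float(dt[~is_saccade].sum()))
```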

Tutorial sessions 2:30pm-4:30pm (at full capacity)

E) Multi-level modelling & data visualisation in R (at full capacity)
Organisers: Anna Hughes & Alasdair Clarke (University of Essex, UK)

Designed for intermediate R users, this course will delve into the intricacies of multi-level models for visual perception research, providing you with the knowledge and tools to handle and visualise complex hierarchical data structures effectively. In the session we will cover model design (how to decide which models to fit, and when a variable should be treated as a random effect), model comparison and hypothesis testing, troubleshooting, and making publication-ready plots. We will go beyond textbook material, using real research data that presents a range of challenges regularly encountered in experimental analyses. The session will combine lecture material, including worked examples, with more interactive parts. Please bring a laptop and make sure you have R and RStudio installed before the session. You will get more from the course if you are already familiar with defining and fitting simple linear models in R and with the basics of the ggplot2 package.
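
Purely to illustrate the kind of model structure the session covers (the tutorial itself uses R), here is a rough Python analogue with statsmodels: a mixed-effects model with a fixed effect of a predictor and a random intercept per participant, fitted to simulated data. Variable names and effect sizes are made up.

```python
# Multi-level (mixed-effects) model sketch: rt ~ contrast, random intercept per subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_trials = 20, 50
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_trials),
    "contrast": np.tile(np.linspace(0.1, 1.0, n_trials), n_subjects),
})
subject_offset = rng.normal(0, 0.2, n_subjects)[data["subject"]]   # per-subject intercept shift
data["rt"] = 0.5 - 0.2 * data["contrast"] + subject_offset + rng.normal(0, 0.05, len(data))

model = smf.mixedlm("rt ~ contrast", data, groups=data["subject"]).fit()
print(model.summary())
```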

F) Open and FAIR stimulus creation with stimupy (at full capacity)
Organisers: Lynn Schmittwilken & Joris Vincent (TU Berlin, Germany)

Stimuli are at the heart of vision science, yet they are not always openly accessible. Stimupy (Schmittwilken, Maertens, & Vincent, 2023) tackles this problem and makes stimulus creation findable, accessible, interoperable, and reusable (FAIR). Stimupy is an open-source Python package for creating two-dimensional stimuli to test and/or control aspects of early and mid-level vision, including shapes, gratings, visual illusions, and noise textures. In this tutorial, we introduce the FAIR principles in the context of stimulus creation, and show how you can use Stimupy for a wide range of research purposes, such as experimentation, modelling, replication, and the exploration of stimulus parameter spaces.
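
The snippet below is not stimupy’s own API; it is a bare NumPy sketch of what a parameterised two-dimensional stimulus looks like when written as a reusable function, with size, spatial frequency and orientation exposed as explicit parameters, in the spirit of the package.

```python
# A parameterised 2D stimulus: sinusoidal grating returned as a NumPy array.
import numpy as np

def sine_grating(size_px=256, cycles=8, orientation_deg=0.0):
    """Return a 2D sinusoidal grating with values in the range [0, 1]."""
    y, x = np.mgrid[0:size_px, 0:size_px] / size_px
    theta = np.radians(orientation_deg)
    ramp = x * np.cos(theta) + y * np.sin(theta)     # oriented spatial ramp
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * ramp)

grating = sine_grating(size_px=256, cycles=8, orientation_deg=45)
print(grating.shape, grating.min(), grating.max())
```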

Sir Duncan Rice Library on the Old Aberdeen Campus