Public Talks Schedule

Meet Us There!

Public Talks: 27th - 29th April 2024

At a Glance

PRAID Conference 2024 offers an immersive experience across three days of intensive collaboration and sharing. Each morning will feature 30-minute presentations by leading researchers, followed by 15-minute Q&A sessions. These talks will cover three approaches to studying perception:

      1. Representational
      2. Sensorimotor
      3. Dynamical

Day 1: Saturday, April 27th

8:45 AM - 9:00 AM
Goodes Hall, Room 151
Opening Remarks
10:30 AM - 10:45 AM
Goodes Atrium
Coffee Break
10:45 AM - 11:30 AM
Goodes Hall, Room 151

Day 2: Sunday, April 28th

09:00 AM - 09:45 AM
Goodes Hall, Room 151
09:45 AM - 10:30 AM
Goodes Hall, Room 151
10:30 AM - 10:45 AM
Goodes Atrium
Coffee Break
11:30 AM - 12:15 PM
Goodes Hall, Room 151
Q & A

Day 3: Monday, April 29th

10:30 AM - 10:45 AM
Goodes Atrium
Coffee Break
10:45 AM - 11:30 AM
Goodes Hall, Room 151
Summary Discussion
11:30 AM - 11:45 AM
Goodes Hall, Room 151
Closing Remarks

Abstracts

Discover more about our talks!

Talk 1: The Eyes Are the Windows to the Mind: Implications for AI-Driven Personalized Interaction

Cristina Conati
Eye-tracking has been used extensively both in psychology, for understanding various aspects of human cognition, and in human-computer interaction (HCI), for evaluating interface design or as a form of direct input. In recent years, eye-tracking has also been investigated as a source of information for machine learning models that predict relevant user states and traits (e.g., attention, confusion, learning, perceptual abilities). These predictions can then be leveraged by AI agents to model their users and personalize the interaction accordingly. In this talk, Dr. Conati will provide an overview of the research her lab has done in this area, including detecting and modeling user cognitive skills and affective states, with applications to user-adaptive visualizations, intelligent tutoring systems, and health.

Talk 2: Researchers Comparing DNNs to Brains Need to Adopt Standard Methods of Science

Jeffrey Bowers
Deep neural networks (DNNs) developed in computer science are successful in a range of vision tasks and can predict brain activations of humans (and macaques) better than alternative models. This has led to the common claim that DNNs are the best models of biological vision. Here I show that the success of these models in predicting brain activations is a poor metric for judging the similarity of DNNs and brains; indeed, these models account for few findings in psychology in the domain of vision, and they show similar problems when it comes to language. To compare DNNs and brains more usefully, researchers need to run experiments that manipulate independent variables to test hypotheses.

Talk 3: Subjective Perspectives in Humans and AI

Susanna Schellenberg
Ego4D is a new Meta program that is supposed to teach AI how to have a subjective perspective. As Grauman, the lead researcher of Ego4D, describes the project: “For AI systems to interact with the world the way we do, the AI field needs to evolve in an entirely new paradigm of first-person perspective.” Can AI have a subjective perspective? I argue that yes, it can. Any artificial or biological organism has a perspective on its environment as a consequence of the cognitive, emotional, perceptual, and behavioral schemas (or lack thereof) with which it processes, organizes, interprets, and responds to the information it receives. According to this lens view of perspectives, a perspective is a spatiotemporally located information processing mechanism. I develop this view of perspectives and show that there are many elements of subjective perspectives, each of which can be more or less complex and many of which come in degrees. AI already has many of the elements that constitute our subjective perspectives. Questions permeating the project include: How do your perspective and mine differ, and how are they the same? How does the perspective of us humans differ from that of an AI or a less rational animal? And how are our perspectives the same as those of a robot or a rat?

Talk 4: Sensorimotor transformations: where network neuroscience meets explainable AI

Gunnar Blohm
Movement defines who we are and what we do. Any sensory-guided movement requires a transformation of afferent signals into movement codes specific to the effector. This transformation involves – among others – taking the body geometry into account (Blohm & Crawford 2007). For example, spatial localization of a visual object depends on the line of sight, and reaching motor commands depend on the current posture of the arm. But how can neurons perform these computations, and where in the brain do they happen? To address this, we have developed a neural network and analyzed its emergent properties (Blohm et al., 2009; Blohm 2012) – using techniques we would nowadays call “explainable AI”. This model predicted a gradual, feed-forward sensorimotor transformation across brain areas rather than specialized brain areas performing recurrent computations for different aspects of this transformation (as previously posited). We recently tested these predictions in a whole-brain magnetoencephalography (MEG) experiment (Blohm et al., 2019; 2022; in preparation). Using MEG combined with a pro-/anti-pointing task, we asked when and where sensory signals were transformed into motor commands, when and where effector specificity (left vs. right arm) was integrated into this motor command, and when and where motor signals became muscle-specific – so-called intrinsic coding, as opposed to extrinsic (spatial) coding of movement. This network neuroscience analysis confirmed our predictions, but also pointed toward potential misconceptions in our current knowledge of the sensorimotor system. That is, the feed-forward transformations seem to be followed by feedback processes that update our internal representations of movement intentions, and movement commands seem to be specified first in muscle coordinates rather than spatial coordinates, as previously believed. Overall, this theory-driven line of research demonstrates the value of combining modern machine learning approaches with neuroscientific experiments to generate testable hypotheses.

Talk 5: Human Perception: Integrating Signals

Katharina Schwarz
Perception can be seen as the integration of various sensory signals into a percept, a mental representation of the world around us.

Here, I argue that the signals we integrate are, in fact, shaped so strongly by our own predictions and expectations, by our experiences, by attentional processes, or even just by our own mental state, that this process might be more akin to a “creation” than a “recreation”. And even though representational overlap may be great at times, representations of the world will still be as varied as the minds that create them. Moreover, perception is not isolated from its purpose, and to understand human percepts, we need to understand the situations in which they were created and the goals they were meant to facilitate. Consequently, human perception is not an objective endeavor, and as such it differs from perceptive qualities in machines or AI environments.

Talk 6: Becoming Artificial: Human Experience in the Wake of Artificial Intelligence

Donald Landes
Do computational processes adequately model human cognition? Might AI systems develop something like human experience? Can human ethical values and AI be aligned? These questions assume that the challenge is to compare or align two independent entities that are externally related. But is this the right starting point? My research, which is situated within the Merleau-Pontian phenomenological tradition, suggests some additional considerations. On the one hand, there are already “artificial” processes at work within certain regional forms of human cognition or perception. On the other hand, the supposedly self-evident and relatively stable entity of “human experience” is anything but self-evident and stable. Human experience emerges from a vulnerable set of practices, developed and evolving across both personal and historical time, such that what we do and the fields of intelligent activity we engage with reshape what we are. Merleau-Ponty once suggested that if “operational” thinking came to dominate humanity, then the human being might well become the mere manipulandum that it is assumed to be. As such, insisting upon the embodiment of cognition and perception is important, but does not in itself protect human experience from potentially radical changes. Surely the proliferation of AI across the spectrum of human activity reshapes lived experience—what are the existential implications of this “becoming artificial”? To explore this evolving internal relation between AI and human experience, I consider the example of AI-generated art and what it might mean for the phenomenology of creativity.

Talk 7: Meta-Physical Theatre: Designing ‘Physical’ Interactions in ‘Virtual’ Reality Live Performances using Robotics

Matthew Pan & Michael Wheeler
Virtual reality (VR) promises compelling experiences that allow users to explore the metaverse, yet it often falls short in providing truly tangible interactions, especially with virtual characters. This gap in immersion diminishes the potential impact of storytelling and limits the inclusivity of VR experiences. Our research endeavours to push the boundaries of VR by introducing dynamic physicality into these virtual domains. This interdisciplinary project confronts challenges in merging VR and human-robot interaction to simulate touch in virtual environments. We face questions of feasibility and fidelity, such as whether robots can authentically replicate physical, emotive contact with people, objects, and environments. Moreover, we must navigate the uncharted territory of designing robots capable of controlled physical interactions with users, contrary to their typical function of collision avoidance. We envision that the rewards of overcoming these challenges will be immense. By imbuing virtual experiences with physical interactions, we unlock new avenues for storytelling. Immersive experiences that can replicate a dynamic world with tangible physical attributes open up an entirely new domain for artistic creation and audience experience. Our work lays the groundwork for conducting these interactions effectively, offering potential applications across domains such as mental and physical therapy, telepresence for social connectedness, and immersive education and training.

Talk 8: Transparent AI: Bridging the gap between machine learning and human understanding

Ting Hu
Machine learning has the remarkable capability to uncover intricate patterns and relationships within data. However, the consequential decisions made based on these model predictions can profoundly affect human lives. As machine learning models find their way into high-stakes domains such as medicine, job hiring, and criminal justice, concerns about fairness, transparency, and accountability have rightfully emerged. In response, there is a growing need not only to create highly accurate prediction models but also to comprehend and elucidate the inner workings of AI systems and their decision-making. In this talk, we will delve into the landscape of transparent AI and related topics, including explainability and interpretability. We will discuss how techniques can be developed to make AI models more understandable. Additionally, we will explore the exciting research potential at the intersection of AI, psychology, and cognitive science, highlighting the importance of integrating interdisciplinary perspectives to address the challenges and opportunities in building transparent AI systems.

Don't Miss Out!