A central question in cognitive neuroscience is whether the brain's language and visual networks represent meaning similarly. In other words, is the neural representation of a concept, such as "a person eating an apple," similar when it is perceived visually versus read as text? A previous pilot study used Representational Similarity Analysis to answer this question and yielded null results, suggesting these networks lack a shared neural code. The current study re-examined that conclusion by applying more flexible alignment techniques to the original fMRI data, in which participants viewed images and read matching captions in separate tasks. Both Procrustes analysis and Gaussian-Copula Mutual Information revealed a degree of alignment, indicating a shared representational structure with both geometric (linear) and nonlinear components. The alignment was consistently strongest between the language network and the higher visual cortex, as opposed to the earlier visual areas V1 and V4, pointing to a shared code for abstract semantic information. In contrast, Canonical Correlation Analysis overfitted the data and failed to detect this alignment. These findings suggest some alignment between the vision and language networks, but its detection depends on the technique used. However, because the signal was much weaker in the language task than in the vision task, these comparisons require cautious interpretation. Future work with a larger sample, improved task engagement, and improved data preprocessing is needed to confirm whether these results are robust.
Keywords: fMRI, Alignment, Procrustes, Canonical Correlation Analysis, Gaussian-Copula Mutual Information
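For readers unfamiliar with the two measures that detected alignment, the following is a minimal sketch, not the study's pipeline, of how a Procrustes fit and a Gaussian-copula mutual-information estimate can be computed between two response matrices. The matrices X and Y, their dimensions, the noise level, and the helper copula_normalize are all hypothetical stand-ins for illustration.

```python
# Illustrative sketch only: synthetic stand-ins for two networks' responses,
# not the study's data or preprocessing.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 20))  # hypothetical: 60 stimuli x 20 components
Y = X @ rng.standard_normal((20, 20)) + 0.5 * rng.standard_normal((60, 20))

# Procrustes: find the orthogonal rotation R that best maps X onto Y,
# then score the geometric fit as variance explained after rotation.
R, _ = orthogonal_procrustes(X, Y)
aligned = X @ R
procrustes_fit = 1 - np.sum((aligned - Y) ** 2) / np.sum((Y - Y.mean(0)) ** 2)

def copula_normalize(a):
    # Gaussian-copula step: rank-transform values, then map ranks
    # onto a standard normal distribution.
    ranks = rankdata(a)
    return norm.ppf(ranks / (a.shape[0] + 1))

# GCMI between one component from each network; for two Gaussian-copula
# variables with correlation r, MI = -0.5 * log2(1 - r^2) bits.
cx, cy = copula_normalize(X[:, 0]), copula_normalize(Y[:, 0])
r = np.corrcoef(cx, cy)[0, 1]
gcmi_bits = -0.5 * np.log2(1 - r ** 2)

print(f"Procrustes fit: {procrustes_fit:.3f}, GCMI: {gcmi_bits:.3f} bits")
```

The rank-based copula normalization makes the mutual-information estimate insensitive to monotonic transformations of each variable, which is why GCMI can register dependence that a purely linear fit would miss.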