As interfaces become increasingly advanced and multimodal, ranging from complex dashboards to XR applications, traditional evaluation methods and questionnaires are often inadequate for capturing the full spectrum of user experience. This workshop aims to bring together researchers and practitioners to discuss the development of new evaluation instruments and heuristics tailored to contemporary multimodal interfaces. The expected outcome is a modular questionnaire framework that supports rigorous and flexible assessment of user experience (including both qualitative and quantitative measures) across diverse interaction modalities.
This workshop is aimed at practitioners, PhD students, and senior researchers working with advanced visual interfaces and/or multimodal data representations. Some previous experience of user-centered evaluation is beneficial, as is an interest in working towards a toolbox of methods for evaluating advanced visual and multimodal data representations.
With this workshop, we aim to foster deeper discussions and to initiate the work of collaboratively developing a modular questionnaire framework suitable for evaluating modern advanced visual interfaces.
List of Topics
Visualization, multimodal data representation, and interaction
Evaluation methods and metrics for multimodal interfaces
User-centered evaluation and user studies
Questionnaire design and heuristics
Submit your evaluation experiences
There will not be a formal submission and reviewing process for this workshop, but we invite you to submit your experiences of evaluation. The main focus should be on experiences with current evaluation methodologies (questionnaires, heuristics, etc.), but any related experience is of great interest: What went well? What did not go as planned? Was the evaluation outcome easy to interpret? Did the evaluation method answer the research questions asked?
These submissions will be reviewed by the organizers and selected based on their relevance to the workshop. Selected contributions will be presented orally by the authors (about 5 minutes) during the workshop to inspire group discussions and collaborative activities. Information about presentations will be communicated in advance of the conference (see Important Dates below).
Write a short description of your experiences (400–500 words) and submit it as a PDF file to niklas.ronnberg@liu.se by the submission deadline listed under Important Dates.
Important Dates
29th March 2026
Deadline for submission of evaluation experiences (PDF, max 400–500 words)
10th April 2026
Notification to selected contributors and information about presentations
Camilla Forsell is Associate Professor in Evaluation Methodology and Visualization at the Department of Science and Technology, Linköping University, Sweden.
Mail: camilla.forsell@liu.se
Niklas Rönnberg is Senior Associate Professor in Sound Technology at the Department of Science and Technology, Linköping University, Sweden.
Mail: niklas.ronnberg@liu.se
Workshop Format
Full-day workshop (6–7 hours).
We encourage participants to attend the entire workshop, as the collaborative discussions and group activities build on each other and are essential for achieving the intended outcomes. We are, however, flexible and can adjust the final duration in coordination with the conference organizers to ensure the best possible workshop experience.
The workshop will combine short presentations, hands-on sessions, and group discussions to actively engage attendees. Each session is approximately 30 minutes, adjustable depending on the number of submissions and the discussion flow, allowing flexibility for breaks.
Welcome and introductions: Brief introductions of all attendees.
User-centered evaluation: Overview of current methods and challenges.
Presentation of heuristics and questionnaires: Review of existing evaluation instruments.
Group work 1: Discussion and analysis of user evaluation practices and existing questionnaires.
Reporting and discussion 1: Groups share insights, followed by general discussion.
Individual or group activity (to be decided together with the attendees), ideation through affinity diagrams: attendees sketch and structure key ideas using affinity diagramming to identify gaps and patterns that will guide the subsequent group work.
Group work 2: Collaborative creation of new heuristics, questions, and statements to address identified gaps.
Reporting and discussion 2: Groups present their results, followed by plenary discussion.
Formation of new questionnaire: Consolidation of ideas into a modular questionnaire.
Planning next steps: Discussion of validation approaches for the new questionnaire (its heuristics, questions, and statements) and potential future publications. Attendees reflect on the proposed questionnaire modules and outline plans for confirmatory factor analysis and other validation procedures to ensure reliability, construct validity, and applicability across multimodal interface contexts (see the sketch after this list).
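As a minimal illustration of the reliability side of such validation, the Python sketch below computes Cronbach's alpha for one questionnaire module from a matrix of Likert-scale responses (respondents × items). The module name and response data are invented for the example; a real validation plan would complement this with confirmatory factor analysis on a properly sampled dataset.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = responses.shape[1]                         # number of items in the module
    item_vars = responses.var(axis=0, ddof=1)      # sample variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
# for an invented "perceived clarity" module.
module = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])

# Values above roughly 0.7-0.8 are commonly read as acceptable internal consistency.
print(f"Cronbach's alpha: {cronbach_alpha(module):.2f}")
```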
Resources
Here follow some existing questionnaires and resources that are, at least to some extent, relevant to the workshop and to user-centered evaluation in general:
BeauVis He, T., Isenberg, P., Dachselt, R., & Isenberg, T. (2022). BeauVis: A validated scale for measuring the aesthetic pleasure of visual representations. IEEE Transactions on Visualization and Computer Graphics, 29(1), 363-373.
Perceived Readability Evaluation in Visualization (PREvis) Cabouat, A. F., He, T., Isenberg, P., & Isenberg, T. (2024). PREVis: Perceived readability evaluation for visualizations. IEEE Transactions on Visualization and Computer Graphics.
ICE-T Value-Driven Visualization Evaluation Wall, E., Agnihotri, M., Matzen, L., Divis, K., Haass, M., Endert, A., & Stasko, J. (2018). A heuristic approach to value-driven evaluation of visualizations. IEEE Transactions on Visualization and Computer Graphics, 25(1), 491-500.
System Usability Scale (SUS) Brooke, J. (1996). SUS: A quick and dirty usability scale. In Usability Evaluation in Industry (pp. 189-194).
Questionnaire for User Interface Satisfaction (QUIS) Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 213-218).
Software Usability Measurement Inventory (SUMI) Kirakowski, J., & Corbett, M. (1993). SUMI: The software usability measurement inventory. British Journal of Educational Technology, 24(3), 210-212.
NASA Task Load Index (NASA TLX) Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology (Vol. 52, pp. 139-183). North-Holland.
After-Scenario Questionnaire (ASQ) Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
Post-Study System Usability Questionnaire (PSSUQ) Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
Computer System Usability Questionnaire (CSUQ) Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
UMUX-LITE Lewis, J. R., Utesch, B. S., & Maher, D. E. (2013). UMUX-LITE: when there's no time for the SUS. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2099-2102).
Component model of User Experience (meCUE) Minge, M., Thüring, M., & Wagner, I. (2016). Developing and validating an English version of the meCUE questionnaire for measuring user experience. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 60, No. 1, pp. 2063-2067). Los Angeles, CA: SAGE Publications.
meCUE2.0 Minge, M., & Thüring, M. (2018). The meCUE questionnaire (2.0): Meeting five basic requirements for lean and standardized UX assessment. In International Conference of Design, User Experience, and Usability (pp. 451-469). Cham: Springer International Publishing.
Immersive Experience Questionnaire (IEQ) Rigby, J. M., Brumby, D. P., Gould, S. J., & Cox, A. L. (2019). Development of a questionnaire to measure immersion in video media: The Film IEQ. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (pp. 35-46).
Game Engagement Questionnaire (GEQ) Brockmyer, J. H., Fox, C. M., Curtiss, K. A., McBroom, E., Burkhart, K. M., & Pidruzny, J. N. (2009). The development of the Game Engagement Questionnaire: A measure of engagement in video game-playing. Journal of Experimental Social Psychology, 45(4), 624-634.
User Engagement Scale (UES) Wiebe, E. N., Lamb, A., Hardy, M., & Sharek, D. (2014). Measuring engagement in video game-based environments: Investigation of the User Engagement Scale. Computers in Human Behavior, 32, 123-132; O’Brien, H. L., Cairns, P., & Hall, M. (2018). A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. International Journal of Human-Computer Studies, 112, 28-39.
Auditory Interface User Experience test (the BUZZ) Tomlinson, B. J., Noah, B. E., & Walker, B. N. (2018). Buzz: An auditory interface user experience scale. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-6).
Desire for Aesthetics Scale (DFAS) Lundy, D. E., Schenkel, M. B., Akrie, T. N., & Walker, A. M. (2010). How important is beauty to you? The development of the Desire for Aesthetics Scale. Empirical Studies of the Arts, 28(1), 73-92.
Centrality of Visual Product Aesthetics (CVPA) Bloch, P. H., Brunel, F. F., & Arnold, T. J. (2003). Individual differences in the centrality of visual product aesthetics: Concept and measurement. Journal of Consumer Research, 29(4), 551-565.
Aesthetic Emotions Scale (AESTHEMOS) Schindler, I., Hosoya, G., Menninghaus, W., Beermann, U., Wagner, V., Eid, M., & Scherer, K. R. (2017). Measuring aesthetic emotions: A review of the literature and a new assessment tool. PLoS ONE, 12(6), e0178899.
Aesthetic Experience Questionnaire (AEQ) Wanzer, D. L., Finley, K. P., Zarian, S., & Cortez, N. (2020). Experiencing flow while viewing art: Development of the Aesthetic Experience Questionnaire. Psychology of Aesthetics, Creativity, and the Arts, 14(1), 113.