Comparing the Effects of Visual, Haptic, and Visuohaptic Encoding on Memory Retention of Digital Objects in Virtual Reality
This repository contains the experimental data for the NordiCHI '24 full paper "Comparing the Effects of Visual, Haptic, and Visuohaptic Encoding on Memory Retention of Digital Objects in Virtual Reality".
Although Virtual Reality (VR) has undoubtedly improved human interaction with 3D data, users still face difficulties retaining important details of complex digital objects in preparation for physical tasks. To address this issue, we evaluated the potential of visuohaptic integration to improve the memorability of virtual objects in immersive visualizations. In a user study (N=20), participants performed a delayed match-to-sample task where they memorized stimuli of visual, haptic, or visuohaptic encoding conditions. We assessed performance differences between the conditions through error rates and response time. We found that visuohaptic encoding significantly improved memorization accuracy compared to unimodal visual and haptic conditions. Our analysis indicates that integrating haptics into immersive visualizations enhances the memorability of digital objects. We discuss its implications for the optimal encoding design in VR applications that assist professionals who need to memorize and recall virtual objects in their daily work.
The paper is available on arXiv.
- results.csv, available under the Creative Commons Public Domain Dedication (CC0), contains the experimental results from consenting anonymous participants and was collected by Lucas Rodrigues.
- NASA-TLX.csv, available under the Creative Commons Public Domain Dedication (CC0), contains the workload self-assessment results from consenting anonymous participants and was collected by Lucas Rodrigues.
- MatchToSampleExperiment, available under the Creative Commons Public Domain Dedication (CC0), is the open-source Unity project created by Lucas Rodrigues and used for data collection.
The source code of this work (i.e., Python scripts and Jupyter notebooks) is made available under the terms of the GNU GPLv3. It is also fully available on the Open Science Framework.
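As a minimal sketch of working with the data, the snippet below computes the per-condition error rate and mean response time, the two measures reported in the paper. The column names (`Condition`, `Correct`, `ResponseTime`) and the inline rows are illustrative assumptions, not the actual schema of results.csv; check the CSV header before adapting this.

```python
import pandas as pd

# Hypothetical miniature stand-in for results.csv; the real analysis
# would use: df = pd.read_csv("results.csv")
df = pd.DataFrame({
    "Condition": ["Visual", "Haptic", "Visuohaptic", "Visual"],
    "Correct": [1, 0, 1, 1],          # 1 = correct match, 0 = error
    "ResponseTime": [2.1, 3.4, 2.8, 1.9],  # seconds
})

# Error rate is the share of incorrect trials per encoding condition;
# response time is averaged over all trials in that condition.
summary = df.groupby("Condition").agg(
    error_rate=("Correct", lambda s: 1 - s.mean()),
    mean_rt=("ResponseTime", "mean"),
)
print(summary)
```

The same grouping pattern extends to per-participant aggregates by adding a participant identifier column to the `groupby` key.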
Below is the BibTeX entry to cite the data set.
```bibtex
@misc{siqueira_rodrigues_data_2023,
  title     = {Data from "{Comparing} the {Effects} of {Visual}, {Haptic}, and {Visuohaptic} {Encoding} on {Error} {Rate} and {Response} {Time} in {Virtual} {Reality}"},
  url       = {https://github.com/lsrodri/VHMatch},
  publisher = {https://github.com/lsrodri/VHMatch},
  author    = {Siqueira Rodrigues, Lucas and Schmidt, Timo T. and Nyakatura, John and Zachow, Stefan and Israel, Johann Habakuk and Kosch, Thomas},
  year      = {2023},
}
```
The author acknowledges the support of the Cluster of Excellence »Matters of Activity. Image Space Material« funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2025 – 390648296.
Copyright © 2024. Cluster of Excellence Matters of Activity. All rights reserved.