MultiMod: A platform for qualitative analysis of multimodal learning analytics.
Multimodal analytics are increasingly important for research on learning, equity, and justice in our digitally-mediated world. However, using multimodal analytics in research raises challenges of data selection, analysis, technology, and ethics. We present MultiMod, a platform for aggregating, selecting, analyzing, and presenting multimodal texts. A case study of multimodal sensemaking, intersubjectivity, and collaboration in Minecraft illustrates how MultiMod supports our research team's qualitative analysis and allows annotated simulation as a mode of sharing research findings.
APA
Proctor, C., & Mawer, D. (2023). MultiMod: A platform for qualitative analysis of multimodal learning analytics. In Building knowledge and sustaining our community, Proceedings of the 16th International Conference on Computer-Supported Collaborative Learning - CSCL 2023. Montreal, Canada: International Society of the Learning Sciences.
Bibtex
@inproceedings{proctor2023multimod,
  title = {{{MultiMod}}: A Platform for Qualitative Analysis of Multimodal Learning Analytics},
  booktitle = {Building Knowledge and Sustaining Our Community, Proceedings of the 16th {{International Conference on Computer-Supported Collaborative Learning}} - {{CSCL}} 2023},
  author = {Proctor, Chris and Mawer, D.},
  year = {2023},
  publisher = {{International Society of the Learning Sciences}},
  address = {{Montreal, Canada}},
  langid = {english}
}