<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir31163020</article-id>
<article-id pub-id-type="doi">10.47989/ir31163020</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Information need in metaverse recordings - A field study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Steinert</surname><given-names>Patrick</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Mischkies</surname><given-names>Jan</given-names></name>
<xref ref-type="aff" rid="aff0002"/></contrib>
<contrib contrib-type="author"><name><surname>Wagenpfeil</surname><given-names>Stefan</given-names></name>
<xref ref-type="aff" rid="aff0003"/></contrib>
<contrib contrib-type="author"><name><surname>Frommholz</surname><given-names>Ingo</given-names></name>
<xref ref-type="aff" rid="aff0004"/></contrib>
<contrib contrib-type="author"><name><surname>Hemmje</surname><given-names>Matthias L.</given-names></name>
<xref ref-type="aff" rid="aff0005"/></contrib>
<aff id="aff0001"><bold>Patrick Steinert</bold> is a Ph.D. student and lecturer at the faculty of mathematics and computer science, University of Hagen, Hagen Germany. Further, he works as Technology Consultant for software solutions in media technologies at Qvest Digital AG, Bonn, Germany. His research interest includes virtual worlds and multimedia technologies. He can be contacted at <email xlink:href="psteinert&#x0040;acm.org">psteinert&#x0040;acm.org</email>.</aff>
<aff id="aff0002"><bold>Jan Mischkies</bold> is a graduate student at the faculty of mathematics and computer science, University of Hagen, Germany. He can be contacted at <email xlink:href="janmischkies&#x0040;web.de">janmischkies&#x0040;web.de</email>.</aff>
<aff id="aff0003"><bold>Stefan Wagenpfeil</bold> is a Professor for Software Engineering and IT-Management at the Private University of Applied Sciences, G&#x00F6;ttingen, Germany. He received his Ph.D. from the University of Hagen and his research interests include Software Engineering Multimedia Information Retrieval, Virtual Reality and Gamification. He can be contacted at <email xlink:href="s.wagenpfeil&#x0040;pfh.de">s.wagenpfeil&#x0040;pfh.de</email>.</aff>
<aff id="aff0004"><bold>Ingo Frommholz</bold> is Professor and Head of the School of Applied Data Science at Modul University Vienna, Austria, and Adjunct Professor at the Bern University of Applied Sciences, Berne, Switzerland. With a general interest in data science and artificial intelligence (AI), his focus is on human-centric Information Retrieval, generative AI and its application in disciplines such as digital libraries. He can be contacted at <email xlink:href="ifrommholz&#x0040;acm.org">ifrommholz&#x0040;acm.org</email>.</aff>
<aff id="aff0005"><bold>Matthias L. Hemmje</bold> is full professor at the University in Hagen, Hagen, Germany. He is involved in research on Virtual Information and Knowledge Environments with special focus on distributed collaborative Digital Libraries, Multimedia Archives, Information Retrieval, Filtering, Linking, Enrichment, Personalization, and Information Visualization. He can be contacted at <email xlink:href="matthias.hemmje&#x0040;fernuni-hagen.de">matthias.hemmje&#x0040;fernuni-hagen.de</email>.</aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>02</month><year>2026</year></pub-date>
<pub-date pub-type="collection"><year>2026</year></pub-date>
<volume>31</volume>
<issue>1</issue>
<fpage>226</fpage>
<lpage>246</lpage>
<permissions>
<copyright-year>2026</copyright-year>
<copyright-holder>&#x00A9; 2026 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Metaverse Recordings (MVRs) represent an emerging and underexplored media type within the field of Multimedia Information Retrieval (MMIR). This paper presents findings from a field study aimed at understanding the user information needs and search behaviours specific to MVR retrieval.</p>
<p><bold>Method.</bold> By conducting and analysing expert interviews, the study identifies application scenarios and retrieval challenges for an MVR collection.</p>
<p><bold>Analysis.</bold> The collected responses were analysed to reveal patterns in user needs and behaviours. Key focus areas included interest in time-series data generated during graphical rendering and data from related input-output devices, identified as crucial components for MVR retrieval.</p>
<p><bold>Results.</bold> The findings outline existing application scenarios for MVRs, validate the need to capture specific data types in MVR retrieval, and emphasise the importance of user-tailored MVR retrieval systems. The study also identifies use cases, and requirements essential for the design of effective MVR retrieval systems.</p>
<p><bold>Conclusions.</bold> This research contributes to a foundational understanding of MVR retrieval and supports future efforts in system design and research within MMIR, enhancing the comprehension of information search behaviours in the context of the emerging multimedia type MVR.</p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>The rate of multimedia creation is accelerating. Digital cameras are ubiquitous, and social media has led to immense volumes of user-generated media. In recent years, this has boosted short-form video content. Furthermore, the COVID-19 crisis has given remote communication technologies a push, such as increased use of video conferencing and virtual conferences. Another trend has re-emerged in recent years: the idea of a persistent virtual space where people meet and live together, the metaverse.</p>
<p>The growth in usage of platforms (<xref ref-type="bibr" rid="R15">KZero Worldwide, 2024</xref>) like Roblox (<xref ref-type="bibr" rid="R28">Wikipedia, 2023b</xref>) or Minecraft (<xref ref-type="bibr" rid="R17">Mojang, 2023</xref>) shows that people are heavily using virtual worlds. Trend reports assume even higher usage in the future (<xref ref-type="bibr" rid="R7">Gartner Inc., 2022</xref>). It is likely that people will create recordings of their experiences in the virtual world, as they do in the real world. Early examples can be seen in YouTube videos (<xref ref-type="bibr" rid="R6">Bestie Let&#x2019;s play, 2022</xref>) created for entertainment purposes.</p>
<p>Multimedia Information Retrieval (MMIR) (<xref ref-type="bibr" rid="R20">R&#x00FC;ger, 2010</xref>) is the field of computer science that addresses the indexing and retrieval of multimedia content. The metaverse is built on virtual worlds, which are essentially computer-generated multimedia. Therefore, this study examines the integration of Metaverse Recordings (MVRs) into MMIR.</p>
<p><xref ref-type="bibr" rid="R21">Steinert et al. (2023</xref>; <xref ref-type="bibr" rid="R23">2024b</xref>) have outlined the differences between metaverse content and other media types, such as format, structure and content. The analysis of these differences revealed a lack of MMIR support for metaverse content. The further integration of MVRs into MMIR should be grounded in user demands. There is a noted gap in the existing literature regarding the information needs specifically related to MVR retrieval. Understanding these information needs is essential for developing effective MVR retrieval systems.</p>
<p>MVR as an emerging multimedia type introduces challenges for integration in MMIR, related to the capture, organisation, and retrieval of content generated in virtual environments. One open question is whether such user sessions, which would constitute indirectly recorded metaverse content, are recorded in the field, and for which applications. Another significant challenge concerns the formats of data available in metaverse environments and how they align with user interests. Unlike traditional media recordings, MVRs can capture not only video and audio but also complex data formats such as movement patterns, eye-tracking information, and biosensor data. The potential for data capture in the metaverse is considerable, yet it remains unclear how these rich data formats align with users&#x2019; needs and interests. For example, while systems may be capable of reconstructing virtual scenes with mathematical precision, it is unclear whether users find such detailed data useful or necessary for their tasks. A further challenge lies in understanding users&#x2019; information needs and how they search for and retrieve MVRs. Little is known about the information searching behaviour specific to MVRs, and existing search systems are not yet tailored to the unique attributes of virtual worlds. Traditional search filters, such as date ranges, location, and event types, may not fully capture the complexity of user needs in the metaverse. Moreover, it is unclear how users express their information needs when searching for MVRs, as past queries and behaviours have not yet been documented.</p>
<p>The lack of understanding of the technical capabilities and user interests reveals a critical research gap. Understanding which data types users value and how they search for MVRs is crucial for integrating MVRs into MMIR and developing effective MVR retrieval systems. This paper presents a field study conducted with a small expert group. Based on interviews, we describe application scenarios and search behaviours of users, and how MMIR can support them.</p>
<p>The following sections present an overview of the metaverse and related technologies, information retrieval (IR), and MMIR in the section for state of the art and related work. The section on study design describes the field study design. The results section presents the results of the interviews. Finally, the conclusion and outlook section summarises the presented work and discusses future work.</p>
</sec>
<sec id="sec2">
<title>State of the art and related work</title>
<p>In this section, the state of the art and related work for MVR retrieval are summarised.</p>
<sec id="sec2_1">
<title>Metaverse</title>
<p>The metaverse is an umbrella term for a set of digital technologies and use cases. <xref ref-type="bibr" rid="R19">Mystakidis (2022)</xref> defines the metaverse as a persistent, multiuser environment that blends physical reality with digital virtuality. It uses technologies like virtual reality (VR) (<xref ref-type="bibr" rid="R2">Alsop, 2023</xref>) and augmented reality (AR) (<xref ref-type="bibr" rid="R27">Wikipedia, 2023a</xref>) for multisensory interactions with virtual spaces, objects, and people. The metaverse connects immersive, social environments on multiuser platforms, allowing real-time communication and interaction with digital content. The idea of a metaverse originates in <xref ref-type="bibr" rid="R25">Stephenson&#x2019;s (1992)</xref> novel <italic>Snow Crash</italic>, where it was described as a network of virtual worlds where avatars could move between them, but now it includes social VR platforms, online games, and AR collaboration spaces (<xref ref-type="bibr" rid="R3">Anderson &#x0026; Rainie, 2022</xref>). Such application domains are described next.</p>
</sec>
<sec id="sec2_2">
<title>Metaverse application domains</title>
<p>In the field of metaverse applications, several use cases are imaginable or already observable that potentially involve MVRs, such as recordings of user sessions.</p>
<p><xref ref-type="bibr" rid="R9">Gunkel et al. (2018)</xref> researched metaverse experiences with users. Based on a survey, they report user interest in the domains of video conferencing, education, gaming, watching movies, and similar entertainment activities (<xref ref-type="bibr" rid="R9">Gunkel et al., 2018</xref>). The audience research company GWI investigated use cases for the metaverse and lists watching TV and playing games as customer applications (<xref ref-type="bibr" rid="R18">Morris, 2022</xref>). Hence, there is strong support for entertainment applications. An example of MVRs in this application domain is YouTube videos created from the metaverse (<xref ref-type="bibr" rid="R6">Bestie Let&#x2019;s play, 2022</xref>).</p>
<p>Within the metaverse-related literature, the risk of cybercrime is mentioned. <xref ref-type="bibr" rid="R13">The International Criminal Police Organization (INTERPOL, 2024b)</xref> describes the potential harms and the concepts to counter them. Examples are crimes against children or sexual offenses and assault. The activities in question can be tracked by recording the relevant transaction data (<xref ref-type="bibr" rid="R12">INTERPOL, 2024a</xref>). While not conclusively demonstrated in the existing literature, it is conceivable that MVRs could be used for this purpose.</p>
<p>In addition to consumer-focused use cases, the field of the industrial metaverse is relevant. It covers the use of metaverse technologies mainly for simulations of industrial processes (<xref ref-type="bibr" rid="R30">Zheng et al., 2022</xref>). Although not definitively described in current research, the review of such simulations is conceivable as a use case for MVRs. Such simulations can address not-yet-existing scenarios, thus serving research and development use cases, or real scenarios mirrored in the virtual world. The latter is described as a digital twin. An example is the simulation of mining operations (<xref ref-type="bibr" rid="R26">Stothard et al., 2024</xref>).</p>
<p>While many people record their personal life experiences digitally and share them online (Austin, 2023), it is conceivable that people record their experiences in the metaverse as well. Within this paper, this is termed personal use.</p>
<p>In conclusion, a list of potential application domains is compiled, shown in <xref ref-type="table" rid="T1">Table 1</xref>. The application domains of education and videoconferencing &#x0026; collaboration are grounded in the research of <xref ref-type="bibr" rid="R9">Gunkel et al. (2018)</xref>. Under the application domain of entertainment, this paper subsumes applications such as gaming, watching TV, and listening to music. Law enforcement is anchored in the literature on the risks of the metaverse. The industrial metaverse is grounded in the metaverse literature. The personal use application domain is anchored in the observed behaviour of people creating and sharing experiences on the Internet.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>Metaverse application domains.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">ID</th>
<th align="center" valign="top">Domain</th>
<th align="center" valign="top">Example</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">AD 2.1</td>
<td align="left" valign="top">Education</td>
<td align="left" valign="top">VR Training</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.2</td>
<td align="left" valign="top">Videoconferencing / Collaboration</td>
<td align="left" valign="top">Online Business Meetings</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.3</td>
<td align="left" valign="top">Entertainment / Video Gaming</td>
<td align="left" valign="top">Watching movies, playing games</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.4</td>
<td align="left" valign="top">Law Enforcement</td>
<td align="left" valign="top">Investigating crimes</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.5</td>
<td align="left" valign="top">Industrial Metaverse</td>
<td align="left" valign="top">Simulating Car Driving Scenarios</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.6</td>
<td align="left" valign="top">Personal Use</td>
<td align="left" valign="top">Record personal memories</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Next, the technical opportunities of MVRs are described.</p>
</sec>
<sec id="sec2_3">
<title>Options to produce MVRs</title>
<p>In the realm of the metaverse, there are several technical methodologies for recording user sessions to consider. These can be broadly categorised into three types, each with distinct characteristics and applications. <xref ref-type="fig" rid="F1">Figure 1</xref> illustrates the rendering process and the three types. This process takes scene raw data (SRD) and peripheral data (PD), also known as auxiliary data, as input. PD is derived from user devices and hence represents user behaviour. SRD includes information used by the computer to construct the virtual scene. Both inputs are used to create perceivable rendering output, in the form of 2D/3D image stream(s), audio streams, and actuator commands. The rendering output is recordable multimedia, which is defined as multimedia content objects (MMCOs). In summary, the three major categories of recordable data are MMCO, SRD, and PD.</p>
<fig id="F1">
<label>Figure 1.</label>
<caption><p>Rendering process.</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="c11-fig1.jpg"><alt-text>none</alt-text></graphic>
</fig>
<p>The first group, MMCO shown as recordable multimedia in <xref ref-type="fig" rid="F1">Figure 1</xref>, can be created by directly recording sessions as videos within metaverse applications, such as in Horizon Worlds. This approach captures both audio and video outputs of the virtual environment. The 256 metaverse records dataset (<xref ref-type="bibr" rid="R22">Steinert et al., 2024a</xref>) contains examples of such MVRs. An alternative within this category is the use of screen recorders to capture the audio-visual output of the rendered scenes.</p>
<p>The second group, SRD, visualised as input in <xref ref-type="fig" rid="F1">Figure 1</xref>, captures the rendering inputs used to create the virtual scene. This can be done in two primary ways. First, capturing scene graphs, which detail the objects present in a scene. Second, utilising network-transmitted information to record inputs from other players, including avatar positions and actions. This raw data, when combined with additional audio-visual elements like textures and colours, offers a comprehensive view of the scene. Tools exist to capture rendering raw data, such as Nvidia Ansel (<xref ref-type="bibr" rid="R8">G&#x00FC;ng&#x00F6;r, 2016</xref>) or RenderDoc (<xref ref-type="bibr" rid="R14">Karlsson, 2018</xref>).</p>
<p>The third group of recordable data is PD, shown as input in <xref ref-type="fig" rid="F1">Figure 1</xref>. Recording PD provides supplementary information that enriches the primary recording. This data is either hard to reconstruct later, like emotion detection from facial expressions compared with audio recordings, or impossible to reconstruct, like heart rate in specific situations.</p>
<p>As of now, it is unclear which recording method is superior. Audio-visual data is convenient for playback and compatible with existing tools, but extracting detailed information from it for analysis is challenging. On the other hand, the metaverse&#x2019;s technical nature allows for the utilisation of logs and interaction data, e.g. avatar presence and hand movements captured from controller data, to gain immediate insights without the need for extensive video analysis. Capturing and organising virtual world experiences for retrieval poses several unresolved challenges. These include how to represent rendered scenes in a way that preserves both technical structure and user interaction, how to map and synchronise multimedia content over time, and how to manage composite content from multiple sources. A core issue is establishing reliable connections between SRD, PD, and their rendered output (MMCO). Addressing these problems is essential for enabling effective retrieval and reuse of metaverse recordings.</p>
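<p>The three recordable categories and the rendering step that connects them can be mirrored in a minimal data model. The following Python sketch is purely illustrative: the class, function, and field names (e.g. SceneRawData, render) are assumptions for this example and are not defined by the cited work.</p>
<preformat>
```python
from dataclasses import dataclass

@dataclass
class SceneRawData:
    """SRD: information the engine uses to construct the virtual scene."""
    scene_graph: dict     # objects present in the scene
    network_events: list  # inputs from other players (avatar positions, actions)

@dataclass
class PeripheralData:
    """PD: auxiliary data derived from user devices, reflecting behaviour."""
    device: str           # e.g. "controller", "eye-tracker", "heart-rate sensor"
    samples: list

@dataclass
class MMCO:
    """Recordable rendering output: a multimedia content object."""
    video_frames: list
    audio_samples: list

def render(srd, pd):
    """Toy stand-in for the rendering process: SRD and PD in, MMCO out."""
    frame = {"objects": sorted(srd.scene_graph), "inputs": pd.device}
    return MMCO(video_frames=[frame], audio_samples=[])

srd = SceneRawData(scene_graph={"avatar": {}, "tree": {}}, network_events=[])
pd = PeripheralData(device="controller", samples=[0.1, 0.2])
mmco = render(srd, pd)
```
</preformat>
<p>Recording at any of these three points (the SRD and PD inputs, or the MMCO output) corresponds to one of the three recording methodologies discussed above.</p>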
<p><xref ref-type="bibr" rid="R21">Steinert et al. (2023)</xref> see value in the different types for different use cases. Hence, they describe a model for combining the data, visualised in <xref ref-type="fig" rid="F2">Figure 2</xref>. If the data is combined in any form, they define this as a composite multimedia content object (CMMCO). If the data shares a common time base, such as a timestamp or frame number, they define this as a time-mapped CMMCO (TCMMCO). The 111 recordings dataset (<xref ref-type="bibr" rid="R24">Steinert, 2025</xref>) presents MVRs containing MMCO, SRD, and PD, hence CMMCO data. The data is recorded with timestamps; hence, the MVRs can be considered TCMMCOs.</p>
<fig id="F2">
<label>Figure 2.</label>
<caption><p>Classification of <italic>MVR</italic> components (<xref ref-type="bibr" rid="R21">Steinert et al., 2023</xref>).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="c11-fig2.jpg"><alt-text>none</alt-text></graphic>
</fig>
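<p>The time-mapping idea behind TCMMCOs can be sketched as grouping records from the three streams by a shared timestamp. This is a hedged sketch only; the function and field names (time_map, t) are assumptions for illustration and not part of the cited model.</p>
<preformat>
```python
from collections import defaultdict

def time_map(mmco_records, srd_records, pd_records):
    """Group MMCO, SRD, and PD records by their common timestamp,
    yielding TCMMCO-style time slices."""
    slices = defaultdict(dict)
    for kind, records in (("mmco", mmco_records),
                          ("srd", srd_records),
                          ("pd", pd_records)):
        for rec in records:
            slices[rec["t"]][kind] = rec
    return dict(sorted(slices.items()))

tcmmco = time_map(
    mmco_records=[{"t": 0, "frame": "f0"}, {"t": 1, "frame": "f1"}],
    srd_records=[{"t": 0, "scene": ["avatar"]}],
    pd_records=[{"t": 1, "heart_rate": 72}],
)
```
</preformat>
<p>Without a common time base, such a grouping is not possible, and the combined data remains a plain CMMCO.</p>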
</sec>
<sec id="sec2_4">
<title>Information retrieval</title>
<p>Starting with the user, there is the need to search, find, and retrieve information. This arises when the user has a knowledge gap, which <xref ref-type="bibr" rid="R5">Belkin et al. (1993)</xref> describe as an anomalous state of knowledge (ASK). When the user has all the information needed to complete a task, they are in the normal state. If information is missing, they are in an ASK, which constitutes an information need. This results in information-seeking behaviour, which has been described in many scientific research articles.</p>
<p>Multimedia is any combination of media formats. In the case of MMIR, it is about the different types of media that can be retrieved in a system. These types can be any perceivable media, such as images or audio, or biometric sensor data.</p>
<p>The yearly Lifelog Search Challenge (<xref ref-type="bibr" rid="R11">Gurrin, 2025</xref>) addresses a use case comparable to what can be expected with metaverse content. The challenges address user experience questions for user interfaces that query multimedia originating from cameras worn by people to capture images of full days in their lives. Corresponding data is gathered, such as biometrics and location (<xref ref-type="bibr" rid="R10">Gurrin, 2021</xref>). The related literature provides example search terms used in the challenges, but it lacks specific descriptions of information search behaviour relatable to MVR retrieval.</p>
</sec>
<sec id="sec2_5">
<title>Summary</title>
<p>The metaverse is a term used to describe a set of technologies and use cases, including fully virtual environments or real-world environments with digital extensions. Different use cases that produce MVRs are conceivable or already observable. The production of MVRs leads to larger collections, which can be made retrievable through MMIR. Missing from the literature is an understanding of the information need and information searching behaviour required to integrate MVRs into MMIR. A subsequent field study can provide important findings.</p>
</sec>
</sec>
<sec id="sec3">
<title>Study design</title>
<p>To create an understanding of the information need for MVR retrieval, a field study was conducted. This study employs a qualitative design to explore participants&#x2019; experiences and perceptions through in-depth interviews. The field study was carried out as a series of interviews with experts in the field of the metaverse.</p>
<sec id="sec3_1">
<title>Expert selection</title>
<p>Following the application domains described in <xref ref-type="table" rid="T1">Table 1</xref>, relevant experts listed in <xref ref-type="table" rid="T2">Table 2</xref> were identified through a targeted search within the professional network of the authors, the research group at the university, and the Immersive Collaboration Hub. Additional contacts were made through subject-specific communities, including VR vendors, consultants, meetup groups, and registered associations. Recruitment focused on individuals with demonstrable practical experience in metaverse or VR applications within the specified domains, for example, through direct involvement in user workshops or active participation in related projects. Twelve experts in the selected application domains were contacted. Six responded and agreed to a recorded interview. The resulting group of participants consists of six individuals (five German, one Italian, all male) with higher education backgrounds. Their professional expertise spans product design, programming, IT consulting, and documentary filmmaking, offering a diverse and practice-oriented perspective on the study topics. The six participants work for one or multiple companies or organisations and are active in an international, but mostly European, market. It proved challenging to identify individuals using metaverse technologies for personal purposes, particularly those engaged in recording and subsequently retrieving these experiences.</p>
<p>It was notable that the experts identified for one application domain were, in fact, able to provide information on other application domains, or at least on existing application scenarios in other domains, during the discussion.</p>
<table-wrap id="T2">
<label>Table 2.</label>
<caption><p>Potential <italic>MVR</italic> retriever user groups by domain.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">ID</th>
<th align="center" valign="top">Domain</th>
<th align="center" valign="top">Potential MVR Retriever User Group</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">AD 2.1</td>
<td align="left" valign="top">Education</td>
<td align="left" valign="top">(1) Organisers of VR Training</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.2</td>
<td align="left" valign="top">Videoconferencing / Collaboration</td>
<td align="left" valign="top">(2) Participants of Virtual Team Meetings, (3) Marketing Staff of Collaboration Platforms</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.3</td>
<td align="left" valign="top">Entertainment / Video Gaming</td>
<td align="left" valign="top">(4) Content Creators</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.4</td>
<td align="left" valign="top">Law Enforcement</td>
<td align="left" valign="top">(5) Law Enforcement Staff</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.5</td>
<td align="left" valign="top">Industrial Metaverse</td>
<td align="left" valign="top">(6) Quality Assurance Staff at Telecommunications Companies, (7) Market Research Staff</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.6</td>
<td align="left" valign="top">Personal Use</td>
<td align="left" valign="top">(8) Personal users of VR Headsets</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>After reaching out to the experts, the responding experts were invited to a video conference call. During the recorded call, two interviewers were present: one leading the interview, the second in a mostly silent role, noting the responses. Both interviewers are experts in the field of multimedia retrieval. To structure the interviews, a questionnaire was created, which is explained next.</p>
</sec>
<sec id="sec3_2">
<title>Questionnaire</title>
<p>The expert interviews were conducted as guided conversations, using a structured questionnaire to ensure consistency and depth of investigation. This questionnaire, as shown in the Appendix Questionnaire <xref ref-type="table" rid="A7">Table 7</xref> and <xref ref-type="table" rid="A8">Table 8</xref>, was divided into two main parts: Application scenario validation and use context detail questions. The questionnaire served as a flexible guide, allowing interviewers to adapt their inquiries based on each expert&#x2019;s unique insights and experiences. This methodology facilitated a comprehensive examination of metaverse recording applications while maintaining the ability to delve into emerging or unexpected areas of interest.</p>
<p>In the first section, the interviewee is asked to name existing application scenarios (F1.1) and conceivable application scenarios (F1.2). Beforehand, we constructed application scenarios for which the interviewee was asked to assess whether they exist, are conceivable or not relevant (F1.3). If in this step an existing use case was identified, this use case was used to determine the information search behaviour.</p>
<p>The application scenario validation section aimed to explore known cases of metaverse recordings and assess the realism of previously hypothesised scenarios. <xref ref-type="table" rid="T6">Table 6</xref> addresses the identification of application scenarios of metaverse recordings and MVR retrieval. After a brief explanation of MMIR in general and MVR retrieval in particular, interviewees were asked to describe familiar applications (F1.1, F1.2) and evaluate the feasibility of proposed application scenarios (F1.3). The use context detail questions focused on gathering in-depth information about a specific application scenario, preferably one described by the interviewee during the initial discussion. This approach allowed for a more targeted exploration of practical implications and potential challenges.</p>
<p>To understand the information search behaviour in an application scenario, the questionnaire asks for the information available at the start of a search (F3). Further, it asks how the search query is constructed (F11), what is expected to be retrieved (F4, F5, F9), and how the result is presented (F15). Additional questions check whether the retrieval is static or dynamic and which kinds of devices are likely used.</p>
</sec>
</sec>
<sec id="sec4">
<title>Results</title>
<p>After the interviews had taken place, they were analysed. In a first step, the application scenarios were summarised.</p>
<sec id="sec4_1">
<title>Analysis of the interviews</title>
<p>Based on the responses, several findings can be summarised.</p>
<sec id="sec4_1_1">
<title>Application domains and application scenarios</title>
<p>During the expert interviews, various application scenarios for MVR retrieval were identified by the experts (F1.1-F1.3). Some of the prepared application scenarios were also validated as existing. Some experts contributed scenarios beyond their own domain. <xref ref-type="table" rid="T3">Table 3</xref> lists these scenarios, showing who proposed and classified each one. Scenarios labelled as <italic>existing</italic> are already in use today, indicating real-world demand (market pull), while <italic>conceivable</italic> scenarios could become relevant in the future. Scenarios known before the survey are shaded in grey, and the original survey data can be found in the appendix. Expert F was unable to name an application scenario or classify one as existing or conceivable.</p>
<table-wrap id="T3">
<label>Table 3.</label>
<caption><p>Application scenarios validated by metaverse experts. Application scenarios highlighted in grey were proposed by the authors, the others by the experts.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">ID</th>
<th align="center" valign="top">Domain</th>
<th align="center" valign="top">Scenario Description</th>
<th align="center" valign="top">Expert</th>
<th align="center" valign="top">Rating</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">AS 1</td>
<td align="left" valign="top">AD 2.1 Education</td>
<td align="left" valign="top">A professor of medicine searches the recordings for instances where specific surgical tools were used.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 2</td>
<td align="left" valign="top">AD 2.1 Education</td>
<td align="left" valign="top">Recordings from VR glasses are reviewed to edit instructional videos.</td>
<td align="center" valign="top">Expert C</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 3</td>
<td align="left" valign="top">AD 2.1 Education</td>
<td align="left" valign="top">Screen recordings of Spatial.io were made to organise training and teaching sessions.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 4</td>
<td align="left" valign="top">AD 2.1 Education</td>
<td align="left" valign="top">Spatial videos are recorded and can be relived afterwards for training purposes.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 5</td>
<td align="left" valign="top">AD 2.1 Education</td>
<td align="left" valign="top">Welding training is carried out with the help of VR. Trainers can see in replays whether the weld seam has been drawn accurately.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 6</td>
<td align="left" valign="top">AD 2.2 Videoconferencing / Collaboration</td>
<td align="left" valign="top">An employee of a company is looking for a certain Excel graphic that could be contained in the recordings of meetings from the past few months.</td>
<td align="center" valign="top">Expert B</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 7</td>
<td align="left" valign="top">AD 2.2 Videoconferencing / Collaboration</td>
<td align="left" valign="top">Marketing employees of collaboration platforms search recordings for marketing purposes.</td>
<td align="center" valign="top">Expert B</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 8</td>
<td align="left" valign="top">AD 2.3 Entertainment / Video Gaming</td>
<td align="left" valign="top">Content creators search metaverse recordings for content to edit.</td>
<td align="center" valign="top">Expert C</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 9</td>
<td align="left" valign="top">AD 2.3 Entertainment / Video Gaming</td>
<td align="left" valign="top">Recordings of VR concerts are searched to create highlight reels or relive them with friends, for example.</td>
<td align="center" valign="top">Expert C</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 10</td>
<td align="left" valign="top">AD 2.3 Entertainment / Video Gaming</td>
<td align="left" valign="top">Parts of virtual experiences (rooms) are made available again to other companies (Business-to-Business) and may also be searched.</td>
<td align="center" valign="top">Expert B</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 11</td>
<td align="left" valign="top">AD 2.4 Law Enforcement</td>
<td align="left" valign="top">The investigators were informed of a sexual assault in a metaverse room. The investigators search the operators&#x2019; log files, which also contain short screen recordings, for an avatar wearing a red t-shirt.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 12</td>
<td align="left" valign="top">AD 2.2 Videoconferencing / Collaboration</td>
<td align="left" valign="top">XRSI meetings were held in VR-Space and published on YouTube.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 13</td>
<td align="left" valign="top">AD 2.4 Law Enforcement</td>
<td align="left" valign="top">Capture spatial videos of crime scenes to revisit and search them later in VR.</td>
<td align="center" valign="top">Expert A</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 14</td>
<td align="left" valign="top">AD 2.5 Industrial Metaverse / Research and Development</td>
<td align="left" valign="top">Behavioural researchers search metaverse recordings to better understand behaviours.</td>
<td align="center" valign="top">Expert D</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 15</td>
<td align="left" valign="top">AD 2.5 Industrial Metaverse / Research and Development</td>
<td align="left" valign="top">Market research employees analyse recordings of virtual supermarket visits.</td>
<td align="center" valign="top">Expert C</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 16</td>
<td align="left" valign="top">AD 2.5 Industrial Metaverse / Research and Development</td>
<td align="left" valign="top">Industrial products are presented in a digital experience. For sales discussions, specific products are searched within the experience.</td>
<td align="center" valign="top">Expert B</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 17</td>
<td align="left" valign="top">AD 2.5 Industrial Metaverse / Research and Development</td>
<td align="left" valign="top">Telecommunications technicians work in the field with AR glasses during line switching. This is recorded, reviewed by quality assurance employees, and specifically searched.</td>
<td align="center" valign="top">Expert D</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 18</td>
<td align="left" valign="top">AD 2.5 Industrial Metaverse / Research and Development</td>
<td align="left" valign="top">Certain movements could be searched, for example, to create training data for robots.</td>
<td align="center" valign="top">Expert D</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 19</td>
<td align="left" valign="top">AD 2.6 Personal Use</td>
<td align="left" valign="top">A person has recorded metaverse sessions and would like to experience them again.</td>
<td align="center" valign="top">Expert E</td>
<td align="center" valign="top">Conceivable</td>
</tr>
<tr>
<td align="left" valign="top">AS 20</td>
<td align="left" valign="top">AD 2.6 Personal Use</td>
<td align="left" valign="top">Search spatial videos (e.g., from a smartphone) to experience them again with VR glasses.</td>
<td align="center" valign="top">Expert D</td>
<td align="center" valign="top">Existing</td>
</tr>
<tr>
<td align="left" valign="top">AS 21</td>
<td align="left" valign="top">AD 2.6 Personal Use</td>
<td align="left" valign="top">Private gaming experiences (specifically golf) should be filterable, so good shots can be shared or rewatched for game optimisation.</td>
<td align="center" valign="top">Expert E</td>
<td align="center" valign="top">Existing</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="T4">Table 4</xref> summarises the ratings: eight application scenarios were rated as existing and thirteen as conceivable (F1.1 - F1.3). Based on the experts&#x2019; knowledge, all application domains except law enforcement have existing application scenarios. Overall, the insights provided by the experts confirm that application scenarios for MVR retrieval exist.</p>
<table-wrap id="T4">
<label>Table 4.</label>
<caption><p>Rating of the application scenarios.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">ID</th>
<th align="center" valign="top">Application Domain</th>
<th align="center" valign="top">Amount of Cases</th>
<th align="center" valign="top">Rated Existing</th>
<th align="center" valign="top">Rated Conceivable</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">AD 2.1</td>
<td align="left" valign="top">Education</td>
<td align="center" valign="top">5</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">3</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.2</td>
<td align="left" valign="top">Videoconferencing / Collaboration</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.3</td>
<td align="left" valign="top">Entertainment / Video Gaming</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.4</td>
<td align="left" valign="top">Law Enforcement</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.5</td>
<td align="left" valign="top">Industrial Metaverse</td>
<td align="center" valign="top">5</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">4</td>
</tr>
<tr>
<td align="left" valign="top">AD 2.6</td>
<td align="left" valign="top">Personal Use</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<th align="left" valign="top"><bold>Total</bold></th>
<th align="center" valign="top"><bold>21</bold></th>
<th align="center" valign="top"><bold>8</bold></th>
<th align="center" valign="top"><bold>13</bold></th>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec4_1_2">
<title>File types</title>
<p>Questions F5 and F9 asked which kinds of data are recorded and relevant to retrieve. Not only the audio-video recordings are relevant, but also the sensor and rendering data. Several application scenarios (AS1, partly AS7, AS8, AS11, AS15, AS17, AS21) can involve recording the 3D scene in a way that allows it to be played back afterwards with the ability to look around. Several of the described application scenarios involve multiple data types, summarised in <xref ref-type="table" rid="T5">Table 5</xref>. This supports the hypothesis that, in addition to MMCO, SRD and PD are relevant to record and to support in retrieval. Although this was not asked directly in the interviews, the answers suggest that the MVR components are analysed and played back in a time-linked manner.</p>
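<p>The time-linked structure suggested by these responses can be illustrated with a small sketch. The following Python fragment is not part of the study and all names are invented; it merely shows one way the three MVR streams (MMCO, SRD, and PD) could be kept as time-stamped samples and queried around a common playback time.</p>

```python
import bisect

# Illustrative sketch only: a minimal container for a metaverse recording
# whose three time-stamped streams follow the MVR taxonomy used in the
# paper (MMCO, SRD, PD). All names and payloads are assumptions, not an
# actual recording format.

def samples_near(stream, t, window=0.5):
    """Return the samples of one stream within +/- window seconds of t.
    Each stream is a list of (timestamp, payload) pairs sorted by time."""
    times = [ts for ts, _ in stream]
    lo = bisect.bisect_left(times, t - window)
    hi = bisect.bisect_right(times, t + window)
    return stream[lo:hi]

def time_linked_view(recording, t, window=0.5):
    """Collect, per stream, everything recorded around time t - a crude
    stand-in for the time-linked playback the responses suggest."""
    return {name: samples_near(stream, t, window)
            for name, stream in recording.items()}

recording = {
    "mmco": [(0.0, "frame-000"), (1.0, "frame-030"), (2.0, "frame-060")],
    "srd":  [(0.9, "avatar moved"), (2.1, "object grabbed")],
    "pd":   [(1.0, {"gaze": (0.1, 0.7)}), (1.1, {"gaze": (0.2, 0.6)})],
}
view = time_linked_view(recording, t=1.0)
```

<p>Querying around t&#x00A0;=&#x00A0;1.0&#x00A0;s then returns the video frame, the scene event, and the gaze samples recorded at roughly the same moment, which is the kind of cross-stream alignment a time-linked playback would build on.</p>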
<table-wrap id="T5">
<label>Table 5.</label>
<caption><p><italic>MVR</italic>-data types mentioned in the responses to questions F5 and F9.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">Mention</th>
<th align="center" valign="top">Type/Format</th>
<th align="center" valign="top">Classification in MVR taxonomy</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Screen recording</td>
<td align="center" valign="top">Video 2D</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Engine data</td>
<td align="center" valign="top">Video 3D</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">Spatial video</td>
<td align="center" valign="top">Video 3D</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Multiple Videos</td>
<td align="center" valign="top">Multiple Videos</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">3D Videos</td>
<td align="center" valign="top">Video 3D</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">Augmented video</td>
<td align="center" valign="top">Video 2D/3D</td>
<td align="center" valign="top">SRD/MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Image</td>
<td align="center" valign="top">Image</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Audio</td>
<td align="center" valign="top">Audio</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Background processes</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">Language</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD/Metadata</td>
</tr>
<tr>
<td align="left" valign="top">Session owner</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">IP addresses</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">Location data</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD (virtual) PD (real)</td>
</tr>
<tr>
<td align="left" valign="top">Sensor data</td>
<td align="center" valign="top">Various Formats</td>
<td align="center" valign="top">PD</td>
</tr>
<tr>
<td align="left" valign="top">Metadata (e.g. user)</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">SRD</td>
</tr>
<tr>
<td align="left" valign="top">Chat</td>
<td align="center" valign="top">Text</td>
<td align="center" valign="top">MMCO</td>
</tr>
<tr>
<td align="left" valign="top">Document</td>
<td align="center" valign="top">Text Document</td>
<td align="center" valign="top">MMCO</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec4_1_3">
<title>Search types and query input methods</title>
<p>Search types and query input methods play a crucial role in IR systems, and thus in MVR retrieval systems. <xref ref-type="table" rid="T6">Table 6</xref> summarises the number of input methods for MVR searches mentioned in the responses to F3 and F11, with keywords being the most frequently mentioned (five occurrences), followed by natural language inputs, including voice commands (four occurrences), and image-based methods such as screenshots and sketches (three occurrences). Less frequently mentioned input methods include timestamps and metadata (two occurrences), audio inputs (two occurrences), and more specialised methods such as location in-world, SPARQL queries, filters (such as participant groups), and interfaces to media asset management (MAM) systems, each mentioned once. While the data may not be exhaustive, it offers an indication of the input methods currently recognised within the scope of the expert responses.</p>
<p>This wide range of input methods underscores the need for flexible search functions in MVR retrieval systems. Indexing features must accommodate content from MMCO, SRD, and PD, ensuring that MVR indexing supports a comprehensive content analysis across all data types. Importantly, user interest often focuses on specific sequences rather than entire files, necessitating search methods that can efficiently target relevant segments. The type of search input varies significantly based on the application scenario, further highlighting the importance of adaptable and versatile retrieval systems to meet diverse user needs.</p>
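<p>As an illustration of segment-targeted retrieval, the following hypothetical Python sketch (all identifiers and labels are invented, not taken from any described system) indexes labelled time segments of recordings rather than whole files, so that a keyword query returns (recording, start, end) triples.</p>

```python
from collections import defaultdict

# Toy inverted index over *segments* of recordings, reflecting the finding
# that users want relevant sequences, not whole files. The labels stand in
# for whatever an MVR content analysis of MMCO, SRD, or PD might produce.

class SegmentIndex:
    def __init__(self):
        self._postings = defaultdict(list)  # term to [(rec_id, start, end)]

    def add_segment(self, rec_id, start, end, labels):
        """Index one time segment of a recording under content labels."""
        for label in labels:
            self._postings[label.lower()].append((rec_id, start, end))

    def search(self, *keywords):
        """Return segments matching every keyword (simple AND query)."""
        if not keywords:
            return []
        sets = [set(self._postings[k.lower()]) for k in keywords]
        return sorted(set.intersection(*sets))

index = SegmentIndex()
index.add_segment("rec-01", 12.0, 18.5, ["scalpel", "operating room"])
index.add_segment("rec-01", 40.0, 51.0, ["forceps"])
index.add_segment("rec-02", 3.0, 9.0, ["scalpel"])

hits = index.search("scalpel")
```

<p>A query such as <italic>scalpel</italic> then points the user directly to the relevant sequences in two recordings, rather than returning the recordings as a whole.</p>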
<table-wrap id="T6">
<label>Table 6.</label>
<caption><p>Frequency of mentioned input methods for <italic>MVR</italic> search.</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top">Input Method</th>
<th align="center" valign="top">Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Keywords</td>
<td align="center" valign="top">5</td>
</tr>
<tr>
<td align="left" valign="top">Natural Language (including voice)</td>
<td align="center" valign="top">4</td>
</tr>
<tr>
<td align="left" valign="top">Image-based (screenshot, sketch, snipping)</td>
<td align="center" valign="top">3</td>
</tr>
<tr>
<td align="left" valign="top">Timestamp/Metadata</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">Audio</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">Location in-world</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top">SPARQL</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top">Filters (like participant groups)</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top">Interface with MAM system</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec4_1_4">
<title>Further findings</title>
<p>The answers to the questions on search results (F4) confirm that a specific section of an MVR is more relevant as a result than the entire recording.</p>
<p>In terms of integration, the responses suggest a requirement for dynamic processing of MVRs, with potential for embedded processing capabilities. Regarding device preferences, computers and laptops emerge as the primary platforms for MVR interaction, with growing potential for VR/AR devices. Smartphones and tablets appear to play a minimal role in this context. These observations have implications for the design and implementation of future MVR systems, emphasising the need for desktop-oriented interfaces adaptable to emerging AR/VR technologies.</p>
<p>The study aims to identify general requirements for information search behaviour for metaverse recordings. Two existing application scenarios, however, provided deeper insight: the analysis of supermarket visits (AS15) and the editing of instructional videos (AS2). In the development of supermarket concepts, increasing sales and improving the shopping experience are central goals, both of which strongly depend on customer behaviour. To assess this behaviour and identify changes in the layout and arrangement of supermarket shelves, position data and gaze direction are recorded, the expert explained. Implementing this in VR simplifies the process. In addition to the video feed, other data, such as gaze direction and position, are captured (<xref ref-type="bibr" rid="R1">Adhanom et al., 2023</xref>; <xref ref-type="bibr" rid="R16">Mei&#x00DF;ner et al., 2019</xref>). These data fall into the category of PD. In subsequent analysis, the video data is overlaid with position and gaze direction to enhance the analytical process. IR can support this process by identifying similar or distinctly different behaviours, which can further improve the analysis. An analysis of the order in which products were viewed can also be derived from the MVRs, either by manual inspection or by specialised content analysis. The boundary between a general search system and an analysis system is fluid in this case, which reinforces the argument that an MVR retrieval system should be seamlessly integrable, providing access to the PD and to features from the MVR content analysis, as well as query and retrieval capabilities.</p>
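<p>To make the behavioural comparison concrete, the following Python sketch (purely illustrative; the product names, data shapes, and choice of similarity measure are assumptions, not taken from the expert&#x2019;s system) reduces gaze logs to product-viewing orders and scores their similarity.</p>

```python
import difflib

# Hypothetical sketch: gaze logs (PD) reduced to the order in which
# products were looked at, then compared across two sessions with a
# simple sequence-matching ratio (1.0 means identical behaviour).

def viewing_order(gaze_events):
    """Collapse a time-ordered gaze log of (timestamp, product) pairs to
    the sequence of distinct products, dropping immediate repeats."""
    order = []
    for _, product in sorted(gaze_events):
        if not order or order[-1] != product:
            order.append(product)
    return order

def behaviour_similarity(order_a, order_b):
    """Similarity between 0 and 1 of two viewing orders."""
    return difflib.SequenceMatcher(None, order_a, order_b).ratio()

session_a = [(0.5, "cereal"), (1.2, "cereal"), (2.0, "milk"), (3.1, "bread")]
session_b = [(0.3, "cereal"), (1.8, "bread"), (2.5, "milk")]

sim = behaviour_similarity(viewing_order(session_a), viewing_order(session_b))
```

<p>Grouping sessions by such a score is one simple way an IR component could surface similar or distinctly different shopper behaviours for the analyst.</p>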
<p>In the application scenario of instructional videos, the editor must construct the final video from elements of recorded VR videos, also called footage. To find relevant segments, the editor must scan through or search the MVRs. This application scenario closely resembles real-world video editing. The IR process can support the editor by showing similar segments as alternatives for the current segment. The utility of such a function increases with the quality of the semantic understanding of the segments, since recognising and searching for the instructional action requires a deeper understanding. The expert&#x2019;s explanation of the process described known challenges from the field of video retrieval, for example, object detection and boundary detection.</p>
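<p>One simple way such similar-segment suggestions could work, sketched here purely for illustration (the feature vectors and segment names are invented, and cosine similarity is only one possible choice), is to rank footage segments by the similarity of semantic feature vectors:</p>

```python
import math

# Illustrative sketch: ranking alternative footage segments by cosine
# similarity of hypothetical semantic feature vectors, as one way an IR
# process could propose alternatives for the editor's current segment.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similar_segments(current_vec, library, top_k=3):
    """library maps segment_id to feature_vector; returns the top_k most
    similar segment ids, best first."""
    ranked = sorted(library.items(),
                    key=lambda item: cosine(current_vec, item[1]),
                    reverse=True)
    return [seg_id for seg_id, _ in ranked[:top_k]]

library = {
    "take-07": [0.9, 0.1, 0.0],   # e.g. a welding close-up
    "take-12": [0.8, 0.2, 0.1],   # similar action, different angle
    "take-31": [0.0, 0.1, 0.9],   # unrelated content
}
suggestions = similar_segments([1.0, 0.0, 0.0], library, top_k=2)
```

<p>The better the semantic features capture the instructional action, the more useful such a ranking becomes, which mirrors the expert&#x2019;s point that deeper semantic understanding drives the utility of the function.</p>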
<p>In conclusion, the results show a wide range of applications for MVR retrieval, with a broad set of requirements. This leads to several challenges for MVR retrieval: MVR query construction, MVR content analysis, and MVR retrieval integration.</p>
</sec>
</sec>
</sec>
<sec id="sec5">
<title>Discussion of limitations</title>
<p>This study has a few limitations that should be considered when interpreting its results. Firstly, the small sample size of only six experts from diverse application domains limits the generalisability of the findings. Additionally, the narrow geographic and cultural representation of the participants may not capture the full spectrum of perspectives on MVRs. Another significant limitation is the potential ambiguity in understanding user needs, as the study relied heavily on expert opinions rather than direct observations of existing systems. This approach was necessitated by the current inaccessibility of some key platforms, such as AltspaceVR (<xref ref-type="bibr" rid="R29">Wikipedia, 2024</xref>), which are no longer publicly visible. Consequently, this constraint also led to a limited exploration of data types within actual metaverse environments. These limitations underscore the need for future research to expand the sample size, broaden cultural representation, and incorporate direct observations of metaverse systems, when possible, to provide a more comprehensive understanding of MVR retrieval needs and challenges. However, for future research on MVR retrieval, this study&#x2019;s results provide fundamental findings, insights into information needs, and indications of which MVR retrieval system features are most relevant to users.</p>
</sec>
<sec id="sec6">
<title>Summary</title>
<p>The results of this study provide significant insights into the creation, usage, and information needs associated with MVRs. Firstly, our six-expert panel validated eight existing and thirteen conceivable MVR retrieval scenarios across six domains, providing compelling evidence that MVRs are actively being created and utilised in various field contexts. A central finding is users&#x2019; focus on locating specific segments within MVRs, rather than entire recordings, and the need to identify similar segments or their immediate temporal context (F4). The study clearly demonstrates that search methodologies and the relative importance of different data types vary considerably across the identified application scenarios (such as AS15 and AS2), reinforcing the demand for highly flexible, adaptable, and potentially user-tailored MVR retrieval systems capable of diverse query inputs (see <xref ref-type="table" rid="T6">Table 6</xref>). Crucially, the analysis of data types in <xref ref-type="table" rid="T5">Table 5</xref> validates the practical relevance of all components of the proposed MVR taxonomy (MMCO, SRD, and PD) in real-world MVR contexts. This finding underscores the necessity for MVR retrieval systems to handle these distinct yet interconnected data streams. However, the relative importance and utilisation of these data types fluctuate significantly across different application scenarios, highlighting the complex and multifaceted nature of MVR data and the diverse requirements for effective information retrieval in metaverse environments.</p>
</sec>
<sec id="sec7">
<title>Conclusions and outlook</title>
<p>Based on the results of this study, it is now feasible to model the context of use for MVR retrieval and develop a corresponding system design that can be both implemented and evaluated. The findings from the field study allowed for the validation and further detailing of application scenarios. Moreover, the information needs of users are better understood, as well as their information-searching behaviour in relation to MVR retrieval. The study confirms that time-series data, including SRD and PD, are not only technically feasible to capture, but are also highly relevant to user needs.</p>
<p>These results serve as a foundational basis for future research into MVR retrieval systems. For instance, they contribute to defining the context of use for such systems, identifying user stereotypes, and deriving specific use cases and other requirements for designing effective MVR retrieval systems. These contributions provide a structured approach to further development in this domain, ensuring that future systems align more closely with user needs and behaviour.</p>
</sec>
<sec id="sec8">
<title>Copyright</title>
<p>Authors contributing to <italic>Information Research</italic> agree to publish their articles under a <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/"><underline>Creative Commons CC BY-NC 4.0 license,</underline></ext-link> which gives third parties the right to copy and redistribute the material in any medium or format. It also gives third parties the right to remix, transform and build upon the material for any purpose, except commercial, on the condition that clear acknowledgment is given to the author(s) of the work, that a link to the license is provided and that it is made clear if changes have been made to the work. This must be done in a reasonable manner, and must not imply that the licensor endorses the use of the work by third parties. The author(s) retain copyright to the work. You can also read more at: <ext-link ext-link-type="uri" xlink:href="https://publicera.kb.se/ir/openaccess"><underline>https://publicera.kb.se/ir/openaccess</underline></ext-link></p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>The authors thank the experts for their time, valuable insights, and contributions to the study.</p>
</ack>
<app-group>
<app id="app1">
<title>Appendix</title>
<sec id="app1_1">
<title>Questionnaire</title>
<p><xref ref-type="table" rid="A7">Table 7</xref> contains the questions of part 1.</p>
<table-wrap id="A7">
<label>Table 7.</label>
<caption><p>Expert panel questionnaire part 1 &#x2013; Questions on application scenarios.</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top">Identifier</th>
<th align="center" valign="top">Question</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">F1.1</td>
<td align="left" valign="top">Can you identify application scenarios of MVR retrieval within your application domain?<break/>(Goal: Existing MVR retrieval)</td>
</tr>
<tr>
<td align="left" valign="top">F1.2</td>
<td align="left" valign="top">Can you imagine further scenarios beyond the identified application scenarios?<break/>(Goal: Conceivable MVR retrieval)</td>
</tr>
<tr>
<td align="left" valign="top">F2</td>
<td align="left" valign="top">To what extent do you consider the following application scenario conceivable?<break/>(Goal: Validation of modeled MVR retrieval scenarios)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="A8">Table 8</xref> contains the questions of part 2.</p>
<table-wrap id="A8">
<label>Table 8.</label>
<caption><p>Expert panel questionnaire part 2 &#x2013; Questionnaire for application context gathering.</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top">Identifier</th>
<th align="center" valign="top">Question / Response Ideas</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">F3</td>
<td align="left" valign="top">What kind of information do users most likely have at the start of their search to begin the search process?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Keywords</p></list-item>
<list-item><p>Example image</p></list-item>
<list-item><p>Example audio</p></list-item>
<list-item><p>Timestamp</p></list-item>
<list-item><p>Filename</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F11</td>
<td align="left" valign="top">Which components should a user interface for MVR retrieval include for querying?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Keywords</p></list-item>
<list-item><p>Text field for natural language</p></list-item>
<list-item><p>Query language (SPARQL)</p></list-item>
<list-item><p>Image input</p></list-item>
<list-item><p>Sketch</p></list-item>
<list-item><p>Audio input</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F4</td>
<td align="left" valign="top">What type of search result might users be particularly interested in for MVR retrieval?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>A single relevant file</p></list-item>
<list-item><p>A collection of relevant files</p></list-item>
<list-item><p>A specific position within a file</p></list-item>
<list-item><p>Multiple positions within one or more files</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F5</td>
<td align="left" valign="top">What type of media might users be particularly interested in for MVR retrieval?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Chat</p></list-item>
<list-item><p>Audio</p></list-item>
<list-item><p>Image</p></list-item>
<list-item><p>Video</p></list-item>
<list-item><p>Spatial video (3D video)</p></list-item>
<list-item><p>Document</p></list-item>
<list-item><p>Combinations of the above</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F9</td>
<td align="left" valign="top">What kind of file types do the collections to be searched consist of?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Screen recordings</p></list-item>
<list-item><p>Video raw files (engine data)</p></list-item>
<list-item><p>Metadata/Log files</p></list-item>
<list-item><p>Sensor data</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F15</td>
<td align="left" valign="top">What components or combinations thereof should a user interface have for presenting results?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Displaying one file at a time (details)</p></list-item>
<list-item><p>Display all files as a ranked list sorted by relevance (ranked list)</p></list-item>
<list-item><p>Option to filter results based on metadata (e.g., file type, creation date, etc.)</p></list-item>
<list-item><p>Ability to open files (e.g., play back a video file to verify a result)</p></list-item>
<list-item><p>Display additional files based on another similarity metric (recommendations)</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">F6</td>
<td align="left" valign="top">Is MVR retrieval a static or dynamic process?</td>
</tr>
<tr>
<td align="left" valign="top">F8</td>
<td align="left" valign="top">What type of application are MVR retrieval systems likely to be? (Standalone or Embedded)</td>
</tr>
<tr>
<td align="left" valign="top">F7</td>
<td align="left" valign="top">On which technical devices is MVR retrieval most likely to be used?</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top"><list list-type="simple">
<list-item><p>Desktop PC</p></list-item>
<list-item><p>Laptop</p></list-item>
<list-item><p>Smartphone</p></list-item>
<list-item><p>Gaming console</p></list-item>
<list-item><p>Other</p></list-item></list></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="app1_2">
<title>Expert responses</title>
<p>Expert responses are available upon request.</p>
</sec>
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Adhanom</surname><given-names>I.B.</given-names></name><name><surname>MacNeilage</surname><given-names>P.</given-names></name><name><surname>Folmer</surname><given-names>E.</given-names></name></person-group><year>2023</year><article-title>Eye tracking in virtual reality: A broad review of applications and challenges</article-title><source>Virtual Reality</source><volume>27</volume><fpage>1481</fpage><lpage>1505</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10055-022-00738-z">https://doi.org/10.1007/s10055-022-00738-z</ext-link></comment></element-citation></ref>
<ref id="R2"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Alsop</surname><given-names>T.</given-names></name></person-group><year>2023</year><month>August</month><day>31</day><source>Virtual reality (VR)&#x2014;Statistics &amp; facts</source><publisher-name>Statista</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.statista.com/topics/2532/virtual-reality-vr/">https://www.statista.com/topics/2532/virtual-reality-vr/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250920142626/https://www.statista.com/topics/2532/virtual-reality-vr/">https://web.archive.org/web/20250920142626/https://www.statista.com/topics/2532/virtual-reality-vr/</ext-link></comment></element-citation></ref>
<ref id="R3"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Anderson</surname><given-names>J.</given-names></name><name><surname>Rainie</surname><given-names>L.</given-names></name></person-group><year>2022</year><month>June</month><day>30</day><source>The metaverse in 2040</source><publisher-name>Pew Research Center</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.pewresearch.org/internet/2022/06/30/the-metaverse-in-2040/">https://www.pewresearch.org/internet/2022/06/30/the-metaverse-in-2040/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250921093518/https://www.pewresearch.org/internet/2022/06/30/the-metaverse-in-2040/">https://web.archive.org/web/20250921093518/https://www.pewresearch.org/internet/2022/06/30/the-metaverse-in-2040/</ext-link></comment></element-citation></ref>
<ref id="R4"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Austin</surname><given-names>D.</given-names></name></person-group><year>2023</year><month>April</month><day>20</day><source>2023 Internet minute infographic</source><publisher-name>eDiscovery Today by Doug Austin</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://ediscoverytoday.com/2023/04/20/2023-internet-minute-infographic-by-ediscovery-today-and-ltmg-ediscovery-trends/">https://ediscoverytoday.com/2023/04/20/2023-internet-minute-infographic-by-ediscovery-today-and-ltmg-ediscovery-trends/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250823180818/https://ediscoverytoday.com/2023/04/20/2023-internet-minute-infographic-by-ediscovery-today-and-ltmg-ediscovery-trends/">https://web.archive.org/web/20250823180818/https://ediscoverytoday.com/2023/04/20/2023-internet-minute-infographic-by-ediscovery-today-and-ltmg-ediscovery-trends/</ext-link></comment></element-citation></ref>
<ref id="R5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Belkin</surname><given-names>N. J.</given-names></name><name><surname>Marchetti</surname><given-names>P. G.</given-names></name><name><surname>Cool</surname><given-names>C.</given-names></name></person-group><year>1993</year><article-title>BRAQUE: Design of an interface to support user interaction in information retrieval</article-title><source>Information Processing &amp; Management</source><volume>29</volume><issue>3</issue><fpage>325</fpage><lpage>344</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/0306-4573(93)90059-M">https://doi.org/10.1016/0306-4573(93)90059-M</ext-link></comment></element-citation></ref>
<ref id="R6"><element-citation publication-type="web"><person-group person-group-type="author"><collab>Bestie Let&#x2019;s Play (Director)</collab></person-group><year>2022</year><month>October</month><day>16</day><source>Wir verbringen einen Herbsttag mit der Gro&#x00DF;familie!!/Roblox Bloxburg Family Roleplay Deutsch</source><comment>[Video recording]</comment><comment><ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=sslXNBKeqf0">https://www.youtube.com/watch?v=sslXNBKeqf0</ext-link></comment></element-citation></ref>
<ref id="R7"><element-citation publication-type="book"><person-group person-group-type="author"><collab>Gartner Inc</collab></person-group><year>2022</year><month>August</month><day>30</day><source>Metaverse, web3 and crypto: Separating blockchain hype from reality</source><comment>[Interview]</comment><publisher-name>Gartner</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.gartner.com/en/newsroom/press-releases/2022-08-30-metaverse-web3-and-crypto-separating-blockchain-hype-from-reality">https://www.gartner.com/en/newsroom/press-releases/2022-08-30-metaverse-web3-and-crypto-separating-blockchain-hype-from-reality</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250806055732/https://www.gartner.com/en/newsroom/press-releases/2022-08-30-metaverse-web3-and-crypto-separating-blockchain-hype-from-reality">https://web.archive.org/web/20250806055732/https://www.gartner.com/en/newsroom/press-releases/2022-08-30-metaverse-web3-and-crypto-separating-blockchain-hype-from-reality</ext-link></comment></element-citation></ref>
<ref id="R8"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>G&#x00FC;ng&#x00F6;r</surname><given-names>A.</given-names></name></person-group><year>2016</year><month>May</month><day>17</day><source>Video: NVIDIA Ansel architecture explained</source><publisher-name>Technopat Sosyal</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.technopat.net/sosyal/konu/video-nvidia-ansel-architecture-explained.329263/">https://www.technopat.net/sosyal/konu/video-nvidia-ansel-architecture-explained.329263/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20251104185726/https://www.technopat.net/sosyal/konu/video-nvidia-ansel-architecture-explained.329263/">https://web.archive.org/web/20251104185726/https://www.technopat.net/sosyal/konu/video-nvidia-ansel-architecture-explained.329263/</ext-link></comment></element-citation></ref>
<ref id="R9"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Gunkel</surname><given-names>S.</given-names></name><name><surname>Stokking</surname><given-names>H.</given-names></name><name><surname>Prins</surname><given-names>M.</given-names></name><name><surname>Niamut</surname><given-names>O.</given-names></name><name><surname>Siahaan</surname><given-names>E.</given-names></name><name><surname>Cesar</surname><given-names>P.</given-names></name></person-group><year>2018</year><article-title>Experiencing virtual reality together: Social VR use case study</article-title><source>Proceedings of the 2018 ACM International Conference on Interactive Experiences for TV and Online Video</source><fpage>233</fpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3210825.3213566">https://doi.org/10.1145/3210825.3213566</ext-link></comment></element-citation></ref>
<ref id="R10"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Gurrin</surname><given-names>C.</given-names></name></person-group><year>2021</year><article-title>Personal data matters: New opportunities from lifelogs</article-title><source>2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP)</source><fpage>1</fpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/iSAI-NLP54397.2021.9678155">https://doi.org/10.1109/iSAI-NLP54397.2021.9678155</ext-link></comment></element-citation></ref>
<ref id="R11"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Gurrin</surname><given-names>C.</given-names></name><name><surname>Zhou</surname><given-names>L.</given-names></name><name><surname>Healy</surname><given-names>G.</given-names></name><name><surname>Tran</surname><given-names>A.</given-names></name><name><surname>Rossetto</surname><given-names>L.</given-names></name><name><surname>Bailer</surname><given-names>W.</given-names></name><name><surname>Dang-Nguyen</surname><given-names>D.</given-names></name><name><surname>Hodges</surname><given-names>S.</given-names></name><name><surname>J&#x00F3;nsson</surname><given-names>B</given-names></name><name><surname>Tran</surname><given-names>M.</given-names></name><name><surname>Sch&#x00F6;ffmann</surname><given-names>K.</given-names></name></person-group><year>2025</year><month>June</month><article-title>Introduction to the 8th Annual Lifelog Search Challenge, LSC&#x2019;25.</article-title><source>Proceedings of the 2025 International Conference on Multimedia Retrieval</source><fpage>2143</fpage><lpage>2144</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3731715.3734579">https://doi.org/10.1145/3731715.3734579</ext-link></comment></element-citation></ref>
<ref id="R12"><element-citation publication-type="web"><person-group person-group-type="author"><collab>The International Criminal Police Organization (INTERPOL)</collab></person-group><year>2024</year><comment>a</comment><source>Metaverse&#x2014;A law enforcement perspective</source><comment>[Whitepaper]</comment><comment><ext-link ext-link-type="uri" xlink:href="https://www.interpol.int/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime">https://www.interpol.int/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250611211724/https://www.interpol.int/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime">https://web.archive.org/web/20250611211724/https://www.interpol.int/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime</ext-link></comment></element-citation></ref>
<ref id="R13"><element-citation publication-type="web"><person-group person-group-type="author"><collab>The International Criminal Police Organization (INTERPOL)</collab></person-group><year>2024</year><comment>b, January 18</comment><source>Grooming, radicalization and cyber-attacks: INTERPOL warns of &#x2018;Metacrime&#x2019;</source><comment><ext-link ext-link-type="uri" xlink:href="https://www.interpol.int/en/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime">https://www.interpol.int/en/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250906030108/https://www.interpol.int/en/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime">https://web.archive.org/web/20250906030108/https://www.interpol.int/en/News-and-Events/News/2024/Grooming-radicalization-and-cyber-attacks-INTERPOL-warns-of-Metacrime</ext-link></comment></element-citation></ref>
<ref id="R14"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Karlsson</surname><given-names>B.</given-names></name></person-group><year>2018</year><source>RenderDoc</source><comment><ext-link ext-link-type="uri" xlink:href="https://renderdoc.org/">https://renderdoc.org/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20251010074022/https://renderdoc.org/">https://web.archive.org/web/20251010074022/https://renderdoc.org/</ext-link></comment></element-citation></ref>
<ref id="R15"><element-citation publication-type="web"><person-group person-group-type="author"><collab>KZero Worldwide</collab></person-group><year>2024</year><month>February</month><day>6</day><source>Exploring the Q1 24&#x2019; metaverse radar chart: Key findings unveiled - KZero Worldswide</source><comment><ext-link ext-link-type="uri" xlink:href="https://kzero.io/2024/02/06/2633/">https://kzero.io/2024/02/06/2633/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20240206134622/https://kzero.io/2024/02/06/2633/">https://web.archive.org/web/20240206134622/https://kzero.io/2024/02/06/2633/</ext-link></comment></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mei&#x00DF;ner</surname><given-names>M.</given-names></name><name><surname>Pfeiffer</surname><given-names>J.</given-names></name><name><surname>Pfeiffer</surname><given-names>T.</given-names></name><name><surname>Oppewal</surname><given-names>H.</given-names></name></person-group><year>2019</year><article-title>Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research</article-title><source>Journal of Business Research</source><volume>100</volume><fpage>445</fpage><lpage>458</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jbusres.2017.09.028">https://doi.org/10.1016/j.jbusres.2017.09.028</ext-link></comment></element-citation></ref>
<ref id="R17"><element-citation publication-type="book"><person-group person-group-type="author"><collab>Mojang</collab></person-group><year>2023</year><month>January</month><day>31</day><source>Minecraft official website</source><publisher-name>Minecraft.net</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.minecraft.net/de-de">https://www.minecraft.net/de-de</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20251103172205/https://www.minecraft.net/de-de">https://web.archive.org/web/20251103172205/https://www.minecraft.net/de-de</ext-link></comment></element-citation></ref>
<ref id="R18"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Morris</surname><given-names>T.</given-names></name></person-group><year>2022</year><month>May</month><day>3</day><source>Understanding just what&#x2019;s happening with the metaverse?</source><publisher-name>GWI</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://blog.gwi.com/chart-of-the-week/metaverse-predictions/">https://blog.gwi.com/chart-of-the-week/metaverse-predictions/</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20250807010715/https://www.gwi.com/blog/metaverse-predictions">https://web.archive.org/web/20250807010715/https://www.gwi.com/blog/metaverse-predictions</ext-link></comment></element-citation></ref>
<ref id="R19"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mystakidis</surname><given-names>S.</given-names></name></person-group><year>2022</year><article-title>Metaverse</article-title><source>Encyclopedia</source><volume>2</volume><issue>1</issue><fpage>486</fpage><lpage>497</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/encyclopedia2010031">https://doi.org/10.3390/encyclopedia2010031</ext-link></comment></element-citation></ref>
<ref id="R20"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>R&#x00FC;ger</surname><given-names>S.</given-names></name></person-group><year>2010</year><chapter-title>What is multimedia information retrieval?</chapter-title><person-group person-group-type="editor"><name><surname>R&#x00FC;ger</surname><given-names>S.</given-names></name></person-group><source>Multimedia Information Retrieval</source><fpage>1</fpage><lpage>12</lpage><publisher-name>Springer International Publishing</publisher-name><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-3-031-02269-2_1">https://doi.org/10.1007/978-3-031-02269-2_1</ext-link></comment></element-citation></ref>
<ref id="R21"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Steinert</surname><given-names>P.</given-names></name><name><surname>Wagenpfeil</surname><given-names>S.</given-names></name><name><surname>Frommholz</surname><given-names>I.</given-names></name><name><surname>Hemmje</surname><given-names>M. L.</given-names></name></person-group><year>2023</year><chapter-title>Towards the integration of metaverse and multimedia information retrieval</chapter-title><source>IEEE International Conference on Metrology for eXtended Reality</source><publisher-loc>Milano, Italy</publisher-loc><fpage>581</fpage><lpage>586</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/MetroXRAINE58569.2023.10405728">https://doi.org/10.1109/MetroXRAINE58569.2023.10405728</ext-link></comment></element-citation></ref>
<ref id="R22"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Steinert</surname><given-names>P.</given-names></name><name><surname>Wagenpfeil</surname><given-names>S.</given-names></name><name><surname>Frommholz</surname><given-names>I.</given-names></name><name><surname>Hemmje</surname><given-names>M. L.</given-names></name></person-group><year>2024</year><comment>a, October</comment><article-title>256 metaverse records dataset.</article-title><source>Proceedings of the 32nd ACM International Conference on Multimedia</source><fpage>4256</fpage><lpage>4263</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3664647.3681711">https://doi.org/10.1145/3664647.3681711</ext-link></comment></element-citation></ref>
<ref id="R23"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Steinert</surname><given-names>P.</given-names></name><name><surname>Wagenpfeil</surname><given-names>S.</given-names></name><name><surname>Frommholz</surname><given-names>I.</given-names></name><name><surname>Hemmje</surname><given-names>M. L.</given-names></name></person-group><year>2024</year><comment>b, February</comment><article-title>Integration of Metaverse Recordings in Multimedia Information Retrieval.</article-title><source>Proceedings of the 2024 13th International Conference on Software and Computer Applications</source><fpage>137</fpage><lpage>145</lpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3651781.365180">https://doi.org/10.1145/3651781.365180</ext-link></comment></element-citation></ref>
<ref id="R24"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Steinert</surname><given-names>P.</given-names></name><name><surname>Wagenpfeil</surname><given-names>S.</given-names></name><name><surname>Frommholz</surname><given-names>I.</given-names></name><name><surname>Hemmje</surname><given-names>M. L.</given-names></name></person-group><year>2025</year><article-title>Uses of metaverse recordings in multimedia information retrieval</article-title><source>Multimedia</source><volume>1</volume><issue>1</issue><fpage>2</fpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/multimedia1010002">https://doi.org/10.3390/multimedia1010002</ext-link></comment></element-citation></ref>
<ref id="R25"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Stephenson</surname><given-names>N.</given-names></name></person-group><year>1992</year><source>Snow crash</source><publisher-name>Bantam Books</publisher-name></element-citation></ref>
<ref id="R26"><element-citation publication-type="web"><person-group person-group-type="author"><name><surname>Stothard</surname><given-names>P.</given-names></name><name><surname>Ryan</surname><given-names>P.</given-names></name><name><surname>Kurata</surname><given-names>T.</given-names></name><name><surname>Stapleton</surname><given-names>D.</given-names></name></person-group><year>2024</year><article-title>Towards a mining metaverse</article-title><source>Mining Technology: Transactions of the Institutions of Mining and Metallurgy</source><fpage>25726668241242232</fpage><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/25726668241242232">https://doi.org/10.1177/25726668241242232</ext-link></comment></element-citation></ref>
<ref id="R27"><element-citation publication-type="web"><person-group person-group-type="author"><collab>Wikipedia</collab></person-group><year>2023</year><comment>a</comment><article-title>Augmented reality</article-title><source>Wikipedia</source><comment><ext-link ext-link-type="uri" xlink:href="https://en.wikipedia.org/w/index.php?title=Augmented_reality&amp;oldid=1142793928">https://en.wikipedia.org/w/index.php?title=Augmented_reality&amp;oldid=1142793928</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20240903103104/https://en.wikipedia.org/w/index.php?title=Augmented_reality&amp;oldid=1142793928">https://web.archive.org/web/20240903103104/https://en.wikipedia.org/w/index.php?title=Augmented_reality&amp;oldid=1142793928</ext-link></comment></element-citation></ref>
<ref id="R28"><element-citation publication-type="web"><person-group person-group-type="author"><collab>Wikipedia</collab></person-group><year>2023</year><comment>b</comment><article-title>Roblox</article-title><source>Wikipedia</source><comment><ext-link ext-link-type="uri" xlink:href="https://en.wikipedia.org/w/index.php?title=Roblox&amp;oldid=1177660840">https://en.wikipedia.org/w/index.php?title=Roblox&amp;oldid=1177660840</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20241005054004/https://en.wikipedia.org/w/index.php?title=Roblox&amp;oldid=1177660840">https://web.archive.org/web/20241005054004/https://en.wikipedia.org/w/index.php?title=Roblox&amp;oldid=1177660840</ext-link></comment></element-citation></ref>
<ref id="R29"><element-citation publication-type="web"><person-group person-group-type="author"><collab>Wikipedia</collab></person-group><year>2024</year><article-title>AltspaceVR</article-title><source>Wikipedia</source><comment><ext-link ext-link-type="uri" xlink:href="https://en.wikipedia.org/w/index.php?title=AltspaceVR&amp;oldid=1248210194">https://en.wikipedia.org/w/index.php?title=AltspaceVR&amp;oldid=1248210194</ext-link></comment><comment>Archived at</comment><comment><ext-link ext-link-type="uri" xlink:href="https://web.archive.org/web/20251104192245/https://en.wikipedia.org/w/index.php?title=AltspaceVR&amp;oldid=1248210194">https://web.archive.org/web/20251104192245/https://en.wikipedia.org/w/index.php?title=AltspaceVR&amp;oldid=1248210194</ext-link></comment></element-citation></ref>
<ref id="R30"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Zheng</surname><given-names>Z.</given-names></name><name><surname>Li</surname><given-names>T.</given-names></name><name><surname>Li</surname><given-names>B.</given-names></name><name><surname>Chai</surname><given-names>X.</given-names></name><name><surname>Song</surname><given-names>W.</given-names></name><name><surname>Chen</surname><given-names>N.</given-names></name><name><surname>Zhou</surname><given-names>Y.</given-names></name><name><surname>Lin</surname><given-names>Y.</given-names></name><name><surname>Li</surname><given-names>R.</given-names></name></person-group><year>2022</year><chapter-title>Industrial metaverse: Connotation, features, technologies, applications and challenges</chapter-title><person-group person-group-type="editor"><name><surname>Fan</surname><given-names>W.</given-names></name><name><surname>Zhang</surname><given-names>L.</given-names></name><name><surname>Li</surname><given-names>N.</given-names></name><name><surname>Song</surname><given-names>X.</given-names></name></person-group><source>Methods and Applications for Modeling and Simulation of Complex Systems</source><fpage>239</fpage><lpage>263</lpage><publisher-name>Springer Nature Singapore</publisher-name><comment><ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/978-981-19-9198-1_19">https://doi.org/10.1007/978-981-19-9198-1_19</ext-link></comment></element-citation></ref>
</ref-list>
</back>
</article>