<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47290</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47290</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>&#x2018;Sora is incredible and scary&#x2019;: public perceptions and governance challenges of text-to-video generative AI models</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Zhou</surname><given-names>Kyrie Zhixuan</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Choudhry</surname><given-names>Abhinav</given-names></name>
<xref ref-type="aff" rid="aff0002"/></contrib>
<contrib contrib-type="author"><name><surname>Gumusel</surname><given-names>Ece</given-names></name>
<xref ref-type="aff" rid="aff0003"/></contrib>
<contrib contrib-type="author"><name><surname>Sanfilippo</surname><given-names>Madelyn Rose</given-names></name>
<xref ref-type="aff" rid="aff0004"/></contrib>
<aff id="aff0001"><bold>Kyrie Zhixuan Zhou</bold> is a PhD Candidate in the School of Information Sciences, University of Illinois Urbana-Champaign. He received his bachelor&#x2019;s degree from Wuhan University. His research interests are in human-computer interaction, computer accessibility, and AI ethics. He can be contacted at <email xlink:href="zz78@illinois.edu">zz78@illinois.edu</email></aff>
<aff id="aff0002"><bold>Abhinav Choudhry</bold> is a PhD Candidate in Information Sciences, University of Illinois Urbana-Champaign, Illinois, USA. He received his master&#x2019;s from Cornell University and his research interests relate to the deployment of AI and software applications to health, financial literacy, and planning, particularly for older adults. He can be contacted at <email xlink:href="ac62@illinois.edu">ac62@illinois.edu</email></aff>
<aff id="aff0003"><bold>Ece Gumusel</bold> is a Ph.D. candidate in Information Science with a minor in Computer Science at the Luddy School of Informatics, Computing, and Engineering at Indiana University, Bloomington. Her doctoral research focuses on user privacy dynamics in conversational text-based AI chatbots through mixed-method approaches. Her research agenda encompasses areas such as usable privacy, privacy compliance, human-computer interaction, and social informatics. She received her LL.M. (Master of Laws) in Intellectual Property and Technology Law at the University of Illinois at Urbana-Champaign, and her LL.B. (Bachelor of Laws) at Ba&#x015F;kent University. She can be contacted at <email xlink:href="egumusel@iu.edu">egumusel@iu.edu</email></aff>
<aff id="aff0004"><bold>Madelyn Rose Sanfilippo</bold> is an Assistant Professor in the School of Information Sciences, University of Illinois, Urbana-Champaign USA. She received her Ph.D. from Indiana University, and her research focuses on governance in sociotechnical systems. She can be contacted at <email xlink:href="madelyns@illinois.edu">madelyns@illinois.edu</email></aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>508</fpage>
<lpage>522</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> Text-to-video generative AI models such as Sora OpenAI have the potential to disrupt multiple industries. In this paper, we report a qualitative social media analysis aiming to uncover people&#x2019;s perceived impact of and concerns about Sora&#x2019;s integration.</p>
<p><bold>Method.</bold> We collected and analysed comments (N=292) under popular posts about (1) Sora-generated videos, (2) a comparison between Sora videos and Midjourney images, and (3) artists&#x2019; complaints about copyright infringement by generative AI.</p>
<p><bold>Analysis.</bold> We employed a thematic analysis to analyse the collected comments.</p>
<p><bold>Results.</bold> We found that people were most concerned about Sora&#x2019;s impact on content creation-related industries. Governance challenges included the for-profit nature of OpenAI, the blurred boundaries between real and fake content, human autonomy, data privacy, copyright issues, and environmental impact. Potential regulatory solutions proposed by people included law-enforced labelling of AI content and AI literacy education for the public.</p>
<p><bold>Conclusions.</bold> Based on the findings, we discuss the importance of gauging people&#x2019;s tech perceptions early and propose policy recommendations to regulate Sora before its public release. </p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>As artificial intelligence (AI) continues to permeate daily life, advanced AI systems raise significant questions and prompt diverse social reactions. In particular, the remarkable advancements of generative AI (GenAI) have transformed the domain of human-computer interaction (HCI) (<xref rid="R25" ref-type="bibr">Muller et al., 2022</xref>). Sora OpenAI (Sora) is a recently announced advanced GenAI tool capable of crafting realistic videos based on textual input (<xref rid="R32" ref-type="bibr">Sun et al., 2024</xref>; <xref rid="R36" ref-type="bibr">Wang et al., 2024</xref>), which has sparked considerable interest and speculation regarding its potential impact on societal dynamics, governance structures, and social norms. Sora is equipped with sophisticated text-to-video generative abilities and holds the potential to revolutionize industries (<xref rid="R12" ref-type="bibr">Guan et al., 2024</xref>), reshape social interactions, and redefine the boundaries of human-AI collaboration (<xref rid="R33" ref-type="bibr">Waisberg et al., 2024</xref>).</p>
<p>As society begins to integrate Sora and similar generative AI tools into various domains, such as healthcare, education, and art, the need to comprehend the intricate dynamics of public perceptions and the broader societal implications becomes increasingly urgent. Effectively navigating the complex intersection of technology and society requires a deep understanding of the ethical, social, and cultural dimensions that accompany AI integration. Addressing initial reactions and concerns may mitigate the potential risks and safeguard the values and interests of individuals and communities after Sora&#x2019;s release. In this study, we collected Sora-related discussions from X (formerly Twitter). In addition to discussions about Sora-generated videos, we also included discussions related to a comparison between Sora videos and Midjourney images, and discussions on ethics initiated by an artist whose work was copied by generative AI, in our analysis. We additionally focused on governance solutions to people&#x2019;s expressed concerns. These aspects can potentially help envision regulatory frameworks to control the implications of Sora and GenAI in general.</p>
<p>We aspired to answer the following research questions:
<list list-type="bullet">
<list-item><p><bold>RQ1</bold>. What are the primary reactions, opinions, and concerns people express regarding Sora&#x2019;s integration?</p></list-item>
<list-item><p><bold>RQ2</bold>. What are the perceived implications of Sora for governance structures and social norms?</p></list-item>
</list></p>
<p>We contribute to the emerging literature on Sora and text-to-video GenAI in two ways. First, we explored people&#x2019;s reactions to and concerns about Sora&#x2019;s integration and their potential solutions to prevent its negative impact. Second, we proposed policy recommendations to address some of the concerns expressed, namely, AI watermarking, leveraging existing laws, enhancing Sora&#x2019;s accessibility, and adopting an entrepreneurial approach to regulation.</p>
</sec>
<sec id="sec2">
<title>Related work: benefits, challenges, and public perceptions of Sora</title>
<p>Sora is an AI tool for text-to-video generation by OpenAI, whose demos show surprisingly good quality (<xref rid="R14" ref-type="bibr">In, 2024</xref>). Sora has blurred the lines between tangible and virtual realms and even <italic>&#x2018;prompted viewers to ponder their very existence</italic>&#x2019; (<xref rid="R6" ref-type="bibr">Cavalcante, 2024</xref>). <xref rid="R21" ref-type="bibr">Li et al. (2024b)</xref> created a benchmark to assess the quality of AI-generated videos based on their adherence to real-world physics principles and showed Sora&#x2019;s advantage over other text-to-video generation tools. In recent literature, scholars have predominantly focused on the challenges, benefits, and public perceptions of Sora.</p>
<sec id="sec2_1">
<title>Benefits</title>
<p>Sora is poised to disrupt multiple industries, including movies, education, gaming, healthcare, and robotics, with potential benefits for individuals (<xref rid="R23" ref-type="bibr">Liu et al., 2024</xref>). According to <xref rid="R22" ref-type="bibr">Lin (2024)</xref>, Sora may serve to broaden access to mental healthcare services. In the realm of robotics, Sora&#x2019;s capability to simulate intricate world dynamics proves crucial for advancements in autonomous driving (<xref rid="R12" ref-type="bibr">Guan et al., 2024</xref>), such as scenario generation (<xref rid="R20" ref-type="bibr">Li et al., 2024a</xref>). Democratization of art creation is a notable affordance of Sora &#x2013; &#x2018;<italic>Everyone will be a filmmaker</italic>&#x2019; (<xref rid="R8" ref-type="bibr">Cowen, 2024</xref>) &#x2013; which holds promise for broadening participation in artistic endeavours. Researchers speculate that medical professionals can use Sora to create informative videos for patients and the public (<xref rid="R33" ref-type="bibr">Waisberg et al., 2024</xref>). Sora can also become a potentially powerful tool for education and libraries, offering opportunities for diverse learning modalities, creativity, and critical thinking (<xref rid="R1" ref-type="bibr">Adetayo et al., 2024</xref>).</p>
</sec>
<sec id="sec2_2">
<title>Challenges</title>
<p>Sora may introduce ethical quandaries concerning privacy and security, copyright, disinformation, and truth integrity. AI&#x2019;s enhanced video generation capabilities may ease the creation of adversarial attacks (<xref rid="R13" ref-type="bibr">Hannon et al., 2024</xref>). Copyright infringement is a paramount concern raised by Sora &#x2013; it might have leveraged copyrighted materials such as videos for training (<xref rid="R15" ref-type="bibr">Karaarslan &#x0026; Ayd&#x0131;n, 2024</xref>). Other concerns raised by researchers included misuse of AI-generated content (e.g., deepfake videos) and environmental costs. <xref rid="R29" ref-type="bibr">Rodr&#x00ED;guez (2024)</xref> discussed the likely &#x2018;ultimate solipsism&#x2019; introduced by the rise of &#x2018;self-movies,&#x2019; where protagonists are the creators themselves, with implications of quality degradation. Moreover, <xref rid="R6" ref-type="bibr">Cavalcante (2024)</xref> noted the duality of Sora, arguing that it simultaneously promises democratized creation, i.e., anyone can become a visual storyteller regardless of their technical or financial status, and raises ethical dilemmas such as disinformation that put the integrity of truth at risk.</p>
</sec>
<sec id="sec2_3">
<title>Public perceptions</title>
<p>Researchers started to investigate public perceptions of Sora before its release. In the work of <xref rid="R24" ref-type="bibr">Mogavi et al. (2024)</xref>, both people&#x2019;s envisioned applications of Sora and their concerns about this emerging technology were captured through a qualitative analysis of Reddit discussions. People predicted that Sora would democratize video marketing and gaming content production, enhance educational content by generating illustrative videos, and bolster people&#x2019;s creativity and storytelling abilities. At the same time, people worried that Sora would threaten creative and artistic jobs and perpetuate social biases inherited from training datasets. We extend this work by examining discussions on a different online platform, comparing discussions around Sora and Midjourney, and focusing on strategies to address governance challenges.</p>
</sec>
</sec>
<sec id="sec3">
<title>Methodology</title>
<sec id="sec3_1">
<title>Data collection</title>
<p>To address our research questions, we manually gathered 292 comments under four popular posts on X discussing public perceptions of different aspects of Sora. The first and second posts contained Sora videos released by OpenAI &#x2013; the first post was by a prominent tech influencer, while the second post was by OpenAI itself, potentially capturing views from different stakeholders. The third post compared Sora videos and Midjourney pictures created with the same prompts, emphasizing visual similarity. By including discussion about and comparisons with Midjourney, a text-to-image AI tool, we might be able to uncover how text-to-video models like Sora present unique challenges to AI regulation and society compared to text-to-image models. The fourth post was by an artist complaining about Midjourney&#x2019;s copying behaviour, highlighting regulatory challenges. The four posts were chosen to represent different stakeholders (OpenAI, tech influencers, artists, and the general public) and to compare different GenAI models (text-to-video vs text-to-image models). The selected posts exhibited high levels of engagement, indicated by large numbers of views and comments. Descriptions of the posts are presented in <xref ref-type="table" rid="T1">Table 1</xref>. The comments were put in a hierarchical structure to reflect interaction dynamics. Our dataset includes 93, 139, 39, and 21 comments, respectively, from these posts. We arrived at this truncated list by excluding comments that contained little information. For example, although there were nearly 10,000 comments under Post 2 by OpenAI by early March when we collected the data, most were short and uninformative (e.g., &#x2018;<italic>This is great</italic>&#x2019;). </p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>Metadata about four X posts</p></caption>
<table>
<thead>
<tr>
<th align="center" valign="top"><bold>No.</bold></th>
<th align="center" valign="top"><bold>Poster</bold></th>
<th align="center" valign="top"><bold>Topic</bold></th>
<th align="center" valign="top"><bold>Post Time</bold></th>
<th align="center" valign="top"><bold>Number of Views</bold></th>
<th align="center" valign="top"><bold>Number of Comments</bold></th>
<th align="center" valign="top"><bold>Number of Retweets</bold></th>
<th align="center" valign="top"><bold>Number of Likes</bold></th>
<th align="center" valign="top"><bold>Number of Bookmarks</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top">1</td>
<td align="center" valign="top">Tech influencer</td>
<td align="center" valign="top">Sora videos</td>
<td align="center" valign="top">Feb 15, 2024</td>
<td align="center" valign="top">9.4M</td>
<td align="center" valign="top">437</td>
<td align="center" valign="top">3.2K</td>
<td align="center" valign="top">19K</td>
<td align="center" valign="top">8K</td>
</tr>
<tr>
<td align="center" valign="top">2</td>
<td align="center" valign="top">OpenAI</td>
<td align="center" valign="top">Sora videos</td>
<td align="center" valign="top">Feb 15, 2024</td>
<td align="center" valign="top">94.3M</td>
<td align="center" valign="top">9.9K</td>
<td align="center" valign="top">76K</td>
<td align="center" valign="top">142K</td>
<td align="center" valign="top">40K</td>
</tr>
<tr>
<td align="center" valign="top">3</td>
<td align="center" valign="top">Tech influencer</td>
<td align="center" valign="top">Sora videos vs Midjourney images</td>
<td align="center" valign="top">Feb 16, 2024</td>
<td align="center" valign="top">11.2M</td>
<td align="center" valign="top">674</td>
<td align="center" valign="top">3.9K</td>
<td align="center" valign="top">31K</td>
<td align="center" valign="top">11K</td>
</tr>
<tr>
<td align="center" valign="top">4</td>
<td align="center" valign="top">Artist</td>
<td align="center" valign="top">Midjourney copying</td>
<td align="center" valign="top">Mar 8, 2024</td>
<td align="center" valign="top">2.7M</td>
<td align="center" valign="top">127</td>
<td align="center" valign="top">9.2K</td>
<td align="center" valign="top">63K</td>
<td align="center" valign="top">3.2K</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec3_2">
<title>Data analysis</title>
<p>We employed a thematic analysis (<xref rid="R2" ref-type="bibr">Braun &#x0026; Clarke, 2012</xref>) to analyse the comments. The thematic analysis involved identifying recurring themes, patterns, and categories within the comments, enabling us to gain insights into the predominant discussion topics. Through qualitative analysis, we aimed to uncover the diversity of opinions, concerns, and perspectives expressed by users regarding Sora (RQ1) and its implications for governance and society (RQ2). Emerging themes included people&#x2019;s assessment of Sora videos, Sora&#x2019;s impact on industries and professions, governance challenges, and potential solutions to these challenges. Under governance challenges, subthemes included OpenAI&#x2019;s for-profit nature, replacing humans, data copyright issues, and fabricated content. We discussed regularly to reach consensus on the themes and organized the themes and subthemes into a hierarchical structure. Below, we use anonymized quotes to illustrate our findings. We retained grammatical errors and typos in the quotes but partially censored swear words with [***] and removed emojis.</p>
</sec>
</sec>
<sec id="sec4">
<title>Findings</title>
<p>We generated a word cloud based on our collected data using NVivo R.14.23.0, highlighting discussions on art/artists, laws, and content, as shown in <xref ref-type="fig" rid="F1">Figure 1</xref>, with larger font sizes indicating more mentions in the dataset.</p>
<fig id="F1">
<label>Figure 1.</label>
<caption><p>Word cloud based on total mentions</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="images\c42-fig1.jpg"><alt-text>none</alt-text></graphic>
</fig>
<sec id="sec4_1">
<title>Varied assessment of video quality</title>
<p>People had varied assessments of the video quality Sora afforded. While some regarded Sora&#x2019;s video generation capability as revolutionary, others pointed out failures in simulating the physical world and a lack of creativity.</p>
<p>Many people were amazed by Sora&#x2019;s video generation quality: &#x2018;<italic>These videos look so real,</italic>&#x2019; &#x2018;<italic>I cannot believe we are here already.&#x2019;</italic> Some people particularly praised the physical features captured in videos, &#x2018;<italic>the physics etc are just too good&#x2019;.</italic> Sora was seen as a revolutionary technology, &#x2018;<italic>It has come to a point where it is hard to imagine what AI will be able to do next&#x2019;.</italic> Some even suggested AI may have had access to alternative realities,
<disp-quote>
<p><italic>what if the AI has already access to alternative realities and is capturing those footages out there and that&#x2019;s the reason it looks so realistic, and we cannot see the difference</italic>.</p>
</disp-quote></p>
<p>To some, the video quality was not as impressive, &#x2018;<italic>I don&#x2019; think it even considers physics in that sense, like if it were a game engine&#x2019;.</italic> Failures in capturing physics were pointed out, e.g., &#x2018;<italic>Dog skins and snow doesn&#x2019;t follow physics correctly</italic>&#x2019;. Many people found that the walking woman&#x2019;s legs switched sides several times in a video. Some people thought the Sora videos were merely a product of the huge computational power or datasets. Some people were sceptical and thought OpenAI might have used videos that matched the prompt, &#x2018;<italic>How do we know its not just giving out unaltered training data that matches the prompt? a dog in the snow video is not uncommon.</italic>&#x2019;</p>
<p>When comparing Sora and Midjourney, people found they converged to similar output, &#x2018;<italic>This just shows that most foundational models are going to converge to a similar output. This should give some AI hype pause&#x2019;.</italic> Proprietary data was seen as a key to training high-performance models, whereas all models currently are trained on similar public data, &#x2018;<italic>It&#x2019;s now down to who has the most robust proprietary data. The issue now is all these models are trained off the same data sets&#x2019;.</italic> With all the AI-generated content online as potential training data, the convergence would only get worse, &#x2018;<italic>Especially as the internet becomes filled with &#x201C;noise&#x201D; in the form of AI outputs, convergence seems inevitable&#x2019;.</italic></p>
</sec>
<sec id="sec4_2">
<title>Impact on industries and professions</title>
<p>Many commenters noted Sora&#x2019;s potential to disrupt the movie and content creation industries. The quote &#x2018;<italic>Sora is incredible and scary&#x2019;</italic> captured this ambivalence.</p>
<p>People agreed that the movie industry would be directly impacted by video generation AI like Sora, &#x2018;<italic>Hollywood is about to implode and go thermonuclear,</italic>&#x2019; &#x2018;<italic>RIP the entertainment industry. In a few months, people will make &#x201C;Avatar&#x201D; on their phones&#x2019;.</italic> One person even imagined the collaboration between Sora and Midjourney, &#x2018;<italic>Midjourney can generate the thumbnails while Sora the movie&#x2019;.</italic> Sora may democratise movie making, making everyone an artist, which has unpredictable implications. One person raved about this democratization process,
<disp-quote>
<p><italic>thank you, OpenAI has literally democratized cinema. Now, everyone can be an artist and create their own movies going forward. This will drastically reduce the exorbitant expenses of Hollywood, especially for low-quality content with high costs</italic>.</p>
</disp-quote></p>
<p>However, the democratization of movie creation may also lead to low-quality content,
<disp-quote>
<p><italic>democratize: Discard all talent through stealing people&#x2019;s work then allow anyone who don&#x2019;t know a thing about the art to create their own slop. Wow I love this new democratized cinema world.</italic></p>
</disp-quote></p>
<p>This impact may extend to the rest of the media production industry, content creators, and artists. On the bright side, Sora may enable writers to do new forms of storytelling,
<disp-quote>
<p><italic>perhaps one day regular people, as authors, will be able to create feature-length films from (only) their writing. It would be a refreshing restart to finally wrestle the art form away from Hollywood and allow real stories to be told again.</italic></p>
</disp-quote></p>
<p>Sora may democratize content creation for average people,
<disp-quote>
<p><italic>awesome tool. Tools like this integrated with no-code will truly democratize what regular people can create and push out online. This stuff is amazing and that exponential growth this year will be insane.</italic></p>
</disp-quote></p>
<p>The other side of this coin, however, is that Sora may cause job losses for artists, &#x2018;<italic>I don&#x2019;t think y&#x2019;all realize how many artists you&#x2019;re f[***] over right now</italic>&#x2019;; the rise of AI content farms,
<disp-quote>
<p><italic>i would like to miss it because it threatens every single creative profession. Music creation, art, story writing, screenwriting, photography, and film. Unless you are at the highest production level where protections exist, it&#x2019;ll be impossible for human creators to get traction</italic>,</p>
</disp-quote></p>
<p>and the frustration of art students,
<disp-quote>
<p><italic>AI is making me feel I&#x2019;m wasting my time studying arts. At this point I fear my future is gonna be affected by this medium, the fact that this fear is getting stronger with each new i see about AI &#x201C;progressing&#x201D; sickens me, and I&#x2019;m sure I&#x2019;m not the only one.</italic></p>
</disp-quote></p>
<p>Some people recognized the unfair game between GenAI and human artists, i.e., GenAI has been stealing artists&#x2019; work as its training data, &#x2018;<italic>It hasn&#x2019;t democratised anything. It has stolen people&#x2019;s data without consent to profit of it. Everyone could already be an artist some are just too lazy to&#x2019;.</italic> When an artist shared her experience of being copied by Midjourney, another artist was frustrated and questioned why AI evangelists could not see this dark side of GenAI, &#x2018;<italic>Breaks my heart. I&#x2019;m an amateur artist. I&#x2019;m sure I&#x2019;d be pissed if other people bastardised my professional efforts. I simply can&#x2019;t see why the AI evangelicals don&#x2019;t see it&#x2019;.</italic></p>
<p>Some were optimistic despite the negative impacts, believing Sora would not replace artists and content creators in the short term. Sora was not regarded as creative, &#x2018;<italic>It&#x2019;s never replacing real artists. Machines only do loops so that means that they only repeat what has been made. Only Humans are truly creative&#x2019;</italic>. Sora did not recreate the human touch, &#x2018;<italic>Even as machinery comes to the scene, there&#x2019;s always gonna be a demand for man-made stuff. It can never truly recreate that human touch so integral to the craft&#x2019;</italic>.</p>
</sec>
<sec id="sec4_3">
<title>Governance challenges</title>
<p>Governance challenges discussed included OpenAI&#x2019;s for-profit nature, the authenticity of content, the replacement of human jobs, training data, power consumption, and the copyright of AI-generated content.</p>
<sec id="sec4_3_1">
<title>For-profit OpenAI</title>
<p>People critiqued OpenAI&#x2019;s secrecy and for-profit nature: &#x2018;<italic>What shocks me is not just the Quality but the level of Secrecy that OpenAI was able to maintain around this project is insanely impressive&#x2019;.</italic> Others thought OpenAI did not aim to research AI for the public good but only pursued profits:
<disp-quote>
<p><italic>ah yes for a company supposedly dedicated to researching AI and bringing it to the public I sure love the level of secrecy involved at the company, and it&#x2019;s not at all terrifying that a company with this level of secrecy wants 7 trillion dollars for further AI development.</italic></p>
</disp-quote></p>
<p>People thought OpenAI did not care about the ethical concerns Sora raised:
<disp-quote>
<p><italic>they just don&#x2019;t care. And they don&#x2019;t care about the slew of fake-videos and propaganda and computer-generated porn of real people and all of the people they&#x2019;ll hurt that way. They don&#x2019;t care about anything as long as they get to line their pockets.</italic></p>
</disp-quote></p>
</sec>
<sec id="sec4_3_2">
<title>What is real? Falsified evidence, pornography, and propaganda</title>
<p>Some people claimed that they would no longer &#x2018;<italic>trust for videos on the web</italic>&#x2019; since &#x2018;<italic>nothing is real with AI&#x2019;.</italic> It was hard to tell AI- vs human-generated videos, &#x2018;<italic>Now people are releasing real videos saying it&#x2019;s AI, no one knows the difference.</italic>&#x2019; However, others argued that things were not real even before AI,
<disp-quote>
<p><italic>you don&#x2019;t know what&#x2019;s real no.... If you&#x2019;re taking anything you see in the mainstream media at face value, then idk what to tell you. 99% of it is spin, bias, or even flat out wrong. AI used for nefarious purposes will just be an extension of what&#x2019;s already happening.</italic></p>
</disp-quote></p>
<p>People also expressed that humans have always been bad at distinguishing between fake and real, &#x2018;<italic>I mean, most people fall for the worst faked videos all the time...so, nothing&#x2019;s really changed</italic>&#x2019;. They thought technologies were not to blame, &#x2018;<italic>Lighters allow anyone to set a fire anywhere. This dangerous tool should be strictly controlled, otherwise we will all die in fires. how right it sounds&#x2019;.</italic> Technological solutions were believed to come to the rescue, &#x2018;<italic>A guy said years ago on some podcast i watched, as good as technology is at making something fake there&#x2019;s as good technology identifying if somethings fake&#x2019;.</italic></p>
<p>Some people raised concerns about falsified videos as court evidence, &#x2018;<italic>this is all fun and games until you end up in court watching a 60 second video evidence of yourself committing a crime you&#x2019;ve never done.</italic>&#x2019; Another person replied with actionable steps to falsify such a video,
<disp-quote>
<p><italic>now you can Google a random person&#x2019;s name, get just a profile pic and in a few short sentences create a video of that person doing anything you want. You only need a starting photo, rather than a full video of someone actually committing the crime to add their face to.</italic></p>
</disp-quote></p>
<p>Some feared that innocent people may be put in jail, &#x2018;<italic>This tech will put people in prison with falsified evidence. Not there yet, but not far off at all&#x2019;.</italic></p>
<p>Video generation was also seen to facilitate easy creation of porn, &#x2018;<italic>Porn is about to get crazy!</italic>&#x2019;, &#x2018;<italic>Porn industry about to go spiral into alternate universes of weird.</italic>&#x2019; With simple prompts, video generation AI such as Sora will be able to generate pornography,
<disp-quote>
<p><italic>I&#x2019;m obviously missing something. Please explain to me in simple words why all this &#x201C;hyper-realistic&#x201D; imaging is interesting for anyone except criminals and pornographers. Do you really think &#x201C;a fat girl from Ohio with her father&#x2019;s video-camera&#x201D; (Coppola) will create a true masterpiece?</italic></p>
</disp-quote></p>
<p>Implications for child pornography are concerning, &#x2018;<italic>you are facilitating child pornography and you know that (: my question is how you people are able to sleep at night knowing what you&#x2019;re doing&#x2019;.</italic></p>
<p>Video generation AI may also interfere with elections through propaganda videos, &#x2018;<italic>Fabricated political propaganda will be all over the internet</italic>&#x2019;, &#x2018;<italic>Looks like the 2024 election is going to be fun&#x2019;.</italic> Another person disagreed, arguing that not AI but the underlying fragile democratic system was to blame,
<disp-quote>
<p><italic>if generated 10 second videos endanger democracy even though it&#x2019;s public knowledge that such video generation is possible, then don&#x2019;t you think the system was brittle to begin? I don&#x2019;t think generative AI is to blame for that.</italic></p>
</disp-quote></p>
</sec>
<sec id="sec4_3_3">
<title>Replacing humans and small businesses</title>
<p>People complained that instead of benefiting humans, AI was often used to replace poor people,
<disp-quote>
<p><italic>AI is a good representation of how humanity fails to use innovation to progress. We COULD automate menial jobs, implement UBI, and give humans the freedom to create and invent, but instead we replace The Poors&#x2122; with barely functioning machines and then leave human to starve.</italic></p></disp-quote></p>
<p>AI was seen as squeezing people into ever-smaller boxes of job options,
<disp-quote>
<p><italic>each new automation technology squeezes humanity into a smaller box; and then you see new standard of &#x201C;normal&#x201D; people complain about increases in clinical diagnoses for types of people who no longer have a modern purpose.</italic></p>
</disp-quote></p>
<p>There was a heated debate over human autonomy versus technological evolution. On the one hand, &#x2018;<italic>You&#x2019;re literally hurting jobs with this&#x2019;.</italic> On the other hand, &#x2018;<italic>OK let&#x2019;s stop technological evolution because...jobs&#x2019;.</italic> Optimists believed AI could not easily take jobs requiring creativity and a human touch,
<disp-quote>
<p><italic>analytical/automated/repetitive tasks are likely to be replaced regardless. Things AI can&#x2019;t do: imagination, creativity, drawing on life experiences, intuition, emotional connection, manual dexterity, etc. Those jobs will receive more boost in the future.</italic></p>
</disp-quote></p>
<p>According to them, AI could free people from monotonous jobs, &#x2018;<italic>AI could be used to reduce the human necessity to work monotonous jobs and allow us to explore our creativity&#x2019;.</italic> People were more assured after noticing similar generated content by Sora and Midjourney with the same prompt &#x2013; they believed that GenAI was not creative and would not replace human workers,
<disp-quote>
<p><italic>i know folks are worried, but AI is going to just fool some CEOs into laying off in the short term till they realize no one will buy what is essentially uninspired clip art. Going to suck until the CEOs rehire folks though.</italic></p>
</disp-quote></p>
<p>Small businesses without comparable computational power were seen as likely to be killed by tech giants such as OpenAI, &#x2018;<italic>OpenAI just can&#x2019;t stop killing startups&#x2019;.</italic> &#x2018;<italic>Sam Altman has more founders blood on his hand than any other CEO in history of mankind . . .</italic>&#x2019;. &#x2018;<italic>How are we supposed to keep working on our little SAAS product after this? I&#x2019;d be less anxious even if aliens landed on Earth&#x2019;.</italic></p>
</sec>
<sec id="sec4_3_4">
<title>Copyright of input and output data</title>
<p>Training data was a major concern for many. Some feared that YouTube or TikTok videos had been used for training, with permission written into the platforms&#x2019; terms of service,
<disp-quote>
<p><italic>my conspiracy is I feel TikTok, etc has been built to feed these ai tools and there&#x2019;s language in the terms that allows them to own certain rights to videos uploaded.</italic></p>
</disp-quote></p>
<p>Overall, people thought OpenAI was not transparent about where they obtained the training data and suspected they had scraped images and videos from the internet without consent or &#x2018;<italic>used copyrighted material without paying for it</italic>&#x2019;. People may unintentionally give up their data rights by not reading the terms of service closely. When one person asked, &#x2018;<italic>who approved their content to be scraped and put into the training?</italic>&#x2019; Others replied, &#x2018;<italic>You did. When you accepted the terms of service,&#x2019;</italic> &#x2018;<italic>Read the entire terms of service.</italic>&#x2019;</p>
<p>The similar output of Midjourney and Sora indicates the importance of proprietary training data sets such as those held by YouTube or X, &#x2018;<italic>All things being equal, all models eventually get to parity without proprietary training data sets.</italic>&#x2019; Some thought Google had an advantage given the massive amount of data it owns, &#x2018;<italic>Idk how the legal stuff would work but google is sitting on a f[***] goldmine.</italic>&#x2019; However, others pointed out that training on social media data may introduce bias into AI, as it learns human bias,
<disp-quote>
<p><italic>using social media to train your model on is very dangerous. It has been tried in the past and the ai always was trained by users to become racist within hours.</italic></p>
</disp-quote></p>
<p>Further, training on artists&#x2019; and content creators&#x2019; work, or normal users&#x2019; content, was seen as stealing, &#x2018;<italic>Stop stealing people&#x2019;s work,</italic>&#x2019; &#x2018;<italic>So which animators art did you steal for this?</italic>&#x2019;</p>
<p>People were also wondering who would own the copyright of AI-generated content. Some thought future copyright law would favour AI developers over AI users, &#x2018;<italic>In time, law will benefit AI program developers. They will get the copyright, not the &#x201C;ai artists&#x201D;. Mark my words&#x2019;.</italic> Others did not mind the fact that they may not be able to copyright AI-generated content since the sunk costs were low with minimum human efforts in the generation process, &#x2018;<italic>Does it matter when you can generate new content again and again without the huge sunk costs that the traditional content industries have?</italic>&#x2019;</p>
</sec>
</sec>
<sec id="sec4_4">
<title>Potential solutions</title>
<p>People favoured regulation and AI literacy education to hold Sora accountable and limit its negative impact.</p>
<sec id="sec4_4_1">
<title>Law-enforced labelling of AI content</title>
<p>Law-enforced labelling of AI-generated content and tools for identifying AI-generated content were deemed important regulatory approaches,
<disp-quote>
<p><italic>laws are needed immediately. Any AI-modified video or image should be required to include a disclaimer symbol, mark, or tag of some kind that makes it obvious&#x2014;even in films, artwork, and written content. Extreme penalties are needed for any who break these rules. Everything released should have a clear source or otherwise be unshareable. If we aren&#x2019;t extremely careful, we soon won&#x2019;t be able to believe our own eyes.</italic></p>
</disp-quote></p>
<p>Labelling of AI-generated content can also facilitate copyright protection.</p>
<p>However, many people found legal requirements to label AI-generated content infeasible, citing adversarial manipulation by users, &#x2018;<italic>People would just remove the watermarks or crop the videos</italic>&#x2019;; politicians&#x2019; poor tech literacy, &#x2018;<italic>It doesn&#x2019;t help that a huge percentage of our most powerful politicians were too befuddled by new technologies to figure out how to program their VCRs 20 years ago</italic>&#x2019;; the limited enforceability of laws, &#x2018;<italic>Agreed, but like most regulations requiring watermarks/notations/etc. I worry that, while it may discourage the majority of problems, laws won&#x2019;t be enough to completely combat this type of conduct</italic>&#x2019;; and the slow pace of legislation,
<disp-quote>
<p><italic>even if they do care, laws always take ages to create and implement in any capacity on purpose. Unfortunately, it&#x2019;s far too slow for how fast things are moving in the world of tech and stuff.</italic></p>
</disp-quote></p>
<p>A few people thought guaranteeing safety through strict law meant &#x2018;<italic>taking out all the fun</italic>&#x2019; and &#x2018;<italic>restricting freedom of speech&#x2019;.</italic> They believed such guardrails might render Sora unusable and that only illegal content should be limited,
<disp-quote>
<p><italic>many of the limitations you have put in are making the product unusable for the majority of users. The only content that should be limited is illegal content as what is &#x201C;misinformation&#x201D;, &#x201C;hateful content&#x201D;, and &#x201C;bias&#x201D; is different based on human interpretation, time, and other factors.</italic></p>
</disp-quote></p>
</sec>
<sec id="sec4_4_2">
<title>Educating the public</title>
<p>In addition to legal enforcement, some argued that education was a more sustainable approach to mitigating the negative impacts of emerging technologies, &#x2018;<italic>No. We don&#x2019;t need more laws. We need citizens who can think and reason. The problem of &#x201C;fake news&#x201D; (or spin) has been with us for centuries&#x2019;.</italic> Education is important partly because AI technologies will be used to nudge people toward harmful behaviour, &#x2018;<italic>We need to educate the public on AI and make it a priority because this technology is going to be used to manipulate and socially engineer people into doing very bad things.</italic>&#x2019; Source verification was one important topic for education, &#x2018;<italic>Needs to be verified by reliable source (?) or a live happening with large audience present&#x2019;.</italic> Moral education could happen organically, as with prior technological revolutions, &#x2018;<italic>Which has almost always been true with science and technology - the science always comes way before the ethics and, sometimes, morality can catch up&#x2019;.</italic></p>
</sec>
</sec>
</sec>
<sec id="sec5">
<title>Discussion</title>
<p><xref rid="R24" ref-type="bibr">Mogavi et al. (2024)</xref> utilized social media (Reddit) data and qualitative analysis to understand people&#x2019;s concerns about the integration of generative AI. People&#x2019;s concerns included threats to creative jobs, bias, harm to art, and unpredictability in creative workflows. In our study, people expressed a unique set of concerns, including (1) OpenAI&#x2019;s for-profit nature, which made their claimed safety measures less credible, (2) the blurred boundary between real and fake content, which had practical implications such as falsified evidence in court, pornography, and political propaganda during elections, (3) replacing human workers and small businesses, (4) copyright of input and output data, and (5) power consumption and environmental impact. We further gauged people&#x2019;s perceptions of potential regulation and solutions to these challenges, namely, law-enforced labelling of AI-generated content and educating the public. Below, we delineate the hype around Sora and propose governance recommendations.</p>
<sec id="sec5_1">
<title>Toward gauging tech hype around Sora</title>
<p>Before a product&#x2019;s release, widespread use, and early adoption, its mention in both media and technical texts helps it attract attention and investment. Such attention is a double-edged sword, as it also builds expectations beyond what can possibly be delivered, damaging credibility and reputation (<xref rid="R3" ref-type="bibr">Brown, 2003</xref>). The word &#x2018;hype&#x2019; has been used to describe this cycle, most famously in the tech hype cycle methodology developed in 1995 by Gartner Inc., a technology research and consulting firm; Gartner uses the hype cycle in its consulting reports to trace the evolution of hype around emerging technologies. Gartner&#x2019;s hype cycle has five stages: Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity. At the Innovation Trigger stage, the technology is a proof of concept; in the second stage, expectations peak despite very few practical implementations. The third, trough stage brings a sharp decline in interest and many business failures. Only after this does the technology pick up again more maturely and finally reach mainstream adoption in the final stage (<xref rid="R31" ref-type="bibr">Steinert &#x0026; Leifer, 2010</xref>). We can infer that early expectations inflated beyond what can reasonably be delivered are the norm in the evolution of new technologies. The time needed for the full hype cycle may also vary &#x2013; while &#x2018;<italic>normal technologies might take five to eight years, fast-track technologies might only take two to four years for maturity</italic>&#x2019; (<xref rid="R9" ref-type="bibr">Dedehayir &#x0026; Steinert, 2016</xref>). 
The bell-shaped hype curve arises from human irrationality, mainly attraction to novelty and a love of sharing, social contagion, and heuristic decision-making (<xref rid="R10" ref-type="bibr">Fenn &#x0026; Raskino, 2008</xref>). There may also be multiple peaks and troughs of visibility and expectations rather than a single one, and the trajectory may not conform to a hype cycle at all (<xref rid="R9" ref-type="bibr">Dedehayir &#x0026; Steinert, 2016</xref>; <xref rid="R31" ref-type="bibr">Steinert &#x0026; Leifer, 2010</xref>).</p>
<p>Our work on the public discourse around Sora is a suitable representation of tech hype at the inception stage. The actual product has not been publicly released; only a teaser has been provided by OpenAI. People were nevertheless excited about this proof-of-concept technology, as evidenced by heated discussions of Sora-related topics on social media. They drew on past experiences with AI to form perceptions and expectations about Sora. We find a directionality to the expectations around Sora: most comments took a clear positive or negative position, and expectations and perceptions were either logical/fact-based or emotional, resonating with prior research on tech hype expectations (<xref rid="R30" ref-type="bibr">Shi &#x0026; Herniman, 2023</xref>).</p>
<p>Our findings indicate sizable conflicts between people on the same issues and features. A fundamental one is that people made conflicting assessments of Sora&#x2019;s creativity and video-generation quality: while some thought Sora was a revolutionary technique for capturing physical characteristics, others identified imperfections in its physical simulation and noticed similarities between Sora-generated videos and Midjourney-generated pictures. These conflicts were also present in people&#x2019;s views on the desirability of the technology, its impact on employment, AI law enforcement, its impact on content democratization, the extent of AI regulation, and trust in OpenAI.</p>
<p>Sora could also be viewed as a continuation of previous successful generative AI models. An interesting parallel can be found with virtual influencers, artificially generated entities acting as social media influencers, often built on hyper-realistic AI-generated imagery but still human-curated: many of the thoughts evoked in this study, including the blurring of lines between reality and fiction, democratization through AI, and competition between artificially generated and human-generated content, resonate with quotes from long-term followers of virtual influencers (<xref rid="R7" ref-type="bibr">Choudhry et al., 2022</xref>). This is indicative of something more meta: public sentiment, perception, and expectations of an AI technology may be influenced less by the specifics of that particular product and more by the general zeitgeist of AI. The conversations about Sora were less about the features of Sora itself than about its potential impact on society, jobs, and freedom, implying that such fundamental debates on AI have existed in the public sphere (<xref rid="R5" ref-type="bibr">Calhoun, 1993</xref>) for some time. This suggests that public opinion could be harnessed in creating fair legislation, as envisaged previously (<xref rid="R16" ref-type="bibr">Leane, 2010</xref>), and resonates with a more flexible legislative approach (<xref rid="R35" ref-type="bibr">Wiener, 2004</xref>).</p>
</sec>
<sec id="sec5_2">
<title>Governance recommendations</title>
<sec id="sec5_2_1">
<title>AI watermarking</title>
<p>Building on results regarding legal enforcement of labelling requirements, it is necessary to clarify the mandate as much as to articulate procedures, rights, and responsibilities regarding enforcement mechanisms. One major governance recommendation from this work is labelling AI-generated content; AI watermarking is one labelling approach that verifies authenticity and helps prevent misinformation or mislabelling (<xref rid="R11" ref-type="bibr">Fu et al., 2024</xref>; <xref rid="R28" ref-type="bibr">Regazzoni et al., 2021</xref>). It is also critical to protecting intellectual property (<xref rid="R28" ref-type="bibr">Regazzoni et al., 2021</xref>). Such a labelling approach places the burden of compliance on the organizations and researchers that produce generative AI models, services, and products rather than on end-users with varying degrees of technical literacy. This centralized compliance mechanism facilitates enforcement, disambiguates provenance, and provides clarity to users of AI systems and downstream information seekers. It also makes copyright or fair-use restrictions transparent to everyone who encounters the material.</p>
</sec>
<sec id="sec5_2_2">
<title>Reapplying existing laws</title>
<p>Results from this project also suggest leveraging existing law in new contexts, such as applying anti-discrimination law to issues of bias, harassment, and equity, along with obscenity and child protection laws to protect minors with respect to pornography. Creative application of broadly and flexibly written laws, as they pertain to similar harms or governance challenges irrespective of technical system, jurisdiction, or regulatory domain, can help address the emergent harms identified in this research, as well as those around emerging technologies more broadly. Such efforts have already been useful in protecting victims of revenge porn and deepfakes (<xref rid="R18" ref-type="bibr">Levendowski, 2013</xref>; <xref rid="R19" ref-type="bibr">Levendowski, 2023</xref>). As uses of generative AI expand, the scope and context of challenges increase, but many of the fundamental rights challenged in these scenarios are already protected, and existing precedents should be applied and extended.</p>
</sec>
<sec id="sec5_2_3">
<title>Accessibility</title>
<p>The application of accessibility standards to AI systems is also emphasized by the results of this study. Concerns that guardrails might infringe upon speech or creativity, or render systems unusable, can be assuaged by reconciling them with accessibility interests, given decades of research demonstrating the positive impact of accessibility standards and testing on usability for all (<xref rid="R26" ref-type="bibr">Petrie &#x0026; Kheir, 2007</xref>). In this sense, by making generative AI more accessible, a variety of concerns and the needs of many populations can be addressed via compliance with existing or extended standards. Concerns about labour replacement and under- or over-representation in data and in the use of these systems can also be addressed through expanded user populations and the user innovation that inevitably accompanies accessibility (<xref rid="R27" ref-type="bibr">Raasch et al., 2008</xref>).</p>
</sec>
<sec id="sec5_2_4">
<title>An entrepreneurial approach to regulation</title>
<p>Literature on environmental technologies and their regulation indicates that regulation may reduce innovation, but it has a positive effect on public health (<xref rid="R36" ref-type="bibr">Zhou et al., 2021</xref>) and encourages socially positive innovation, as with privacy (<xref rid="R17" ref-type="bibr">Lev-Aretz &#x0026; Strandburg, 2020</xref>). Trade-offs between public benefit and innovation resulting from regulation can also be seen in the regulation perceptions of two largely non-intersecting interdisciplinary silos: law and technology, and law and economics (<xref rid="R4" ref-type="bibr">Butenko &#x0026; Larouche, 2015</xref>). Law and economics assumes that innovation is good for welfare, while law and technology is more critical and views innovation as largely exogenous to the regulatory process. While the law and technology literature wishes that law intervene early in emerging innovations, law and economics posits the very opposite; it is thus desirable to balance these opposing perspectives (<xref rid="R4" ref-type="bibr">Butenko &#x0026; Larouche, 2015</xref>). Technology regulation is a highly complex undertaking, and it requires an entrepreneurial approach with flexibility and empirical testing: constituting &#x2018;regulatory property&#x2019; to protect against market failures while guarding against overprotection, fostering an ecology that allows diffusion of regulatory innovations, adapting the prevailing theory of regulatory politics, engaging global economies, and rewarding policy innovation (<xref rid="R35" ref-type="bibr">Wiener, 2004</xref>). This entrepreneurial and nimble approach to regulation can resolve the trade-off between &#x2018;unethical AI&#x2019; and &#x2018;unusable AI&#x2019; in the regulation of generative AI models.</p>
</sec>
</sec>
</sec>
<sec id="sec6">
<title>Conclusion</title>
<p>Through thematic analysis of social media discussions about Sora, one generative AI tool, we gauged people&#x2019;s reactions to and concerns about Sora and regulatory solutions. We synthesized governance recommendations from public perceptions, including using AI watermarking to verify authenticity and protect intellectual property, leveraging existing laws to combat pornography and bias, and addressing the tension between ethics and usability of AI systems.</p>
<p>There are several limitations of this research. First, we leveraged qualitative social media analysis to probe perceptions of Sora&#x2019;s societal impact and ethical concerns. Potential biases exist in the sample selection process, and the chosen posts and stakeholders may influence the governance suggestions and interpretations of public concerns. We do not aim to provide a generalizable account of the complicated GenAI landscape. Future research could consider a larger-scale quantitative analysis to capture the evolving sentiment and perceptions around AI hype, and people&#x2019;s contrasting views as evidenced by our data. Second, bias may exist in our collected data due to groupthink and narrative-setting effects of early comments. Future research could conduct individual interviews to mitigate such bias.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Adetayo</surname><given-names>A. J.</given-names></name><name><surname>Enamudu</surname><given-names>A. I.</given-names></name><name><surname>Lawal</surname><given-names>F. M.</given-names></name><name><surname>Odunewu</surname><given-names>A. O.</given-names></name></person-group><year>2024</year><article-title>From text to video with AI: the rise and potential of Sora in education and libraries</article-title><source>Library Hi Tech News</source></element-citation></ref>
<ref id="R2"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Braun</surname><given-names>V.</given-names></name><name><surname>Clarke</surname><given-names>V.</given-names></name></person-group><year>2012</year><source>Thematic analysis</source><publisher-name>American Psychological Association</publisher-name></element-citation></ref>
<ref id="R3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname><given-names>N.</given-names></name></person-group><year>2003</year><article-title>Hope against hype-accountability in biopasts, presents and futures</article-title><source>Science &#x0026; Technology Studies</source><volume>16</volume><issue>2</issue><fpage>3</fpage><lpage>21</lpage></element-citation></ref>
<ref id="R4"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Butenko</surname><given-names>A.</given-names></name><name><surname>Larouche</surname><given-names>P.</given-names></name></person-group><year>2015</year><article-title>Regulation for innovativeness or regulation of innovation?</article-title><source>Law, Innovation and Technology</source><volume>7</volume><issue>1</issue><fpage>52</fpage><lpage>82</lpage></element-citation></ref>
<ref id="R5"><element-citation publication-type="book"><person-group person-group-type="editor"><name><surname>Calhoun</surname><given-names>C.</given-names></name></person-group><year>1993</year><source>Habermas and the public sphere</source><publisher-name>MIT press</publisher-name></element-citation></ref>
<ref id="R6"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cavalcante</surname><given-names>D. C.</given-names></name></person-group><year>2024</year><article-title>Sora: The Dawn of Digital Dreams</article-title></element-citation></ref>
<ref id="R7"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Choudhry</surname><given-names>A.</given-names></name><name><surname>Han</surname><given-names>J.</given-names></name><name><surname>Xu</surname><given-names>X.</given-names></name><name><surname>Huang</surname><given-names>Y.</given-names></name></person-group><year>2022</year><article-title>&#x201C;I Felt a Little Crazy Following a &#x2018;Doll&#x2019;&#x201D;: Investigating Real Influence of Virtual Influencers on Their Followers</article-title><source>Proceedings of the ACM on human&#x2013;computer interaction</source><volume>6</volume><comment>GROUP</comment><fpage>1</fpage><lpage>28</lpage></element-citation></ref>
<ref id="R8"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cowen</surname><given-names>T.</given-names></name></person-group><year>2024</year><article-title>What will the main commercial uses be for Sora?</article-title></element-citation></ref>
<ref id="R9"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dedehayir</surname><given-names>O.</given-names></name><name><surname>Steinert</surname><given-names>M.</given-names></name></person-group><year>2016</year><article-title>The hype cycle model: A review and future directions</article-title><source>Technological Forecasting and Social Change</source><volume>108</volume><fpage>28</fpage><lpage>41</lpage></element-citation></ref>
<ref id="R10"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Fenn</surname><given-names>J.</given-names></name><name><surname>Raskino</surname><given-names>M.</given-names></name></person-group><year>2008</year><source>Mastering the hype cycle: how to choose the right innovation at the right time</source><publisher-name>Harvard Business Press</publisher-name></element-citation></ref>
<ref id="R11"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname><given-names>Y.</given-names></name><name><surname>Xiong</surname><given-names>D.</given-names></name><name><surname>Dong</surname><given-names>Y.</given-names></name></person-group><year>2024</year><comment>March</comment><article-title>Watermarking conditional text generation for ai detection: Unveiling challenges and a semantic-aware watermark remedy</article-title><source>Proceedings of the AAAI Conference on Artificial Intelligence</source><volume>38</volume><issue>16</issue><fpage>18003</fpage><lpage>18011</lpage></element-citation></ref>
<ref id="R12"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Guan</surname><given-names>Y.</given-names></name><name><surname>Liao</surname><given-names>H.</given-names></name><name><surname>Li</surname><given-names>Z.</given-names></name><name><surname>Hu</surname><given-names>J.</given-names></name><name><surname>Yuan</surname><given-names>R.</given-names></name><name><surname>Li</surname><given-names>Y.</given-names></name><name><surname>Xu</surname><given-names>C.</given-names></name></person-group><year>2024</year><article-title>World models for autonomous driving: An initial survey</article-title><source>IEEE Transactions on Intelligent Vehicles</source></element-citation></ref>
<ref id="R13"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hannon</surname><given-names>B.</given-names></name><name><surname>Kumar</surname><given-names>Y.</given-names></name><name><surname>Gayle</surname><given-names>D.</given-names></name><name><surname>Li</surname><given-names>J. J.</given-names></name><name><surname>Morreale</surname><given-names>P.</given-names></name></person-group><year>2024</year><article-title>Robust Testing of AI Language Model Resiliency with Novel Adversarial Prompts</article-title><source>Electronics</source><volume>13</volume><issue>5</issue><fpage>842</fpage></element-citation></ref>
<ref id="R14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>In</surname><given-names>A. I.</given-names></name></person-group><year>2024</year><article-title>SORA: Unbelieve New Text to Video AI Model By OpenAI-37 Demo Videos-Still Can&#x2019;t Believe Real</article-title><source>Sign</source><volume>3</volume><issue>01</issue></element-citation></ref>
<ref id="R15"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Karaarslan</surname><given-names>E.</given-names></name><name><surname>Ayd&#x0131;n</surname><given-names>&#x00D6;.</given-names></name></person-group><year>2024</year><article-title>Generate Impressive Videos with Text Instructions: A Review of OpenAI Sora, Stable Diffusion, Lumiere and Comparable Models</article-title><source>Stable Diffusion, Lumiere and Comparable Models (February 19, 2024)</source></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Leane</surname><given-names>G. W.</given-names></name></person-group><year>2010</year><article-title>Deliberative Democracy and the Internet: New Possibilities for Legitimising Law Through Public Discourse?</article-title><source>Canadian Journal of Law &#x0026; Jurisprudence</source><volume>23</volume><issue>2</issue><fpage>373</fpage><lpage>401</lpage></element-citation></ref>
<ref id="R17"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lev-Aretz</surname><given-names>Y.</given-names></name><name><surname>Strandburg</surname><given-names>K. J.</given-names></name></person-group><year>2020</year><article-title>Privacy regulation and innovation policy</article-title><source>Yale JL &#x0026; Tech.</source><volume>22</volume><fpage>256</fpage></element-citation></ref>
<ref id="R18"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Levendowski</surname><given-names>A.</given-names></name></person-group><year>2013</year><article-title>Using copyright to combat revenge porn</article-title><source>NYU J. Intell. Prop. &#x0026; Ent. L.</source><volume>3</volume><fpage>422</fpage></element-citation></ref>
<ref id="R19"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Levendowski</surname><given-names>A.</given-names></name></person-group><year>2023</year><article-title>Defragging Feminist Cyberlaw</article-title><source>Berkeley Tech. LJ</source><volume>38</volume><fpage>797</fpage></element-citation></ref>
<ref id="R20"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Li</surname><given-names>X.</given-names></name><name><surname>Miao</surname><given-names>Q.</given-names></name><name><surname>Li</surname><given-names>L.</given-names></name><name><surname>Hou</surname><given-names>Y.</given-names></name><name><surname>Ni</surname><given-names>Q.</given-names></name><name><surname>Fan</surname><given-names>L.</given-names></name><name><surname>Wang</surname><given-names>F. Y.</given-names></name></person-group><year>2024</year><article-title>Sora for scenarios engineering of intelligent vehicles: V&#x0026;V, C&#x0026;C, and beyonds</article-title><source>IEEE Transactions on Intelligent Vehicles</source></element-citation></ref>
<ref id="R21"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Li</surname><given-names>X.</given-names></name><name><surname>Zhou</surname><given-names>D.</given-names></name><name><surname>Zhang</surname><given-names>C.</given-names></name><name><surname>Wei</surname><given-names>S.</given-names></name><name><surname>Hou</surname><given-names>Q.</given-names></name><name><surname>Cheng</surname><given-names>M. M.</given-names></name></person-group><year>2024</year><article-title>Sora generates videos with stunning geometrical consistency</article-title><source>arXiv preprint arXiv:2402.17403</source></element-citation></ref>
<ref id="R22"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Lin</surname><given-names>B.</given-names></name></person-group><year>2024</year><article-title>The Machine Can&#x2019;t Replace the Human Heart</article-title><source>arXiv preprint arXiv:2402.18826</source></element-citation></ref>
<ref id="R23"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>Y.</given-names></name><name><surname>Zhang</surname><given-names>K.</given-names></name><name><surname>Li</surname><given-names>Y.</given-names></name><name><surname>Yan</surname><given-names>Z.</given-names></name><name><surname>Gao</surname><given-names>C.</given-names></name><name><surname>Chen</surname><given-names>R.</given-names></name><name><surname>Sun</surname><given-names>L.</given-names></name></person-group><year>2024</year><article-title>Sora: A review on background, technology, limitations, and opportunities of large vision models</article-title><source>arXiv preprint arXiv:2402.17177</source></element-citation></ref>
<ref id="R24"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Mogavi</surname><given-names>R.H.</given-names></name><name><surname>Wang</surname><given-names>D.</given-names></name><name><surname>Tu</surname><given-names>J.</given-names></name><name><surname>Hadan</surname><given-names>H.</given-names></name><name><surname>Sgandurra</surname><given-names>S.A.</given-names></name><name><surname>Hui</surname><given-names>P.</given-names></name><name><surname>Nacke</surname><given-names>L.E.</given-names></name></person-group><year>2024</year><article-title>Sora OpenAI&#x2019;s prelude: social media perspectives on Sora OpenAI and the future of AI video generation</article-title><source>arXiv preprint arXiv:2403.14665</source></element-citation></ref>
<ref id="R25"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Muller</surname><given-names>M.</given-names></name><name><surname>Chilton</surname><given-names>L. B.</given-names></name><name><surname>Kantosalo</surname><given-names>A.</given-names></name><name><surname>Martin</surname><given-names>C. P.</given-names></name><name><surname>Walsh</surname><given-names>G.</given-names></name></person-group><year>2022</year><comment>April</comment><article-title>GenAICHI: generative AI and HCI</article-title><source>CHI conference on human factors in computing systems extended abstracts</source><fpage>1</fpage><lpage>7</lpage></element-citation></ref>
<ref id="R26"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Petrie</surname><given-names>H.</given-names></name><name><surname>Kheir</surname><given-names>O.</given-names></name></person-group><year>2007</year><comment>April</comment><article-title>The relationship between accessibility and usability of websites</article-title><source>Proceedings of the SIGCHI conference on Human factors in computing systems</source><fpage>397</fpage><lpage>406</lpage></element-citation></ref>
<ref id="R27"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Raasch</surname><given-names>C.</given-names></name><name><surname>Herstatt</surname><given-names>C.</given-names></name><name><surname>Lock</surname><given-names>P.</given-names></name></person-group><year>2008</year><article-title>The dynamics of user innovation: Drivers and impediments of innovation activities</article-title><source>International Journal of Innovation Management</source><volume>12</volume><issue>03</issue><fpage>377</fpage><lpage>398</lpage></element-citation></ref>
<ref id="R28"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Regazzoni</surname><given-names>F.</given-names></name><name><surname>Palmieri</surname><given-names>P.</given-names></name><name><surname>Smailbegovic</surname><given-names>F.</given-names></name><name><surname>Cammarota</surname><given-names>R.</given-names></name><name><surname>Polian</surname><given-names>I.</given-names></name></person-group><year>2021</year><article-title>Protecting artificial intelligence IPs: a survey of watermarking and fingerprinting for machine learning</article-title><source>CAAI Transactions on Intelligence Technology</source><volume>6</volume><issue>2</issue><fpage>180</fpage><lpage>191</lpage></element-citation></ref>
<ref id="R29"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Rodr&#x00ED;guez</surname><given-names>E. Q.</given-names></name></person-group><year>2024</year><article-title>Toward Solipsism: The Emergence of Sora and Other Video Generation AIs in Audiovisual Creation</article-title></element-citation></ref>
<ref id="R30"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname><given-names>Y.</given-names></name><name><surname>Herniman</surname><given-names>J.</given-names></name></person-group><year>2023</year><article-title>The role of expectation in innovation evolution: Exploring hype cycles</article-title><source>Technovation</source><volume>119</volume><fpage>102459</fpage></element-citation></ref>
<ref id="R31"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Steinert</surname><given-names>M.</given-names></name><name><surname>Leifer</surname><given-names>L.</given-names></name></person-group><year>2010</year><comment>July</comment><chapter-title>Scrutinizing Gartner&#x2019;s hype cycle approach</chapter-title><source>Picmet 2010 technology management for global economic growth</source><fpage>1</fpage><lpage>13</lpage><publisher-name>IEEE</publisher-name></element-citation></ref>
<ref id="R32"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Sun</surname><given-names>R.</given-names></name><name><surname>Zhang</surname><given-names>Y.</given-names></name><name><surname>Shah</surname><given-names>T.</given-names></name><name><surname>Sun</surname><given-names>J.</given-names></name><name><surname>Zhang</surname><given-names>S.</given-names></name><name><surname>Li</surname><given-names>W.</given-names></name><name><surname>Ranjan</surname><given-names>R.</given-names></name></person-group><year>2024</year><article-title>From Sora What We Can See: A Survey of Text-to-Video Generation</article-title><source>arXiv preprint arXiv:2405.10674</source></element-citation></ref>
<ref id="R33"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Waisberg</surname><given-names>E.</given-names></name><name><surname>Ong</surname><given-names>J.</given-names></name><name><surname>Masalkhi</surname><given-names>M.</given-names></name><name><surname>Lee</surname><given-names>A. G.</given-names></name></person-group><year>2024</year><article-title>OpenAI&#x2019;s Sora in medicine: revolutionary advances in generative artificial intelligence for healthcare</article-title><source>Irish Journal of Medical Science</source></element-citation></ref>
<ref id="R34"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>F. Y.</given-names></name><name><surname>Miao</surname><given-names>Q.</given-names></name><name><surname>Li</surname><given-names>L.</given-names></name><name><surname>Ni</surname><given-names>Q.</given-names></name><name><surname>Li</surname><given-names>X.</given-names></name><name><surname>Li</surname><given-names>J.</given-names></name><name><surname>Han</surname><given-names>Q. L.</given-names></name></person-group><year>2024</year><article-title>When does Sora show: The beginning of TAO to imaginative intelligence and scenarios engineering</article-title><source>IEEE/CAA Journal of Automatica Sinica</source><volume>11</volume><issue>4</issue><fpage>809</fpage><lpage>815</lpage></element-citation></ref>
<ref id="R35"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wiener</surname><given-names>J. B.</given-names></name></person-group><year>2004</year><article-title>The regulation of technology, and the technology of regulation</article-title><source>Technology in Society</source><volume>26</volume><issue>2-3</issue><fpage>483</fpage><lpage>500</lpage></element-citation></ref>
<ref id="R36"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname><given-names>G.</given-names></name><name><surname>Liu</surname><given-names>W.</given-names></name><name><surname>Wang</surname><given-names>T.</given-names></name><name><surname>Luo</surname><given-names>W.</given-names></name><name><surname>Zhang</surname><given-names>L.</given-names></name></person-group><year>2021</year><article-title>Be regulated before be innovative? How environmental regulation makes enterprises technological innovation do better for public health</article-title><source>Journal of Cleaner Production</source><volume>303</volume><fpage>126965</fpage></element-citation></ref>
</ref-list>
</back>
</article>