Abstract
User-generated content dominates the World Wide Web, and advances in artificial intelligence have made its mass creation widely accessible. Major technology companies, often called Big Tech, know how to moderate it; startups and traditional firms, however, do not and need support. The aim of this study was to identify the risks that fake user-generated content poses to companies whose users contribute such content, and to identify protective measures that mitigate these risks. Two main research questions were selected for this purpose: “What risks arise from textual, as well as feedback-based, fake user-generated content for affected companies and their user base?” and “What countermeasures can be applied to mitigate these risks?” A mixed-methods approach was used to answer them. Using design science research, a set of risks and corresponding mitigation measures was developed; the resulting artifact is the Fake Information Risk Mitigation (FIRM) catalog. The catalog was built on a systematic literature review spanning multiple domains that, in addition to scientific sources, included Big Tech guidelines and strategies for fighting fake user-generated content. Using qualitative content analysis, the risks and measures mentioned in the literature were identified, grouped in a taxonomy-building process, and linked to each other. Finally, the resulting groups were analyzed quantitatively. Each measure was classified as a prevention, detection, or recovery measure. Mentions of the algorithms, techniques, and artificial intelligence models most used for detection were also collected, but evaluated separately from the measures. Disinformation, misinformation, fake news, and fake reviews were prominent in the selected literature. A total of 74 risks and 111 measures were identified.
The four most mentioned risk groupings were “Societal Effects & Issues”, “Human Limitations, Vulnerabilities & Bias”, “Characteristics of User Generated Content Platforms”, and “Algorithmic Quality Issues”. The four most mentioned measure groupings were “Algorithmic and Artificial Intelligence Supported Measures”, “Strategic Measures”, “Improve Platform Characteristics”, and “Supporting Human Fake Detection”. The most mentioned algorithms, techniques, and model names were long short-term memory, recurrent neural networks, random forest, and support vector machines. A potential overreliance of researchers and companies on detection measures, and a consequent underutilization of preventive measures, was discovered. Algorithmic detection measures were heavily represented while educational measures were not, and this study argues for an increased focus on the latter. Finally, the large number and variety of discovered risks and measures support an interdisciplinary approach to fake user-generated content, as previous studies found far fewer risks and measures. This complexity also highlights that companies and governments will not solve the problem of fake user-generated content individually, but only in cooperation with each other and with other stakeholders.
| Original language | English |
|---|---|
| Qualification | Master/Diploma |
| Supervision / Assessment | |
| Publication status | Published - 2025 |
Fields of Science
- 502050 Business Informatics
- 509004 Evaluation Research
- 502007 E-Commerce
- 301401 Brain Research
- 503008 E-Learning
- 502058 Digital Transformation
- 509026 Digitalisation Research
- 303026 Public Health
- 102 Computer Science
- 502032 Quality Management
- 501016 Educational Psychology
- 602036 Neurolinguistics
- 502030 Project Management
- 502014 Innovation Research
- 102006 Computer Supported Cooperative Work (CSCW)
- 502044 Business Management
- 502043 Business Consultancy
- 102016 IT Security
- 301407 Neurophysiology
- 102015 Information Systems
- 501030 Cognitive Science
- 305909 Stress Research
JKU Focus Areas
- Digital Transformation