Abstract
User-generated content dominates the World Wide Web, and advances in artificial intelligence have made its mass creation widely accessible. Major technology companies, often called Big Tech, possess the knowledge needed to moderate it; startups and traditional firms, however, do not and need support. The aim of this study was to identify the risks that fake user-generated content poses to companies that allow their users to contribute content, and, furthermore, to identify protective measures that mitigate these risks. Two main research questions were selected for this purpose: “What risks arise from textual, as well as feedback-based, fake user-generated content for affected companies and their user base?” and “What countermeasures can be applied to mitigate these risks?” A mixed-methods approach was used to answer these questions. Using design science research, a set of risks and corresponding mitigation measures was developed, the resulting artifact being the Fake Information Risk Mitigation (FIRM) catalog. The catalog was built through a systematic literature review spanning multiple domains, which included, in addition to scientific sources, Big Tech guidelines and strategies for fighting fake user-generated content. Using qualitative content analysis, the mentioned risks and measures were identified, grouped in a taxonomy-building process, and linked together; finally, they were quantitatively analyzed as groups. Each measure was classified as a prevention, detection, or recovery measure. Mentions of the algorithms, techniques, and artificial intelligence models most used for detection were also collected, but evaluated separately from the measures. Disinformation, misinformation, fake news, and fake reviews were prominent in the selected literature. A total of 74 risks and 111 measures were identified. The four most mentioned risk groupings were “Societal Effects & Issues”, “Human Limitations, Vulnerabilities & Bias”, “Characteristics of User Generated Content Platforms”, and “Algorithmic Quality Issues”. Furthermore, the four most mentioned measure groupings were “Algorithmic and Artificial Intelligence Supported Measures”, “Strategic Measures”, “Improve Platform Characteristics”, and “Supporting Human Fake Detection”. The most mentioned algorithms, techniques, and model names were long short-term memory, recurrent neural networks, random forest, and support vector machines. A potential overreliance of researchers and companies on detection measures, and a consequent underutilization of preventative measures, was discovered. A high reliance on algorithmic detection measures became apparent, while educational measures were underrepresented; this study argued for an increased focus on the latter. Finally, the large number and variety of discovered risks and measures supported an interdisciplinary approach to fake user-generated content, as previous studies identified far fewer risks and measures. This complexity also highlighted that companies and governments will not solve the problem of fake user-generated content individually, but only in cooperation with each other and with other stakeholders.
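To make the algorithmic detection measures named above concrete, the following is a minimal, hypothetical sketch (not the FIRM artifact or the study's own code) of one such technique: a support vector machine classifying reviews as fake or genuine over TF-IDF text features, using scikit-learn. The example reviews and labels are invented purely for illustration.

```python
# Illustrative sketch only: fake-review detection with one of the
# most-mentioned techniques from the reviewed literature (an SVM).
# The toy data below is invented and not taken from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled examples: 1 = fake, 0 = genuine.
reviews = [
    "Best product ever!!! Buy now, amazing, perfect, five stars!!!",
    "Arrived on time; the zipper broke after two weeks of daily use.",
    "Absolutely flawless, life changing, everyone must own this!!!",
    "Decent value for the price, though the manual is hard to follow.",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each review into a sparse word-weight vector;
# the linear SVM then learns a separating hyperplane over those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)

print(model.predict(["Incredible!!! Perfect!!! Best purchase of my life!!!"]))
```

In practice, such a detector would be trained on a large labeled corpus and would constitute only one detection measure within a broader catalog that also includes preventative and recovery measures.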
| Original language | English |
|---|---|
| Qualification | Master |
| Supervisors/Reviewers | |
| Publication status | Published - 2025 |
Fields of science
- 502050 Business informatics
- 509004 Evaluation research
- 502007 E-commerce
- 301401 Brain research
- 503008 E-learning
- 502058 Digital transformation
- 509026 Digitalisation research
- 303026 Public health
- 102 Computer Sciences
- 502032 Quality management
- 501016 Educational psychology
- 602036 Neurolinguistics
- 502030 Project management
- 502014 Innovation research
- 102006 Computer supported cooperative work (CSCW)
- 502044 Business management
- 502043 Business consultancy
- 102016 IT security
- 301407 Neurophysiology
- 102015 Information systems
- 501030 Cognitive science
- 305909 Stress research
JKU Focus areas
- Digital Transformation