MODEL OF COMMUNICATIVE IMPACT IN DISINFORMATION MESSAGES BASED ON SPEECH ACT THEORY AND ARTIFICIAL INTELLIGENCE TOOLS
Abstract
The present article explores the communicative impact of disinformation messages by combining the theoretical framework of speech act theory with the analytical capabilities of artificial intelligence (AI). The study focuses on the structural and pragmatic organisation of disinformation, viewed as a communicative act operating simultaneously at the locutionary, illocutionary, and perlocutionary levels. Emotionally charged and evaluative lexical elements are of particular interest, since they function as instruments of psychological influence, shaping the perceptions, emotions, and behaviour of audiences. The research aims to develop a methodological approach for identifying and classifying manipulative intentions within disinformation messages by integrating speech act analysis with natural language processing (NLP) techniques and large language models (LLMs), in order to uncover not only factual distortions but also concealed rhetorical strategies, emotional framing, and subtle linguistic manipulation. The methodology rests on a multi-level analytical model that interprets disinformation as a structured communicative act. The framework comprises several stages: compiling a corpus of authentic disinformation texts from social media and propaganda sources; preprocessing linguistic data through tokenisation, lemmatisation, POS-tagging, and syntactic parsing; conducting locutionary analysis of propositional structures and semantic networks; performing illocutionary classification of speech acts (assertives, directives, commissives, expressives) using supervised machine learning; and carrying out perlocutionary analysis to detect sentiment, classify emotions, and identify expressive linguistic devices such as metaphor, hyperbole, epithet, and anaphora. The model is validated with AI tools (BERT, RoBERTa, and GPT-based systems) combined with human-in-the-loop verification against fact-checking datasets. The findings show that disinformation relies on a complex interplay of linguistic mechanisms intended to create persuasive and manipulative content, and that integrating speech act theory with AI-based linguistic analysis is effective in detecting emotional tone, communicative intent, and manipulative structures across large volumes of data. In conclusion, the study demonstrates that combating manipulative communication requires more than fact-checking alone: it must also address the pragmatic and emotional dimensions of disinformation. The proposed model offers a scalable, systematic approach to detection, thereby enhancing cognitive resilience and information security in today's digital landscape.
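The preprocessing stage of the pipeline can be made concrete with a brief sketch. The example below is a minimal illustration, assuming spaCy and its en_core_web_sm English model as a stand-in for the study's unspecified toolchain; it extracts the token, lemma, POS, and dependency features on which the locutionary analysis operates.

import spacy

# Small English pipeline; install via: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def preprocess(text):
    # Tokenisation, lemmatisation, POS-tagging and dependency parsing in one pass.
    doc = nlp(text)
    return [
        {
            "token": tok.text,
            "lemma": tok.lemma_,
            "pos": tok.pos_,        # coarse part-of-speech tag
            "dep": tok.dep_,        # dependency relation to the head
            "head": tok.head.text,  # governing token in the parse tree
        }
        for tok in doc
        if not tok.is_space
    ]

features = preprocess("They are hiding the truth from you!")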
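The illocutionary stage is described as supervised classification over four speech-act classes. A minimal inference sketch with the Hugging Face transformers library follows; the checkpoint name speech-act-roberta is hypothetical and stands for a RoBERTa model fine-tuned on a labelled speech-act corpus, RoBERTa being one of the architectures the abstract names.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The four illocutionary classes named in the abstract.
LABELS = ["assertive", "directive", "commissive", "expressive"]

# "speech-act-roberta" is a hypothetical fine-tuned checkpoint, not a published model.
tokenizer = AutoTokenizer.from_pretrained("speech-act-roberta")
model = AutoModelForSequenceClassification.from_pretrained(
    "speech-act-roberta", num_labels=len(LABELS)
)
model.eval()

def classify_illocution(sentence):
    # Return the most probable speech-act class for one sentence.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_illocution("Share this before they delete it!"))  # plausibly: directive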
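The perlocutionary stage (sentiment detection, emotion classification, expressive devices) can likewise be sketched with off-the-shelf components. The emotion checkpoint j-hartmann/emotion-english-distilroberta-base is one publicly available example rather than the study's own model, and the toy intensifier lexicon is a deliberately crude, illustrative proxy for detecting hyperbole and similar expressive devices.

from transformers import pipeline

# Default sentiment model plus a public emotion classifier; both are
# illustrative substitutes for the study's validated models.
sentiment = pipeline("sentiment-analysis")
emotion = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for every emotion label
)

# Toy intensifier lexicon: a crude stand-in for expressive-device detection.
INTENSIFIERS = {"shocking", "catastrophic", "total", "always", "never"}

def perlocutionary_profile(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    return {
        "sentiment": sentiment(text)[0],
        "emotions": emotion(text),
        "intensifier_hits": [w for w in words if w in INTENSIFIERS],
    }

profile = perlocutionary_profile("A shocking betrayal they will never admit!")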
Keywords: disinformation, speech act theory, artificial intelligence, NLP, language models, emotional manipulation, illocutionary intention

References
Austin, J. L. (1986). Word as action. New in Foreign Linguistics: Theory of Speech Acts, 17, 22–129.
Searle, J. R. (1986). What is a speech act? New in Foreign Linguistics: Theory of Speech Acts, 17, 151–169.
Grice, H. P. (1985). Logic and conversation. New in Foreign Linguistics: Linguistic Pragmatics, 16, 217–237.
Grishchenko, A. I. (2007). Sources of the emergence of expressive ethnonyms (ethnophaulisms) in modern Russian and English: Etymological, motivational and derivational aspects. In Active Processes in Modern Vocabulary and Phraseology: Materials of the International Conference in Memory of L. V. Nikolenko and Yu. P. Solodub (pp. 40–52). Yaroslavl: Remder.
Borisova, E. G. (2001). Perlocutionary Linguistics and its Teaching to Philology Students. Bulletin of Moscow University, 1, 115–133.
Vinay, R., Oehmichen, A., Agirre, E., & Davis, B. (2024). Emotional Manipulation Through Prompt Engineering Amplifies Disinformation Generation in AI Large Language Models. arXiv preprint arXiv:2403.03550. Available at: https://arxiv.org/abs/2403.03550
Smith, S. T., Kao, E. K., Mackin, E. D., Shah, D. C., Simek, O., & Rubin, D. B. (2020). Automatic Detection of Influential Actors in Disinformation Networks. arXiv preprint arXiv:2005.10879. Available at: https://arxiv.org/abs/2005.10879
Harris, S., Hadi, H. J., Ahmad, N., & Alshara, M. A. (2024). Fake News Detection Revisited: An Extensive Review of Theoretical Frameworks, Dataset Assessments, Model Constraints, and Forward-Looking Research Agendas. Technologies, 12(11), 222. DOI: https://doi.org/10.3390/technologies12110222
Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School (HKS) Misinformation Review. DOI: https://doi.org/10.37016/mr-2020-127
Bell, E. (2023, March 3). A fake news frenzy: Why ChatGPT could be disastrous for truth in journalism. The Guardian. Available at: https://www.theguardian.com/commentisfree/2023/mar/03/fake-news-chatgpt-truth-journalism-disinformation
Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2023). Can AI write persuasive propaganda? SocArXiv. DOI: https://doi.org/10.31235/osf.io/fp87b

This work is licensed under a Creative Commons Attribution 4.0 International License.