ChatGPT, which can be likened to a versatile “Swiss army knife,” offers the valuable advantages of repeatability and impartiality, yet it faces challenges in terms of interpretability. A working methodology driven by human knowledge, however, emerges as a potential way to address this issue. By Assistant Professor Kostas Karpouzis*
The widespread use of social media by journalists, citizens, and, more recently, politicians themselves has produced a significant surge in the volume of textual information generated and primarily circulated through these platforms. This stands in stark contrast to the relatively recent past, when political communication and information flowed through well-defined channels that shaped the style of the content they carried. In the contemporary landscape, political content production is marked by the same imperative to capture the (limited) attention of the audience, resembling the methods used to share lifestyle content through traditional media or streaming platforms. This often leads to a reliance on shallow and concise discourse, employing slogans, narrative descriptions, and intense, fluctuating emotions.
The assessment of this style of political communication can be approached with the same tools that were used in the past, relying predominantly on the expertise of journalists and their understanding of each speaker’s political background and positions. The problem with this conventional approach, however, is not only quantitative, since it would be very difficult for a journalistic team to find the time to evaluate all of this content systematically and reliably, but also qualitative: focusing on individual speeches or posts in isolation misses the opportunity to assess how a speaker’s topics of interest evolve over a period, or how the intensity of their discourse varies within a single speech.

Nowadays, computational tools provide a dependable answer to the quantitative questions we may have, allowing us to devote more time to the qualitative evaluation that typically forms the basis of political commentary. When it comes to textual content, the Large Language Model behind ChatGPT represents one of the mature technologies capable of generating and analyzing complex texts reliably: it can extract specific morphological and conceptual features of a text and return the requested information. The methodology employed by the iMEdD Lab in its research on political leaders’ campaign speeches follows a workflow that, with appropriate guidance, yields results from extensive text collections with minimal investment in time and technological infrastructure. In contrast to workflows built on programming environments such as Python, which typically demand significant time and effort before producing initial outputs, ChatGPT operates as a versatile “Swiss army knife”: it analyzes texts, evaluates them against specified features, and can even paraphrase them or extract summaries and bullet points for a website.
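For illustration only, a workflow of the kind described above could be scripted around a chat-based LLM. The feature list, scoring scale, and prompt wording below are hypothetical assumptions, not the iMEdD Lab’s actual prompts, which the article does not publish:

```python
import json

# Hypothetical rubric for illustration; the Lab's actual feature
# list and scale are not specified in this article.
FEATURES = ["sentiment", "populist rhetoric", "divisive language"]


def build_prompt(speech_text: str, features=FEATURES) -> str:
    """Compose a single, repeatable instruction asking the model to
    score each feature on a 0-10 scale and answer in JSON only, so
    the reply can be parsed programmatically."""
    rubric = ", ".join(features)
    return (
        "Rate the following political speech on these features: "
        f"{rubric}. Reply only with a JSON object mapping each "
        "feature name to an integer score from 0 to 10.\n\n"
        f"Speech:\n{speech_text}"
    )


def parse_scores(model_reply: str) -> dict:
    """Turn the model's JSON-only reply into a feature -> score dict."""
    return json.loads(model_reply)


# The prompt would then be sent to a chat model via an LLM provider's
# API, and the textual reply passed to parse_scores().
prompt = build_prompt("Citizens, together we will rebuild our economy!")
```

Sending the same prompt, with the same model, across an entire collection of speeches is what makes the scores comparable between speakers and over time.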
However, interpretability poses a significant challenge for artificial intelligence systems as a whole. The complexity of their operations often makes it impossible to explain the decisions or predictions they generate from the given data; in other words, we cannot discern why such a system produces a particular answer, or which features of a text were most influential in arriving at it. The research methodology employed by the iMEdD Lab appears to overcome this obstacle by providing a tangible description of characteristics of a text that may be subjective, such as sentiment or the presence of populist or divisive content.

This does not mean that technologies such as ChatGPT become a completely objective and transparent method of evaluation; achieving such an outcome would be highly challenging, especially considering that even human readers often struggle to reach a consensus when characterizing a text. These methods do, however, possess valuable qualities of repeatability and impartiality: when an algorithm is applied to the same text with the same question multiple times, the generated answer remains remarkably consistent in interpretive quality, something a human performing the same task may not be able to replicate. At the same time, the analyst team’s guidance of ChatGPT in expressing subjective features serves as a well-rounded example of leveraging AI: assigning well-defined, repetitive tasks to the machine under the supervision of a subject-matter expert, thereby freeing up time and mental capital for a more sober evaluation of the results it provides.
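The repeatability claim above can be checked empirically: ask the same question about the same text several times and measure the spread of the answers. A minimal sketch, with a stand-in scorer in place of a real LLM call (which would require an API key and network access):

```python
from statistics import pstdev


def repeatability(score_fn, text: str, runs: int = 5) -> dict:
    """Apply the same scoring function to the same text several
    times and report the spread of the answers; a spread near
    zero is the consistency discussed above."""
    scores = [score_fn(text) for _ in range(runs)]
    return {"scores": scores, "spread": pstdev(scores)}


# Stand-in scorer for illustration; in practice this would wrap a
# call to an LLM with a fixed prompt (and a low or zero sampling
# temperature, to keep answers stable).
def dummy_score(text: str) -> int:
    return 7


report = repeatability(dummy_score, "sample campaign speech")
```

The same harness, pointed at a human annotator instead of a model, would make the comparison the article draws between machine and human consistency concrete.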
*Kostas Karpouzis is Assistant Professor of Cultural Informatics at the Department of Communication, Media and Culture at Panteion University.
Translation: Anatoli Stavroulopoulou
The opinion and comment articles published on iMEdD Lab represent their authors and do not necessarily represent the views of iMEdD. Authors express themselves freely, without prior guidance or intervention.