The use of generative AI (GenAI) and Large Language Models (LLMs) can unintentionally alter the sentiment of original text, according to a study by Warwick Business School.
Academics from the Gillmore Centre for Financial Technology at the business school have published a new paper examining how the rise of LLMs influences public sentiment.
The Centre concluded that the modifications LLMs introduce to content make existing sentiment-analysis outcomes unreliable.
It says the findings, reached by replicating and adapting well-established experiments, contribute to the literature on GenAI and user-generated content by showing that the "widespread adaptation of LLMs changes the linguistic features of any text".
The researchers observed the phenomenon by analysing 50,000 tweets, using GPT-4 to rephrase each one.
Applying the Valence Aware Dictionary and sEntiment Reasoner (VADER) methodology to compare the original tweets with their GPT-4-rephrased counterparts, the researchers found that LLMs predominantly shift sentiment towards neutrality, moving text away from both positive and negative orientations.
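To illustrate the kind of rephrase-and-compare procedure the study describes, the sketch below rephrases a tweet with a GPT-4 model via the OpenAI API and measures the change in VADER's compound sentiment score. This is a minimal illustration, not the paper's actual pipeline: the prompt wording, model name, and helper functions are assumptions for the example.

```python
# Minimal sketch of comparing sentiment before and after LLM rephrasing.
# Assumes the `openai` and `vaderSentiment` packages are installed and
# OPENAI_API_KEY is set. The prompt and model name are illustrative only.
from openai import OpenAI
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

client = OpenAI()
analyzer = SentimentIntensityAnalyzer()

def rephrase(text: str) -> str:
    """Ask the model to rephrase a tweet (hypothetical prompt wording)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Rephrase this tweet: {text}"}],
    )
    return response.choices[0].message.content

def sentiment_shift(original: str) -> float:
    """Difference in VADER compound score (rephrased minus original).

    The compound score runs from -1 (most negative) to +1 (most positive),
    so a score moving towards 0 indicates increased neutrality.
    """
    rephrased = rephrase(original)
    return (analyzer.polarity_scores(rephrased)["compound"]
            - analyzer.polarity_scores(original)["compound"])

print(sentiment_shift("I absolutely love this new phone, it's amazing!"))
```

Run over a large sample of tweets, a systematic drift of compound scores towards zero would reflect the neutralising effect the researchers report.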
“Our findings reveal a notable shift towards neutral sentiment in LLM-rephrased content compared to the original human-generated text," said Ashkan Eshghi, Houlden Fellow at the Gillmore Centre for Financial Technology. "This shift affects both positive and negative sentiments, ultimately reducing the variation in content sentiment."
He continued: "While LLMs do tend to move positive sentiments closer to neutrality, the shift in negative sentiments towards a neutral position is more pronounced. This overall shift towards positivity can significantly impact the application of LLMs in sentiment analysis.”