Revised and Resubmitted
All visuals are not equal? Credibility perceptions, vaccine misinformation correction, and the moderating role of political interest
with Porismita Borah (lead), Yezi Hu, Liwei Shen, Yibing Sun, Luhang Sun, XiaoHui Cao, Dhavan Shah, Michael W. Wagner, and Sijia Yang
Prevalent vaccine-related health misinformation circulating on social media platforms such as Twitter/X can result in vaccine hesitancy, posing threats to public health. Past efforts to correct misinformation suggest that visuals are a promising strategy for encouraging peer correction and countering vaccine hesitancy. Our research explored the effectiveness of visuals in increasing users’ intentions to correct COVID-19 vaccine-related misinformation and their engagement with such corrections, while considering the roles of perceived credibility, vaccination history, and political interest. An online experiment (N = 1,303) with a 4 × 2 between-subjects factorial design was conducted in 2022. Results showed that for non-vaccinated individuals with low political interest, the testimonial image generated higher perceived credibility, which in turn led to greater willingness to correct misinformation. The vaccinated group, however, perceived the testimonial image as less credible, and this process was not moderated by political interest. Theoretical contributions and practical implications are discussed.
Russian Nuclear Threats in a Multi-Platform World: Shaping Communication Flows About Ukraine Across English, French, and German
with Jisoo Kim (lead), Junda Li, Yehzee Ryoo, Elohim Monard, Andreas Nanz, Célia Nouri, Erik P. Bucy, Jon C. W. Pevehouse, and Dhavan Shah
Since the start of the 2022 Russo-Ukrainian War, the battle over narratives, aimed at justifying the war and swaying public reactions for or against other countries’ involvement, has been a fundamental aspect of the conflict. Responding to Ukraine’s intense counter-offensives, Putin deployed atomic diplomacy, blackmailing the Western allies with the potential use of nuclear weapons, to undermine NATO’s resolve and bargain over Ukrainian sovereignty for “peace.” Considering these threats and prior concerns about nuclear crises, this paper provides a comprehensive analysis of the thematic structure of wartime social media discourse on nuclear issues, exploring its evolution over time and the interplay of narratives, events, and platforms. Using structural topic modeling and time series analyses, we examine social media discourse on Twitter (now X), YouTube, and Facebook in English, French, and German from November 2021 to November 2022. Our findings show that despite the early dominance of pro-Russian narratives at the start of the invasion, Putin’s explicit nuclear threat instead steered public attention toward domestic politics and partisan assessments of Western policies. While thematic emphases varied across linguistic contexts and platforms over time, our analysis reveals close coordination among anti-Russian voices on platforms such as Twitter and YouTube. This analysis contributes to understanding the intricate dynamics of social media information flows and public responses to geopolitical crises.
What Makes a Strong Argument in Health Promotional Messages? Identifying Latent Persuasive Message Features through an Agnostic Causal Machine Learning Approach
with Sijia Yang (lead), Luhang Sun, Ran Tao, Yoo Ji Suh, Yibing Sun, Yidi Wang, and Jiaying Liu
Argument strength has been widely studied in health message research, yet researchers have not taken advantage of machine learning to accelerate the discovery of the “recipe” for message-level argument strength. We applied an agnostic causal machine learning approach that integrates the supervised Indian Buffet Process (sIBP) algorithm with crowdsourcing in two online experiments, testing (a) textual tobacco control messages (K = 377) among Chinese male smokers (N = 1,206) and (b) COVID-19 vaccine promotional messages (K = 1,759) among a national sample of US adults (N = 819). The sIBP algorithm automatically discovered that messages emphasizing negative health consequences increased argument strength in both studies. In contrast, politicizing cues reduced the argument strength of social media posts promoting COVID-19 vaccines, suggesting that vaccine messaging in the US context may benefit from avoiding politicizing cues. We discuss the strengths and limitations of this computational approach for future message effects research.
