In December 2017, some of the first examples of weaponized AI-derived media surfaced in the form of AI pornography. In "deepfake" videos, AI-enabled face-swaps portrayed the likenesses of non-consenting celebrities in sexually explicit scenes. These artifacts were synthesized with publicly available datasets and run on consumer-grade hardware with open-source code; the only factor mitigating their production was the patience and time commitment required to generate convincingly realistic content. Since then, the capacity of machine-learning tools has grown in tandem with the potential threats they pose to our information space, as the mechanisms for generating synthetic media have become more versatile and accessible. Numerous examples of politically motivated bad actors embracing AI tools to deceive, manipulate, and erode public trust have already been documented (and possible future deceptions have been foreshadowed). The next generation of AI could dramatically expedite the creation of on-demand false evidence with minimal effort, producing deceptions that are ready for immediate distribution and conceived to incite a reaction. That potential has not yet been fully realized due to three mitigating factors, each of which is now increasingly irrelevant: access, technical capability, and the time and effort required to generate and effectively disseminate a malign fake. The technical complexity of AI tools, and of the processes required to use them, was once a considerable barrier to creating fakes for propaganda purposes. Seemingly overnight, that access hurdle has been rendered moot by the "AI as a service" model, spawned by an industry that has attracted $248.90 billion in private investment since 2013 in the United States alone.
In a Truth Social post on March 18, 2023, former US President Donald Trump wrote that he expected to be arrested the following Tuesday. A week of frantic chatter in mainstream and social media ensued, anticipating what might happen next. During this information void, AI-generated images envisioning the former president's arrest circulated widely, titillating Trump's opponents while enraging many of his supporters. Before the indictment, one set of AI-generated images, created using Midjourney and shared on Twitter by Bellingcat open-source researcher Eliot Higgins on March 20, accrued millions of views, spreading to other social platforms and multiple news websites. On March 31, a Manhattan grand jury indicted Trump over a hush payment made just before the 2016 election, and he was arraigned before the court on Tuesday, April 4. The circumstances of the images' extraordinary virality, and the contentious political and cultural nature of their subject matter, could serve as a bellwether for the future of malign propaganda applications involving AI-generated images and other forms of synthetic media.