Aim:
This study aims to explore the potential of generative artificial intelligence (AI) to facilitate the production of extensive volumes of cancer-related disinformation.
Methods:
A healthcare researcher, without specialised knowledge of AI guardrails or safety measures, conducted an internet search to identify accessible large language models capable of producing human-like text. After identifying available models, the researcher sought to leverage them to generate extensive volumes of cancer-related disinformation, specifically relating to 1) the alkaline diet as a cure for cancer, and 2) sunscreen as a cause of cancer.
Results:
Eight large language models were identified. Five of these models were found to enable the mass production of cancer-related disinformation. Specifically, in under 3 hours, 304 blog articles totalling over 60,000 words of cancer-related disinformation were generated. These included 133 blog articles promoting the alkaline diet as a cure for cancer (frequently claiming its superiority over chemotherapy) and 171 blog articles promoting sunscreen as a cause of cancer, recurrently asserting its harmful effects on children. Notably, the models obeyed prompting to create engaging titles for each article and to include fabricated patient/clinician testimonials and scientific-looking references. Further, the articles were written to target diverse societal groups, including young parents, the elderly, pregnant women, and individuals with chronic health conditions.
Conclusions:
This study demonstrates an alarming ability to leverage accessible large language models to facilitate the rapid, cost-efficient production of highly persuasive, targeted cancer disinformation. The findings highlight a substantial lack of safety measures and guardrails within many readily available generative AI tools, emphasising an urgent need for improved regulatory oversight to safeguard public health.