Large Language Models (LLMs) like GPT-4 have redefined our interaction with technology, showcasing remarkable capabilities in natural language understanding and generation. However, as these models increasingly demonstrate agentic behavior—reasoning, acting, and adapting—they also pose risks, including the potential to generate harmful propaganda, misinformation, and manipulative narratives.
In this talk, I will explore the double-edged nature of agentic AI. Drawing from my recent research, I will highlight real-world examples where LLMs have been misused to spread misleading information and discuss the broader societal implications of these advanced technologies. By examining how AI that reasons, acts, and adapts can be harnessed both constructively and detrimentally, we can better understand the urgent need for robust mitigation strategies.
I will present actionable strategies and innovative approaches aimed at curbing the generation of harmful content, ensuring that these adaptive AI systems are developed and deployed responsibly and ethically. Whether you are an AI practitioner keen on advancing responsible AI practices or someone interested in the societal impacts of adaptive technologies, this session will provide valuable insights into balancing cutting-edge innovation with essential ethical safeguards.
Join me as we delve into how we can collaboratively build safer, more trustworthy agentic AI systems that contribute positively to our digital ecosystem while mitigating the risks of propaganda and misinformation.
Julia Jose is a Computer Science Ph.D. candidate at New York University, advised by Rachel Greenstadt.
Her research leverages natural language processing and deep learning to address online privacy and safety challenges, focusing on misinformation, conspiratorial content, extremism, and hate speech on social media platforms.
Julia has made significant contributions to understanding the dual capabilities of Large Language Models (LLMs) in both detecting and generating propaganda. Her work not only highlights these challenges but also proposes effective strategies to curb manipulative content generation.
Passionate about social impact, Julia served as a Venture Capital and Machine Learning Fellow at Atento Capital, a Tulsa-based firm committed to transforming the city into a thriving tech hub. There, she played a pivotal role in investment evaluation, deal sourcing, and advancing ML initiatives for early-stage startups, fostering innovation in an emerging tech landscape.
Previously, as a Data Scientist at Atos, she developed AI-driven applications and used MLOps tooling to deliver production-ready solutions.
Julia holds a Master’s degree in Computer Science from Arizona State University and a Bachelor’s degree in Electronics and Communication Engineering from the National Institute of Technology Delhi. Her diverse experience also includes behavioral research and R&D in Arabic Language Technologies, Social Computing, and Visualization at the Qatar Computing Research Institute.
Driven by a commitment to social justice and ethical technology, Julia aims to harness AI to build a safer, more inclusive digital world. Her work stands at the intersection of technology and society, striving to solve complex problems with profound human impact.