Caroline Sinders is a machine learning designer, user researcher, and artist. For the past few years, she has been focusing on the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital, conversational spaces. Caroline is a designer and researcher at Wikimedia and a BuzzFeed/Eyebeam Open Lab Fellow. She holds a master's degree from New York University's Interactive Telecommunications Program.
Emotional Trauma and Machine Learning
How do we create, code, and make emotional data inside of systems? And how do we create the necessary context in larger systems that use data? Is it possible to use machine learning to solve very hard problems around conversation? For the past two years, I've been studying internet culture, online conversations, memes, and online harassment. I also worked as a user researcher at IBM Watson, helping design and lay out systems for chatbot software. As a designer and researcher interested in all of the nuances of human conversations and emotions, from humor to sadness, to memes and harassment, I wonder: is it possible to code emotions into machine learning systems? And what are the ethical implications of that? Can we design systems to mitigate harassment, or to elevate humor? And can these systems promote human agency, allowing users to participate in deciding and structuring the systems they talk in? Can design and user participation help set what is harassment and what is not?
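To make the question concrete, here is a minimal sketch of the kind of supervised text classifier that harassment-mitigation systems are typically built on, assuming scikit-learn is available and a small hand-labeled corpus exists. The example comments, labels, and model choice are illustrative assumptions, not a description of any deployed system.

    # A minimal sketch of a harassment classifier, assuming scikit-learn
    # and a small hand-labeled corpus. Comments and labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = [
        "you are brilliant, thank you for this",
        "nobody wants you here, log off forever",
        "great thread, learned a lot",
        "you should be ashamed to exist",
    ]
    labels = [0, 1, 0, 1]  # 0 = not harassment, 1 = harassment

    # TF-IDF turns each comment into a weighted bag of words; logistic
    # regression learns which words correlate with the harassment label.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, labels)

    # predict_proba exposes the model's uncertainty, which matters here:
    # a borderline score is a cue for human review, not automatic action.
    print(model.predict_proba(["log off, nobody wants you"]))

Even this toy version surfaces the ethical question above: every choice of label and training example encodes someone's judgment about what harassment is.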
With machine learning, the creators of the system often decide its norms, and the users are left out of the collaboration. How do we create systems that are transparent for users, and that also facilitate user participation? With online communities, communication, and culture, users make, users do, users are the community.
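One way to picture that participation is a hypothetical community-labeling loop, in which training labels come from users' votes rather than from the system's creators alone. The function name, quorum, and agreement threshold below are my assumptions for illustration.

    # A hypothetical sketch of community-driven labeling: users, not the
    # system's designers, decide what counts as harassment. The quorum and
    # agreement thresholds are illustrative assumptions.
    from collections import Counter

    def community_label(votes, quorum=5, agreement=0.8):
        """Return a label only when enough users agree, else None."""
        if len(votes) < quorum:
            return None                      # not enough participation yet
        label, n = Counter(votes).most_common(1)[0]
        return label if n / len(votes) >= agreement else None

    # Votes cast by community members on one comment.
    votes = ["harassment", "harassment", "ok", "harassment", "harassment"]
    print(community_label(votes))            # 'harassment' (4/5 agreement)

    # Comments that reach consensus become new training data, so the
    # model's definition of harassment is set by the community.

Returning None when consensus is missing is a deliberate design choice in this sketch: ambiguous cases route back to human discussion rather than automated judgment, keeping the users, who make, do, and are the community, inside the loop.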