Emotion, Social Robots, and a New Human-Robot Relationship
We have welcomed conversational AI technologies into our homes, workplaces, and institutions, where we interact with them on a daily basis. The proliferation of digital assistants in a multitude of embodiments (e.g., speakers, displays, avatars, robots) in human environments over extended periods of time provides us with new ways to investigate, develop, and assess the design of personified AIs that emotionally engage and support people to promote human flourishing across a wide range of applications and usage contexts. In this talk, I highlight a number of research projects where we are developing, fielding, and assessing social robots in homes, schools, and hospitals. We explore different embodiments and develop adaptive algorithmic capabilities for our robots to sustain interpersonal engagement and personalize to people’s needs to support novel interventions in education and wellness. In addition to evaluating the impact of these capabilities and features on improving learning, sustaining engagement, nudging behavior, and shifting attitudes, we are also examining the nature of the relationship that people form with these personified AI technologies and how it contributes to these impacts. We conclude by reflecting on the ethical and responsible design of intelligent technologies that emotionally engage and build relationships with people.
Cynthia Breazeal is a Professor at the MIT Media Lab, where she founded and directs the Personal Robots Group. She is also Associate Director of the Media Lab in charge of new strategic initiatives and Director of MIT’s initiative on Responsible AI for Social Empowerment and Education (RAISE). She is a pioneer in the field of social robotics and human-robot interaction. Her research focuses on the design and real-world impact of personalized and emotionally engaging personified AI technologies that promote personal growth, learning, creativity, and flourishing for people of all ages. She is the author of the seminal book ‘Designing Sociable Robots,’ an AAAI Fellow, and a recipient of the George R. Stibitz Computer & Communications Pioneer Award. She has spoken at prestigious venues such as TED, CES, SXSW, the World Economic Forum, and the United Nations on topics related to AI, innovation, and society. She is globally recognized as an award-winning innovator, designer, and entrepreneur. Her work has been recognized by the National Academy of Engineering, the National Design Awards, and Technology Review’s TR100/35 Award. She was founder, Chief Scientist, and Chief Experience Officer of the mass consumer home robotics startup Jibo, Inc., whose eponymous robot received numerous design and innovation awards from CES, Fast Company, and Core 77, and was featured on the cover of TIME magazine as part of the 2017 Best Inventions Awards. She received her doctorate from MIT in Electrical Engineering and Computer Science in 2000.
Robots, Love, and Human Relations: Transparency and Consent with Commercial Products
From its beginning, the project of AI has been about both making machines more intelligent and understanding human and other animal intelligence by modelling it on machines. What does it tell us about our lives, our morality, and our relationships to governments and corporations that individuals want to be “companions” with designed artefacts? What are the obligations of those of us who design the artefacts they love? This talk discusses:
1) the biology of love,
2) the morality of consent,
3) the reason we use governments to regulate corporations,
4) why the OECD and EU have decided AI must be transparent, and
5) how much fun we can have anyway.
Joanna J Bryson is an academic recognised for broad expertise on intelligence, its nature, and its consequences. She advises governments, transnational agencies, and NGOs globally, particularly on AI policy. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). Her work has appeared in venues ranging from reddit to the journal Science. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, the Mannheim Centre for Social Science Research, the Konrad Lorenz Institute for Evolution and Cognition Research, and the Princeton Center for Information Technology Policy. During her PhD she first observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact,” in 1998. She has remained active in the field, including coauthoring the first national-level AI ethics policy, the UK’s Principles of Robotics (2011). She continues to research both the systems engineering of AI and the cognitive science of intelligence, with present focusses on the impact of technology on human cooperation and new models of governance for AI and ICT. She is presently the Hertie School’s Professor of Ethics and Technology, a position she took up on 1 February 2020.
Attention by Design: Course-Corrections for Wandering Minds
A strong social consensus has formed around the association between digital technology and toxic psychological effects, with a notable emphasis on themes of distraction, addiction, and exhaustion. Although digital technologies might extend human communicational range, expand experiential horizons, and afford new modes of self-expression, their intensified temporalities and relentless demands for sensory and cognitive engagement pull us into coercive loops of escape and self-forgetting and exhaust our capacity to resist. Yet a new crop of technologies promises to promote wellbeing by helping beleaguered humans guard against, counteract, or “detox” from toxic tech. These attention-protecting, attention-boosting, attention-enabling technologies run the gamut from smart timers and notification filters to distraction-reducing computer software and course-correcting wearable devices designed to sense, and sometimes intervene in, users’ mental states. This talk parses the design logics of three such devices: the Muse headset, AttentiveU glasses, and the FeelZing patch. According to what conception of attention does each device operate? What attentional interventions does each make? What kind of attentional subjects do its features and functions address, enable, and, perhaps, bring into being? Considered together and in contrast, their respective formulae for promoting attentional wellbeing reveal tensions and fissures in the dominant attentional logics of the day and raise a set of ethical questions for the future of attention modulation.
Natasha Schüll is a cultural anthropologist and associate professor in the Department of Media, Culture, and Communication at New York University. Her 2012 book, Addiction by Design, parses the intimate relationship between the experience of gambling addiction and casino industry design tactics, showing how architectural, atmospheric, ergonomic, audiovisual, and algorithmic-computational techniques are marshalled to suspend, and monetize, gamblers’ attention. Her current book project, “Keeping Track,” explores the rise of sensor-based digital technologies of the self and the new modes of introspection, self-care, and self-regulation they offer. Her documentary film, Buffet: All You Can Eat Las Vegas, has screened multiple times on PBS and appeared in numerous film festivals. Her research and op-eds have been featured on 60 Minutes, NPR, WGBH, and WNYC, and in The New York Times, The Economist, The Atlantic, The Washington Post, The Financial Times, and Salon.