Félicitations aux gagnants des prix du meilleur article #AIES2024

winners' medal

The 7th AAAI/ACM Conference on AI, Ethics, and Society (AIES-24) took place in San Jose, California, from October 21 to 23, 2024. During the conference’s opening session, the winners of the best papers were announced. They are as follows:

Best Paper

Red-Teaming for Generative AI: Panacea or Security Theater?
Michael Feffer, Anusha Sinha, Wesley H. Deng, Zachary C Lipton, Hoda Heidari

Abstract: In light of growing concerns about the safety, security, and reliability of generative AI models (GenAI), practitioners and regulators have emphasized AI red-teaming as a crucial part of their strategies to identify and mitigate these risks. However, despite the central role of AI red-teaming in policy discussions and corporate messaging, significant questions remain about what it precisely entails, its potential role in regulation, and how it relates to conventional red-teaming practices as originally conceived in cybersecurity. In this work, we identify recent instances of red-teaming activities in the AI industry and conduct a comprehensive review of relevant research literature to characterize the scope, structure, and criteria of AI red-teaming practices. Our analysis reveals that previous AI red-teaming methods and practices diverge along several axes, including the purpose of the activity (often vague), the artifact evaluated, the framework in which the activity is conducted (e.g., actors, resources, and methods), and the resulting decisions (e.g., reporting, disclosure, and mitigation). In light of our findings, we argue that while red-teaming can be a valuable concept for characterizing GenAI harm mitigation, and the industry can effectively apply red-teaming and other behind-the-scenes strategies to safeguard AI, gestures towards red-teaming (based on public definitions) as a panacea for all possible risks border on security theater. To move towards a more robust evaluation toolkit for generative AI, we synthesize our recommendations into a question bank intended to guide and support future AI red-teaming practices.

Read the full paper here.


Best Paper Runner-Up

The Code that Binds Us: Navigating the Appropriateness of Human-AI Assistant Relationships
Arianna Manzini, Geoff Keeling, Lize Alberts, Shannon Vallor, Meredith Ringel Morris, Iason Gabriel

Abstract: The development of increasingly agentive and human-like AI assistants, capable of performing a wide range of tasks on behalf of the user over time, has sparked increased interest in the nature and limits of human-AI interactions. Such systems may indeed prompt a shift from task-focused interactions with AI, at discrete time intervals, to ongoing relationships – where users develop a deeper sense of connection and attachment to the technology. This paper explores what it means for relationships between users and advanced AI assistants to be appropriate and proposes a new framework for evaluating both user relationships with AI and developers’ design choices. We first introduce advanced AI assistants, motivating the question of appropriate relationships by exploring several distinctive features of this technology. These include anthropomorphic cues and the longevity of user interactions, increased AI agency, the generality and ambiguity of context, and the forms and depth of dependency the relationship might engender. Drawing on diverse ethical traditions, we then consider a series of values, including benefit, flourishing, autonomy, and care, that characterize appropriate human interpersonal relationships. These values guide our analysis of how the distinctive features of AI assistants may give rise to inappropriate relationships with users. Specifically, we discuss a set of concrete risks arising from user-AI assistant relationships that: (1) cause direct emotional or physical harm to users, (2) limit users’ opportunities for personal development, (3) exploit users’ emotional dependency, and (4) generate material dependencies without adequate commitment to users’ needs. We conclude with a set of recommendations to address these risks.

Read the full paper here.


Best Student Paper

Automate or Assist? The Role of Computational Models in Identifying Gendered Speech in U.S. Court Transcripts
Andrea W Wen-Yi, Kathryn Adamson, Nathalie Greenfield, Rachel Goldberg, Sandra Babcock, David Mimno, Allison Koenecke

Abstract: The language used by actors in U.S. courts during criminal trials has long been studied for its biases. However, systematic studies of biases in high-stakes court trials have been challenging due to the nuanced nature of biases and the required legal expertise. Large language models offer the possibility of automating annotation. But to validate the computational approach, it is necessary to understand both how automated methods fit into existing annotation workflows and what they actually offer. We present a case study on adding a computational model to a complex, high-stakes problem: identifying gendered language in U.S. trials for accused women. Our team of experienced death penalty attorneys and NLP technologists pursues a three-phase study: first manually annotating, then training and evaluating computational models, and finally comparing expert annotations to model predictions. Unlike many typical NLP tasks, annotating gender biases in capital trials over months is complicated, with many individual judgments. Contrary to classic arguments for automation based on efficiency and scalability, legal experts find computational models most useful for providing opportunities to reflect on their own annotation biases and reach consensus on annotation rules. This experience suggests that seeking to replace experts with computational models for complex annotations is both unrealistic and undesirable. Instead, computational models offer valuable opportunities to assist legal experts in annotation-based studies.

Read the full paper here.


Best Student Paper Runner-Up

Still Watching Me: How Data Protection Supports AI Surveillance Architecture
Rui-Jie Yew, Lucy Qin, Suresh Venkatasubramanian

Abstract: Data forms the backbone of artificial intelligence (AI). Therefore, privacy and data protection laws have a strong influence on AI systems. Shielded by the rhetoric of compliance with data protection and privacy regulations, privacy-preserving techniques have enabled the extraction of new forms of data. We illustrate how the application of privacy-preserving techniques in AI system development – from private set intersection in dataset curation to homomorphic encryption and federated learning in model computation – can further support surveillance infrastructure under the guise of regulatory compliance. Finally, we propose technological and policy strategies to evaluate privacy-preserving techniques in light of the protections they actually afford. We conclude by highlighting the role technologists could play in shaping policies against surveillance AI technologies.

Read the full paper here.


The open-access conference proceedings are available here.

Tags: , , ,



AIhub-square-2021-150x150 Félicitations aux gagnants des prix du meilleur article #AIES2024 NEWS

AIhub
is dedicated to providing free, high-quality information on AI.

AIhub-square-2021-150x150 Félicitations aux gagnants des prix du meilleur article #AIES2024 NEWS

AIhub is committed to offering free, high-quality information on AI.

Source