The 27th European Conference on Artificial Intelligence (ECAI-2024) took place from October 19 to 24 in Santiago de Compostela, Spain. The venue also hosted the 13th Conference on Prestigious Applications of Intelligent Systems (PAIS-2024). During the week, both conferences announced the winners of their outstanding paper awards.
The winning papers were selected based on reviews written during the paper selection process, nominations submitted by individual program committee members, additional input solicited from external experts, and the judgment of the program committee chairs.
ECAI Outstanding Paper Awards
Enhancing Calibration by Linking Focal Loss, Temperature Scaling, and Quality
Viacheslav Komisarenko and Meelis Kull
Abstract: Proper losses such as cross-entropy encourage classifiers to produce well-calibrated class probabilities on training data. Due to a lack of generalization, these classifiers tend to be overconfident on test data, necessitating calibration methods such as temperature scaling. Focal loss is not proper, but training with it often results in better-calibrated classifiers on test data. Our first contribution is a simple explanation of why focal loss training often leads to better calibration than cross-entropy training. We prove that focal loss can be decomposed into a confidence-raising transformation and a proper loss. Consequently, focal loss pushes the model to produce underconfident predictions on training data, which, thanks to the generalization gap, leads to better calibration on test data. Secondly, we reveal a close link between temperature scaling and focal loss through its confidence-raising transformation, which we call the focal calibration map. Thirdly, we propose focal temperature scaling, a novel post-hoc calibration method combining focal calibration and temperature scaling. Our experiments on three image classification datasets demonstrate that focal temperature scaling outperforms standard temperature scaling.
Read the full paper here.
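For readers unfamiliar with the two ingredients the abstract combines, here is a minimal NumPy sketch (illustrative only, not the authors' code) of temperature scaling and of focal loss as a down-weighted cross-entropy; both loss functions take the probability the model assigns to the true class:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature scaling: divide logits by T before the softmax.
    # T > 1 softens (reduces) confidence; T = 1 leaves it unchanged.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p_true):
    # Proper loss: negative log-probability of the true class.
    return -np.log(p_true)

def focal_loss(p_true, gamma=2.0):
    # Focal loss multiplies cross-entropy by (1 - p)^gamma,
    # down-weighting examples the model already classifies confidently.
    return (1.0 - p_true) ** gamma * -np.log(p_true)

logits = [2.0, 0.5, -1.0]
p_plain = softmax(logits)          # standard probabilities
p_scaled = softmax(logits, T=2.0)  # temperature-scaled, less peaked
```

On a confidently correct example (say p_true = 0.9), the focal loss is far smaller than cross-entropy, which is the down-weighting effect the abstract's decomposition explains.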
Adaptive Balancing of Exploration and Exploitation in Classical Planning
Stephen Wissow and Masataro Asai
Abstract: Balancing exploration and exploitation has long been a central issue in both adversarial games and automated planning. Although it has been extensively analyzed in the multi-armed bandit (MAB) literature, and the games community has achieved great success with MAB-based Monte Carlo Tree Search (MCTS) methods, the planning community has struggled to make progress in this area. We describe how the assumption of Upper Confidence Bound 1 (UCB1), namely that reward distributions have a known bounded support shared among siblings (arms), is violated when MCTS/trial-based heuristic tree search (THTS) in previous work uses the heuristic values of search nodes in classical planning problems as rewards. To address this issue, we propose a new Gaussian bandit, UCB1-Normal2, and analyze its regret bound. It is variance-sensitive like UCB1-Normal and UCB-V, but has a distinct advantage: it neither shares UCB-V's assumption of a known bounded support nor relies on UCB1-Normal's conjectures about Student's t and χ2 distributions. Our theoretical analysis predicts that UCB1-Normal2 will perform well when the estimated variance is accurate, which can be expected in deterministic, discrete, finite state-space search, as in classical planning. Our empirical evaluation confirms that MCTS combined with UCB1-Normal2 outperforms Greedy Best First Search (the traditional baseline) as well as MCTS with other bandits.
Read the full paper here.
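For intuition, a generic variance-sensitive optimistic index in the spirit of UCB1-Normal can be sketched as follows. This is a textbook-style illustration under assumed notation, not the paper's UCB1-Normal2 formula or its regret analysis:

```python
import math

def variance_aware_ucb(mean, var, pulls, total_pulls, c=1.0):
    # Optimistic index: sample mean plus an exploration bonus that
    # grows with the estimated variance of the arm's rewards and
    # shrinks as the arm is pulled more often.
    return mean + c * math.sqrt(var * math.log(total_pulls) / pulls)

# A rarely pulled, high-variance arm receives a larger bonus than a
# frequently pulled, low-variance arm with the same sample mean, so
# the bandit explores where its value estimate is least trustworthy.
risky = variance_aware_ucb(mean=0.5, var=4.0, pulls=2, total_pulls=100)
safe = variance_aware_ucb(mean=0.5, var=0.1, pulls=50, total_pulls=100)
```

The point of a variance-sensitive index is exactly what the abstract exploits: no bound on the reward support is needed, because the estimated variance itself calibrates how much optimism each arm deserves.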
FairCognizer: A Model for Accurate Predictions with Inherent Fairness Assessment
Adda-Akram Bendoukha, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey
Abstract: Algorithmic fairness is a crucial challenge in creating reliable machine learning (ML) models. ML classifiers strive to make predictions that closely match real-world observations (ground truth). However, if the ground truth data itself reflects biases against certain sub-populations, a dilemma arises: prioritize fairness and potentially reduce accuracy, or emphasize accuracy at the expense of fairness. This work proposes a new training framework that goes beyond achieving high accuracy. Our framework trains a classifier not only to provide optimal predictions but also to identify potential fairness risks associated with each prediction. To achieve this, we specify a dual-labeling strategy where the second label contains a fairness assessment per prediction, called injustice risk assessment. Additionally, we identify a subset of samples as highly vulnerable to unfair group classifiers. Our experiments demonstrate that our classifiers achieve optimal accuracy levels on the Adult-Census-Income and Compas-Recidivism datasets. Moreover, they identify unfair predictions with nearly 75% accuracy, at the cost of a 45% increase in classifier size.
Read the full paper here.
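As a loose illustration of the dual-labeling idea, each training sample carries a class label plus a fairness-risk flag. The risk criterion below is hypothetical, chosen only to make the sketch concrete; it is not the paper's definition of injustice risk:

```python
from dataclasses import dataclass

@dataclass
class DualLabeledSample:
    features: tuple
    target: int          # first label: ground-truth class
    injustice_risk: int  # second label: 1 if flagged as fairness-risky

def risk_flag(pred_with_attr: int, pred_without_attr: int) -> int:
    # Hypothetical risk criterion: flag a sample when a model's
    # prediction flips depending on whether the sensitive attribute
    # is visible to it.
    return int(pred_with_attr != pred_without_attr)

sample = DualLabeledSample(features=(1.0, 0.0), target=1,
                           injustice_risk=risk_flag(1, 0))
```

A classifier trained on such pairs learns two outputs per input, a prediction and a per-prediction fairness assessment, which is the structure the abstract describes.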
PAIS Outstanding Paper Award
More (Enough) is Better: Towards Few-Shot Illegal Dumping Waste Segmentation
Matias Molina, Carlos Ferreira, Bruno Veloso, Rita P. Ribeiro, and João Gama
Abstract: Image segmentation for detecting illegal dumping waste in aerial images is essential for monitoring environmental crime. Despite advances in segmentation models, the main challenge in this field is the lack of annotated data, owing to the unknown locations of illegal waste disposal sites. This work focuses primarily on evaluating segmentation models that identify individual segments of illegal dumping waste using limited annotations. By proposing to combine agnostic segmentation with supervised classification approaches, this research aims to lay the groundwork for a comprehensive model evaluation, contributing to environmental-crime and sustainability monitoring efforts. We primarily explore different metrics and their combinations to better understand how to measure the quality of this applied segmentation problem.
Read the full paper here.
You can find the conference proceedings here.
Tags: ECAI2024, quick read
Lucy Smith, Editor-in-Chief of AIhub.