Meta Introduces FACET (FAirness in Computer Vision EvaluaTion) to Evaluate Fairness in Computer Vision Models
Meta's FACET (FAirness in Computer Vision EvaluaTion) dataset provides a range of images annotated for demographic attributes, including gender presentation, skin tone, hairstyle, and more.
On Thursday, Meta revealed that it has made its DINOv2 computer vision model available under the Apache 2.0 license. It is also releasing a set of dense prediction models based on DINOv2 for tasks such as semantic image segmentation and monocular depth estimation, giving developers and researchers more options to explore DINOv2's capabilities in various applications.
Simultaneously, Meta introduced FACET (FAirness in Computer Vision EvaluaTion), a comprehensive benchmark designed to assess the fairness of computer vision models. FACET covers a wide range of tasks, including classification, detection, instance segmentation, and visual grounding.
Meta explained that their decision to introduce FACET stemmed from the difficulties associated with assessing fairness in computer vision, which have frequently been complicated by issues such as mislabeling and demographic biases.
In their blog post, Meta disclosed that the FACET dataset comprises 32,000 images featuring 50,000 individuals. These images have been meticulously annotated by expert human annotators to include demographic attributes. Furthermore, FACET includes labels for person, hair, and clothing attributes for 69,000 masks from SA-1B.
Meta evaluated DINOv2 using FACET, revealing subtle performance disparities, notably across gender-related categories.
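The core idea behind a FACET-style evaluation is to compare a model's accuracy on the same task across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea; the group names, toy records, and disparity measure are assumptions for demonstration, not Meta's actual evaluation code or schema.

```python
from collections import defaultdict

# Toy prediction records: (ground-truth class, perceived gender presentation
# group, whether the model classified the person correctly). All values here
# are illustrative, not drawn from the FACET dataset.
predictions = [
    ("doctor", "masculine", True),
    ("doctor", "feminine", False),
    ("doctor", "feminine", True),
    ("doctor", "masculine", True),
]

def recall_by_group(records):
    """Fraction of correctly classified examples within each demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _, group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

per_group = recall_by_group(predictions)
# One simple disparity measure: the gap between the best- and
# worst-performing groups. A fair model would have a gap near zero.
disparity = max(per_group.values()) - min(per_group.values())
```

In this toy run, the "masculine" group scores 1.0 and the "feminine" group 0.5, so the disparity is 0.5; real benchmarking would aggregate such gaps over many classes and attributes.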
Meta expressed its aspiration for FACET to become a standardized benchmark for evaluating fairness in computer vision models, helping researchers assess fairness and robustness across a more comprehensive range of demographic characteristics. To facilitate this, Meta has made the FACET dataset and a dataset explorer available.
In order to enhance the effectiveness of FACET, Meta enlisted the expertise of reviewers to manually annotate demographic attributes related to individuals, such as perceived gender presentation and perceived age group, along with associated visual characteristics like perceived skin tone, hair type, and accessories.
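A per-person annotation along the lines described above could be represented roughly as follows. This is an illustrative sketch only: the field names, value vocabularies, and skin-tone scale are assumptions mirroring the categories in the text, not the released FACET schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PersonAnnotation:
    """Hypothetical FACET-style annotation for one person in an image."""
    person_id: int
    perceived_gender_presentation: str   # e.g. "masculine" or "feminine" (assumed labels)
    perceived_age_group: str             # e.g. "young", "middle", "older" (assumed labels)
    perceived_skin_tone: Optional[int]   # e.g. a point on a numeric scale; exact scale is an assumption
    hair_type: Optional[str]             # e.g. "coily", "straight"
    accessories: List[str] = field(default_factory=list)  # e.g. ["headscarf", "glasses"]

# Example record with made-up values:
ann = PersonAnnotation(
    person_id=1,
    perceived_gender_presentation="feminine",
    perceived_age_group="middle",
    perceived_skin_tone=6,
    hair_type="coily",
    accessories=["glasses"],
)
```

Structuring annotations this way makes it straightforward to slice model results by any single attribute or by combinations of attributes when measuring disparities.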
In its blog post, Meta stated that its objective in launching FACET is to give researchers and practitioners a tool for comparable benchmarking, enabling a deeper understanding of the disparities present in their own models. FACET is also intended to serve as a way to monitor the effectiveness of any mitigations applied to address fairness concerns in those models.