Safe and Ethical AI (SEA) Platform Network
AI Fairness Evaluator
This project evaluates the fairness of AI models.
Start Evaluation
To start your evaluation, please select a model and the sensitive/protected attributes to test; a rough sketch of the kind of fairness check involved follows the attribute list below.
Model to be Evaluated:
Adult Model
Credit Model
COMPAS Model
Ctrip Model
Sensitive/Protected Attributes (Social Discrimination):
Gender Attribute
Age Attribute
Race Attribute
Multi-Attribute Combination (Gender, Age, Race)
Customer Consumption Behavior Attributes
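As a rough illustration of what such an evaluation involves, the minimal sketch below (in Python, not the platform's actual code) computes two common group-fairness metrics, demographic parity difference and equal opportunity difference, for a hypothetical binary classifier and a binary gender attribute. The data and names are purely illustrative.

# Minimal sketch of a group-fairness check over a sensitive attribute
# such as gender. This is NOT the platform's implementation; the data
# below is synthetic and purely illustrative.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates between groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical predictions for an Adult-style income model, with a
# binary gender attribute (0 = female, 1 = male).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
gender = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])

print("Demographic parity difference:",
      demographic_parity_difference(y_pred, gender))
print("Equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, gender))

The same pattern extends to the other attributes listed above: for a multi-attribute combination, the grouping variable would encode each (gender, age, race) subgroup rather than a single binary attribute, and metric gaps would be taken across all subgroups.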