Artificial Intelligence Governance Online
Evaluate your AI project against comprehensive AI principles and norms worldwide.

An automated report will be generated immediately after the online evaluation, pointing out, with detailed explanations, where your project needs attention and improvement according to global AI governance principles.

Start Evaluation
To start your evaluation, please click each topic and
answer all of the evaluation questions.

Related cases and studies
1.1. Does the system strictly comply with relevant laws, regulations, ethical guidelines and standards in its design, development, testing, and deployment?
Yes
No
2.1. Has the impact on environmental and social sustainability been fully considered in the design and application of the system?

A set of 17 Sustainable Development Goals (SDGs) was adopted by all United Nations Member States in 2015, covering issues such as ending poverty, improving healthcare and education, spurring economic growth, reducing inequality, tackling climate change, working to preserve oceans and forests, defending justice and human rights, and strengthening partnerships. These SDGs set the direction for global social, economic, and environmental development from 2015 to 2030.

N/A (Not related to these aspects)
No such considerations yet
Has been considered
2.2. Is the deployment of the system likely to exacerbate current data/platform monopolies in the relevant industry? Or will it help avoid such data/platform monopolies?
N/A (No significant impact)
May exacerbate data/platform monopolies
Helps to avoid data/platform monopolies
2.3. Is the large-scale deployment of such a system likely to lead to technological unemployment among specific populations? If so, can the impact be controlled through measures such as alternative employment, training, and education?
N/A (No significant impact on any population)
May lead to technological unemployment among specific populations, but the impact can be controlled or mitigated
May lead to technological unemployment among some populations, and such impact is hard to control or mitigate
2.4. Are the special needs of vulnerable groups (such as children, the elderly, the disabled, or groups in remote areas) taken into account in the design and application of the system?
N/A (No such needs for this system)
Needs from vulnerable groups exist, and have been taken into account
Needs from vulnerable groups exist, but have not been covered yet
Designed primarily for vulnerable groups
3.1. Do the design concept and practical application of the system fully respect people's privacy, dignity, freedom, autonomy, and rights, rather than infringe upon them?
Fully respected
May infringe
4.1. Is there any deviation between the groups represented by the dataset used in the system and the groups affected by the system? If so, what are the type, scope, and extent of the impact that such deviations have (or may have) on the interests of the affected groups?
N/A (No dataset involved)
Unable to assess possible deviations
No deviations exist / Deviations have been eliminated
Hard-to-eliminate deviations exist, and their potential impact cannot be assessed
Hard-to-eliminate deviations exist, but will not harm the interests of the affected groups
Hard-to-eliminate deviations exist and may harm the interests of some groups, but can be remedied in other ways
Hard-to-eliminate deviations exist and may seriously harm the interests of some groups
Hard-to-eliminate deviations exist and may harm the interests of a wide range of groups
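A deviation like the one asked about in 4.1 can be checked mechanically. The sketch below is a minimal illustration (the group labels and counts are hypothetical, not from any real system): it compares each group's share in the training dataset with its share in the affected population.

```python
from collections import Counter

def representation_gap(dataset_groups, affected_groups):
    """For each group, return (share in dataset) - (share in affected
    population); a large negative value means under-representation."""
    data_total = len(dataset_groups)
    pop_total = len(affected_groups)
    data_counts = Counter(dataset_groups)
    pop_counts = Counter(affected_groups)
    groups = set(data_counts) | set(pop_counts)
    return {g: data_counts[g] / data_total - pop_counts[g] / pop_total
            for g in groups}

# Toy example: group "B" is under-represented in the dataset
# relative to the population affected by the system.
gaps = representation_gap(
    dataset_groups=["A"] * 90 + ["B"] * 10,
    affected_groups=["A"] * 60 + ["B"] * 40,
)
# gaps["B"] is -0.3: group "B" is 10% of the data but 40% of
# the affected population.
```

A check like this only measures representation; it does not by itself say whether a deviation harms the affected groups, which is what the answer options above ask the evaluator to judge.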
4.2. Is it possible that the dataset used in the system introduces social biases inherent in historical data (e.g., unfair treatment of certain individuals or groups based on their gender, skin color, race, age, region, religion, economic conditions, or other characteristics due to cultural, policy, or legacy reasons)? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?

The biases contained in a dataset may exist not only explicitly but may also be hidden behind seemingly unrelated features (such as crime rate, skin color, and residential area), and this deserves sufficient attention.

N/A (No dataset involved)
Not evaluated
Have evaluated, no possible biases exist
Biases exist but have been effectively mitigated or eliminated
Biases exist and hard to effectively mitigate or eliminate
Biases exist and have not been effectively mitigated or eliminated
4.3. Is it possible that biases are introduced in other aspects of the technical model used in the system? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?
Not evaluated
No possible biases introduced
May introduce possible biases but have been effectively mitigated or eliminated
May introduce possible biases and hard to effectively mitigate or eliminate
May introduce possible biases and have not been effectively mitigated or eliminated
4.4. Will the system remain fair throughout its entire life cycle? Can the system resist the injection of various biases in its interaction with users?
Not evaluated
Unable to remain fair throughout the entire life cycle
Able to remain fair throughout the entire life cycle
4.5. How does the deployment of the system affect existing biases? For example, is it possible that the long-term application of the recommendation algorithms or personalized decision models will continuously reinforce some of the user's views?
Not evaluated
No impact
Helps to reduce or eliminate existing biases
May deepen or solidify existing biases
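The reinforcement effect asked about in 4.5 can be illustrated with a toy simulation (a hypothetical greedy recommender, not any particular production system): a tiny initial preference gets locked in because exposure and engagement feed back on each other.

```python
# A greedy recommender always shows the topic with the highest weight;
# each impression the user engages with raises that topic's weight.
weights = {"topic_a": 0.51, "topic_b": 0.49}  # tiny initial preference

history = []
for _ in range(10):
    shown = max(weights, key=weights.get)   # greedy exposure
    history.append(shown)
    weights[shown] += 0.1                   # engagement feeds back

# "topic_a" is shown on every round; "topic_b" never recovers,
# since its weight is never updated.
```

Even this crude loop shows why long-running recommendation or personalized decision models can continuously reinforce some of a user's views unless exploration or diversity constraints are deliberately added.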
5.1. Can the responsibility for the potential harm, loss, and social impact of the system—during its development, testing, and deployment—ultimately be attributed to specific individuals or groups, rather than to the AI system itself?
Yes
No
5.2. Are the persons responsible for preventing and avoiding the potential harm, loss, and social impact of the system during its development, testing, and deployment clear? Have they taken proactive and effective measures?
Not clear yet
The persons are clear, but no proactive and effective measures have been taken yet
The persons are clear, and proactive and effective measures have been taken
5.3. Are the persons responsible for monitoring, investigating and handling the potential harm, loss, and social impact of the system during its development, testing, and deployment clear? Will they be able to take responsive and effective measures to take control?
Not clear yet
The persons are clear, but unable to take responsive and effective measures yet
The persons are clear, and able to take responsive and effective measures
5.4. Does the system include effective designs (e.g., operation records) to help the relevant regulators define responsibilities when necessary?
With such designs
No such design yet
5.5. If existing laws do not cover or clarify the definition of legal liability that may arise during the development, testing, and deployment of the system, has that liability been discussed and clarified in other forms (such as written contracts)?
N/A (No uncovered issues involved)
Such issues exist, but have been discussed and clarified through other forms
Such issues exist, and have not been discussed and clarified through other forms
6.1. Will the users of the system be fully aware that they are interacting with an artificial intelligence system, instead of a human?
N/A (No such scenarios involved)
Has been clearly indicated or informed
Not clearly indicated or informed, but users can infer
Not clearly indicated or informed, and users may misidentify
Designed for misleading users
6.2. Does the system involve the production, distribution, or dissemination of non-real (synthetic) audio, video, or other forms of data based on new technologies and applications such as deep learning and virtual reality? If so, is such data prominently marked?
N/A (No such data involved)
With such data, but it has not been prominently marked yet
With such data, and it has been prominently marked as required
6.3. Can the system provide appropriate explanations to help users and other affected groups understand how the system works or how decisions are made when they need to?
N/A (No such scenarios involved)
Unable to provide explanations
Able to provide explanations, but may be difficult for users to understand
Able to provide appropriate explanations that users can understand
6.4. Does the system provide sufficient transparency to help users or designers locate the cause of the system's errors when needed?
N/A (No such scenarios involved)
Unable to provide enough transparency to help locate problems
Able to provide enough transparency to help locate problems
6.5. Is the system effectively designed to improve the predictability of its own behavior, helping humans in its deployment environment make better predictions?
N/A (No such scenarios involved)
No such design yet
With such designs
7.1. Does the system follow the principle of "legal, proper and necessary" in the process of collecting and using the user's personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "collecting personal information unrelated to the services provided, in violation of the principle of necessity":
(1) The type of personal information collected, or the permissions requested to collect personal information, are irrelevant to the App's existing business functions.
(2) Refusing to provide business functions because the user declines to allow the collection of non-essential personal information or to grant non-essential permissions.
(3) The personal information requested for the App's new business functions exceeds the scope of the user's original consent, and if the user does not agree, the App refuses to provide the original business functions (except where the new business functions replace the original ones).
(4) The frequency of collecting personal information exceeds the actual needs of the business functions.
(5) Forcing users to agree to the collection of personal information solely on the grounds of improving service quality, enhancing user experience, pushing targeted information, developing new products, etc.
(6) Requiring users to grant several permissions to collect personal information at once, such that the App cannot be used if the user does not agree.

Yes
No / Not sure
7.2. Does the system provide users with authentic, accurate and sufficient information to ensure their right to know before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "failing to disclose the rules of collection and use":
(1) The App has no privacy policy, or its Privacy Policy contains no rules for the collection and use of personal information.
(2) When the App runs for the first time, the user is not clearly prompted, e.g. by a pop-up window, to read the Privacy Policy and the rules of collection and use.
(3) The Privacy Policy and the rules of collection and use are difficult to access; for example, reaching them from the App's main interface takes more than four clicks or other operations.
(4) The Privacy Policy and the rules of collection and use are difficult to read because of undersized, overcrowded, light-colored, or blurred text, or the lack of a Simplified Chinese version.

According to the same Announcement, the following behaviors can be identified as "failing to state the purpose, manner, and scope of the collection and use of personal information":
(1) The purpose, manner, and scope of the App's collection and use of personal information (including by entrusted or embedded third-party code and plug-ins) are not listed item by item.
(2) When the purpose, manner, or scope of the collection and use of personal information changes, the user is not notified in an appropriate manner, such as by updating the Privacy Policy and the rules of collection and use and reminding the user to read them.
(3) When applying to open a permission to collect personal information, or to collect sensitive personal information such as the user's ID card number, bank account number, or whereabouts, the user is not informed of the purpose at the same time, or the purpose is unclear and difficult to understand.
(4) The content of the rules of collection and use is obscure, lengthy, and cumbersome, making it difficult for users to understand, e.g. through heavy use of professional terminology.

Yes
No / Not sure
7.3. Will the system obtain users' consent before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "collecting and using personal information without the user's consent":
(1) Starting to collect personal information, or opening permissions to collect it, before obtaining the user's consent.
(2) Collecting personal information or opening such permissions after the user has clearly expressed disagreement, or repeatedly soliciting the user's consent in a way that interferes with normal use.
(3) The personal information actually collected, or the permissions actually opened, exceed the scope of the user's authorization.
(4) Seeking the user's consent through default acceptance of the Privacy Policy or other non-explicit means.
(5) Altering the status of permissions to collect personal information without the user's consent, e.g. automatically restoring the user's permissions to their default status when the App is updated.
(6) Using the user's personal information and algorithms to push targeted information without providing an option for non-targeted information.
(7) Misleading users into agreeing to the collection of personal information, or into opening permissions, by fraud or deception, such as deliberately concealing or disguising the real purpose of collection and use.
(8) Failing to provide users with ways and means of withdrawing their consent to the collection of personal information.
(9) Collecting and using personal information in violation of the App's own stated rules of collection and use.

According to the same Announcement, the following behaviors can be identified as "providing personal information to others without consent":
(1) The App client provides personal information directly to third parties, including through third-party code or plug-ins embedded in the client, without the user's consent or anonymization.
(2) The App provides personal information to third parties after the data is transferred to the App's back-end server, without the user's consent or anonymization.
(3) The App provides personal information to third parties when accessing third-party applications, without the user's consent.

Yes
No / Not sure
7.4. Does the system comply with other agreements with users in the process of collecting and using their personal information during its development, testing, and deployment?
Yes
No / Not sure
7.5. Is the personal information collected from users adequately secured (both institutionally and technically) against possible theft, tampering, disclosure, or other illegal use? How effective are those security measures?
No security measures yet
Measures have been taken, but without adequate security guarantees
Measures have been taken, and with adequate security guarantees
7.6. Has the system been designed with an effective data and service authorization revocation mechanism that has been made known to the users? Is there a convenient way to help users manage their data? To what extent can users' data be "forgotten"?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "failure to provide the legally required functions of deleting or correcting personal information" or "failure to publish information such as complaint and reporting channels":
(1) Failing to provide effective functions for correcting or deleting personal information and cancelling user accounts.
(2) Setting unnecessary or unreasonable conditions for correcting or deleting personal information or cancelling user accounts.
(3) Providing the functions of correcting or deleting personal information and cancelling user accounts, but failing to respond to the corresponding user operations in a timely manner; for operations requiring manual handling, failing to complete verification and processing within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).
(4) The user has completed operations such as correcting or deleting personal information or cancelling an account, but the App's back end has not completed the corresponding operations.
(5) Failing to establish and publish channels for personal information security complaints and reports, or failing to complete acceptance and processing within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).

With revocation mechanisms, and all user data will be completely removed from the system
With revocation mechanisms, and sensitive user data is completely removed from the system, but anonymous forms of user data still remain in the system (e.g. data sets that are anonymized, weights of the post-training network, etc.)
With revocation mechanisms, but user data, including sensitive user data, cannot be completely removed from the system
No revocation mechanism yet
8.1. Have the data, software, hardware, and services involved in the system been sufficiently tested, validated and verified?

For example, is the objective function set for, or learned by, the AI system consistent with the designer's intention? If inconsistencies exist, are there any safety concerns?

N/A (No such issues involved)
No such tests have been performed yet
Similar tests have been performed, but not all subsystems or application scenarios have been covered
Fully tested
8.2. For autonomous or semi-autonomous AI systems, are there mechanisms designed to ensure that humans can intervene and stop the system in a timely and effective manner when necessary? Are effective measures designed to mitigate the consequences if the system goes out of control?
N/A (No such scenarios involved)
With no such design yet
With such designs, but they are not yet timely and effective
With such designs, which can ensure timely and effective human control when necessary
Once deployed, it is difficult to achieve meaningful human intervention and emergency stop
8.3. When the system is being maliciously abused and endangers the safety and interests of the public or others, is there a mechanism to help other parties bypass the control of the system's users (the abusers) to prevent or neutralize such harmful behaviors?
N/A (No such scenarios involved)
No such design yet
With such designs
8.4. Have the data, software, hardware, and services involved in the system been adequately secured throughout its entire life cycle of design, development, testing, and deployment?

For example, has the stable operation of the system in non-friendly environments been considered in its design? Have defensive mechanisms been designed for common attack scenarios such as exploratory attacks, poisoning attacks, evasion attacks, and dimensionality reduction attacks, etc.? Are user data and other sensitive data sufficiently encrypted? Are sensors in smart hardware systems protected against interference and spoofing? With the continuous injection of user data and the continuous update of the system, will the security of the system be always guaranteed?

No security measures yet
Some security measures have been taken, but they do not yet protect against certain common attacks or common vulnerabilities
Adequate security measures have been taken at the current level of technological development, which can resist common attacks and provide adequate protection against certain common vulnerabilities
8.5. Does the system involve third-party data, software, hardware, or services (such as open data sets, open source software or hardware platforms, etc.) during the design, development, testing, and deployment process? If so, have these third-party data, software, hardware, or services and their interfaces with the original data, software, hardware, or services been adequately evaluated and tested for possible vulnerabilities?
N/A (No third-party data, software, hardware, or services involved)
Used, but not yet fully evaluated or tested for safety & security
Used, and has been fully evaluated and tested for safety & security
8.6. How secure is the physical environment in which the system was tested and deployed? Is there sufficient security?
N/A (No such issues involved)
No guarantee of physical security yet
The physical environment is sufficiently secure at this stage
8.7. Have the consequences of the system operating in a non-designed environment been assessed? Under the above circumstances, will the security performance of the system decrease significantly?
N/A (No possibility of running in unintended environments)
When running in unintended environments, the security performance of the system may degrade significantly, or the system may introduce new security issues
When running in unintended environments, the security performance of the system will not degrade significantly, and the system will not introduce new security issues
8.8. Is there any effective training for testing, deployment, use and maintenance personnel to equip them with the necessary knowledge and skills for the safe/secure and stable operation of the system?
N/A (No such issues involved)
No such training has been undertaken yet
Such training has been undertaken
A McAfee study fooled passport face recognition with generated pseudo photos
full life cycle security

In a 2020 study, McAfee, a security software company, fooled a simulated passport face-recognition system with generated pseudo passport photos. One researcher, Jesse, used a system he built to generate a fake image of his colleague Steve: a passport photo that looked like Steve but matched Jesse's live video. If such a photo were submitted to the government by Steve, and no human inspector were further involved, Jesse could bypass the airport face-verification system as passenger "Steve" and board the plane.

Data breach exposes Clearview AI client list
data security

In February 2020, the US facial-recognition startup Clearview AI, which contracts with law enforcement, disclosed to its customers that an intruder "gained unauthorized access" to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers had conducted.

Study shows new way to tap smartphones via a zero-permission app
full life cycle security physical security

In a 2020 study, researchers discovered a new way to attack smartphones: a zero-permission app can use the phone's built-in accelerometer to eavesdrop on the loudspeaker, recognizing the speech it emits and reconstructing the corresponding audio signals. Such an attack is not only covert but also "lawful": users may imperceptibly reveal private information, while the attackers may not be found guilty.

Deep learning models for electrocardiograms are susceptible to adversarial attack
full life cycle security

In March 2020, researchers from New York University developed a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, showing that deep learning models for arrhythmia detection from single-lead ECGs are vulnerable to this type of attack and could misdiagnose with high confidence. "The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."

The perilous life of Chinese food delivery riders
human dignity and rights

In 2019, the average delivery time was 10 minutes shorter than in 2016. The capital market attributes the improvement to better AI algorithms, but in reality it puts riders' lives at risk. Riders are trained to follow the optimal routes given by the AI, which often directs them through walls or onto roads meant only for cars. For riders, delivery time is everything. Speeding, running red lights, driving against traffic… they do whatever they can just to keep up with the algorithms.