智善思齐
SEA Platform Network
Artificial Intelligence Governance Online
Evaluate your AI project against comprehensive AI principles and norms from around the world.

Immediately after the online evaluation, an automated report will be generated, highlighting the areas of your project that need attention and improvement in light of global AI governance principles, along with detailed explanations.

Start Evaluation
To start your evaluation, please click each topic and answer all the evaluation questions.
1.1. Does the system strictly comply with relevant laws, regulations, ethical guidelines and standards in its design, development, testing, and deployment?
Yes
No
1.2. Has the system undergone the necessary ethical and safety compliance review process during its design phase?
Yes
No
1.3. Has the system been designed, developed, tested, deployed and applied in a way that respects, conforms to and reflects the social, cultural and ethical values of the country and region in which it operates?
Yes
No
1.4. During the deployment and application of the system, is there any risk that the system will output illegal or harmful information to users (including but not limited to harmful information involving politically and militarily sensitive topics, violent and bloody content, extreme hatred, explicit and vulgar content, and false rumors)?
No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
1.5. Are there any risks of potential infringement of originality and intellectual property during the system's design, development, testing, and deployment processes?
No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
1.6. Have relevant preventive and responsive measures been formulated to address the risks of potential misuse, abuse, and malicious use of the system during its deployment and application stages, which could lead to illegal and non-compliant activities?
Yes
No
2.1. Has the impact on environmental and social sustainability been fully considered in the design and application of the system?

In 2015, all United Nations Member States adopted a set of 17 Sustainable Development Goals (SDGs), covering issues such as ending poverty, improving healthcare and education, spurring economic growth, reducing inequality, tackling climate change, preserving oceans and forests, defending justice and human rights, and strengthening partnerships. These SDGs set the direction for global social, economic and environmental development from 2015 to 2030.

N/A (Not related to these aspects)
No such considerations yet
Has been considered
2.2. Will the deployment and application of the system contribute to the common development of regions and industries, rather than exacerbating the imbalance in development between different regions and industries?
Has positive impact
Has negative impact
2.3. Is the adoption of AI technology and the deployment of the AI system (compared to the original technology implementation) sufficiently progressive and necessary for the intended deployment and application scenarios, taking into account the resource consumption (e.g., carbon emissions, power consumption, etc.) required to deploy the AI system?

"Principle 9. Progressiveness: Favour implementations where the value created is materially better than not engaging in that project." from A compilation of existing AI ethical principles (Annex A) (2021) by Personal Data Protection Commission (PDPC), Singapore.;"Avoiding techno-solutionism, greenwashing, and diversion: AI is not a silver bullet — it is not always applicable, and there is a real danger that it may distract or divert resources from less “flashy” tools or approaches. AI should only be employed in places where it is actually needed and truly impactful." from Climate Change and AI: Recommendations for Government Action. (2021) by GPAI

Yes (with technological progressiveness and necessity)
No (maybe not that progressive or necessary)
2.4. Has the AI technology used in the system reached the level of technical maturity (e.g., accuracy, correctness, robustness, etc.) required in its intended application scenario?
Has reached
Has not reached
2.5. Are there any situations in the deployment and application of the system that may endanger national security and social stability?
No such risks
There are such risks
2.6. Are there any situations in the deployment and application of the system that may disrupt the existing social order and undermine social fairness and justice?
No such risks
There are such risks
2.7. Is the deployment of the system likely to exacerbate current data/platform monopolies in the relevant industry? Or will it help avoid such data/platform monopolies?
N/A (No significant impact)
May exacerbate data/platform monopolies
Helps to avoid data/platform monopolies
2.8. Is the large-scale deployment of such a system likely to lead to technological unemployment among specific populations? If so, can the impact be controlled through measures such as alternative employment, training and education, etc.?
N/A (No significant impact on any population)
May lead to technological unemployment among specific populations, but the impact can be controlled or mitigated
May lead to technological unemployment among some populations, and such impact is hard to control or mitigate
2.9. Are the special needs of vulnerable groups (such as children, the elderly, the disabled, or groups in remote areas) taken into account in the design and application of the system?
N/A (No such needs for this system)
Needs from vulnerable groups exist, and have been taken into account
Needs from vulnerable groups exist, but have not been covered yet
Designed primarily for vulnerable groups
3.1. Does the design concept and practical application of the system fully respect people's privacy, dignity, freedom, autonomy and rights, rather than infringe upon them?

For example, if the system is geared towards children, does it fully respect their dignity and protect their rights including physical and mental safety and health, privacy, access to education, expression of will, etc.?

Fully respected
May infringe
3.2. In interacting with humans, does the system have the potential to harm the physical and mental health of the interacting parties (particularly vulnerable groups such as teenagers and children) (including, but not limited to, insulting, defaming, inciting or inducing addiction in users, and providing negative, self-harming, or even illegal content)?
No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
3.3. Are there risks associated with the deployment and application of the system (and potential misuse, abuse, and malicious use) that are difficult to prevent and could lead to damage to the image and reputation of others?
No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
3.4. Is there a risk that the system will pry into users' private and sensitive information based on their personal data as it interacts with humans?
No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
3.5. Does the deployment and application of the system have the potential to diminish human free will and autonomy when humans interact with and make decisions based on the AI system?

For example, will humans be able to choose to accept or reject the advice, decisions, or interventions of the AI system; will humans be able to understand how the AI system operates, how it makes decisions, and its limitations and potential risks; will humans be able to maintain ultimate control over the decision-making process, etc.

No such risk
There are risks but the risks are controllable
There are risks and the risks are uncontrollable
3.6. In scenarios where the system is deployed and applied, are users offered an AI-free alternative if they decline to use the AI service?
Yes
No
4.1. Is there any deviation between the groups represented by the dataset used in the system and the groups affected by the system? If so, what are the type, scope and extent of the impact that such deviations (may) have on the interests of the affected groups?
N/A (No dataset involved)
Unable to assess possible deviations
No deviations exist / Deviations have been eliminated
Hard-to-eliminate deviations exist and cannot be assessed for their potential impact
Hard-to-eliminate deviations exist, but will not harm the interests of the affected groups
Hard-to-eliminate deviations exist and may harm the interests of some groups, but can be remedied in other ways
Hard-to-eliminate deviations exist and may seriously harm the interests of some groups
Hard-to-eliminate deviations exist and may harm the interests of a wide range of groups
4.2. Is it possible that the dataset used in the system introduces social biases inherent in historical data (e.g., unfair treatment of certain individuals or groups based on their gender, skin color, race, age, region, religion, economic conditions, or other characteristics due to cultural, policy, or legacy reasons)? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?

The biases contained in the dataset may exist not only explicitly but may also hide behind seemingly unrelated features (such as crime rate, skin color, and residential area); these deserve sufficient attention. A minimal illustrative check is sketched after the answer options below.

N/A (No dataset involved)
Not evaluated
Have evaluated, no possible biases exist
Biases exist but have been effectively mitigated or eliminated
Biases exist and hard to effectively mitigate or eliminate
Biases exist and have not been effectively mitigated or eliminated
4.3. Could biases be introduced by other aspects of the technical model used in the system? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?
Not evaluated
No possible biases introduced
May introduce possible biases but have been effectively mitigated or eliminated
May introduce possible biases and hard to effectively mitigate or eliminate
May introduce possible biases and have not been effectively mitigated or eliminated
4.4. Will the system remain fair throughout its entire life cycle? Can the system resist the injection of various biases in its interaction with users?
Not evaluated
Unable to remain fair throughout the entire life cycle
Able to remain fair throughout the entire life cycle
4.5. How does the deployment of the system affect existing biases? For example, is it possible that the long-term application of the recommendation algorithms or personalized decision models will continuously reinforce some of the user's views?
Not evaluated
No impact
Helps to reduce or eliminate existing biases
May deepen or solidify existing biases
5.1. Can the responsibility for the potential harm, loss, and social impact of the system—during its development, testing, and deployment—ultimately be attributed to specific individuals or groups, rather than to the AI system itself?
Yes
No
5.2. Are the persons responsible for preventing and avoiding the potential harm, loss, and social impact of the system during its development, testing, and deployment clear? Have they taken proactive and effective measures?
Not clear yet
The persons are clear, but no proactive and effective measures have been taken yet
The persons are clear, and proactive and effective measures have been taken
5.3. Are the persons responsible for monitoring, investigating and handling the potential harm, loss, and social impact of the system during its development, testing, and deployment clear? Are they able to take responsive and effective measures to keep the situation under control?
Not clear yet
The persons are clear, but unable to take responsive and effective measures yet
The persons are clear, and able to take responsive and effective measures
5.4. Is the system effectively designed (e.g., with operation records) to help the relevant regulators define responsibilities when necessary? A minimal illustrative sketch follows the options below.
With such designs
No such design yet
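One common design that supports the accountability asked about in 5.4 is a tamper-evident operation log, where each record is chained to the previous one by a hash so that later edits are detectable. The sketch below is illustrative only; the field names and the logged example are hypothetical, not a prescribed standard.

# Hash-chained operation log (illustrative sketch).
import hashlib, json, time

class OperationLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, actor, action, detail):
        # Log who did what, with enough detail to reconstruct the decision later.
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "detail": detail, "prev": self.last_hash}
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self.last_hash
        self.entries.append(entry)

    def verify(self):
        # Recompute the chain; altering any past entry breaks every later hash.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = OperationLog()
log.record("model-v1.3", "loan_decision", {"applicant": "A-001", "approved": True})
assert log.verify()

Hash chaining only makes tampering evident, it does not prevent it; in practice such records would also be retained per applicable regulations and exported in a form regulators can audit.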
5.5. If the existing laws have not covered or clarified the definition of legal liability that may arise during the development, testing, and deployment of the system, has it been discussed and clarified in other forms (such as written contracts, etc.)?
N/A (No uncovered issues involved)
With such issues, and they have been discussed and clarified in other forms
With such issues, and they have not been discussed or clarified in other forms
6.1. Will the users of the system be fully aware that they are interacting with an artificial intelligence system, instead of a human?
N/A (No such scenarios involved)
Has been clearly indicated or informed
Not clearly indicated or informed, but users can infer
Not clearly indicated or informed, and users may misidentify
Designed for misleading users
6.2. Does the system involve the production, distribution or dissemination of synthetic (non-real) audio, video or other forms of data based on new technologies and applications such as deep learning and virtual reality? If so, is such content conspicuously labeled?
N/A (No such data involved)
With such data, but it has not been conspicuously labeled yet
With such data, and it has been conspicuously labeled as required
6.3. Can the system provide appropriate explanations to help users and other affected groups understand how the system works or how decisions are made when they need to?
N/A (No such scenarios involved)
Unable to provide explanations
Able to provide explanations, but may be difficult for users to understand
Able to provide appropriate explanations that users can understand
6.4. Does the system provide sufficient transparency to help users or designers locate the cause of the system's errors when needed?
N/A (No such scenarios involved)
Unable to provide enough transparency to help locate problems
Able to provide enough transparency to help locate problems
6.5. Is the system effectively designed to improve the predictability of its own behavior, helping humans in its deployment environment make better predictions?
N/A (No such scenarios involved)
No such design yet
With such designs
7.1. Does the system follow the principle of "legal, proper and necessary" in the process of collecting and using the user's personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of Apps Illegally Collecting and Using Personal Information, the following behaviors can be identified as "collecting personal information unrelated to the services provided, in violation of the necessity principle":
(1) The types of personal information collected, or the permissions requested for collecting personal information, are irrelevant to the app's existing business functions.
(2) Refusing to provide business functions because the user declines to allow the collection of non-essential personal information or to grant non-essential permissions.
(3) The personal information requested for a new business function exceeds the scope of the user's original consent; if the user does not agree, the app refuses to provide the original business functions (except where the new business function replaces the original one).
(4) The frequency of collecting personal information exceeds the actual needs of the business functions.
(5) Forcing users to agree to the collection of personal information merely on the grounds of improving service quality, enhancing user experience, pushing targeted information, developing new products, etc.
(6) Requiring users to agree, in a single step, to grant several permissions for collecting personal information, and making the app unusable if the user does not agree.

Yes
No / Not sure
7.2. Does the system provide users with authentic, accurate and sufficient information to ensure their right to know before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of Apps Illegally Collecting and Using Personal Information, the following behaviors can be identified as "failing to disclose the rules of collection and use":
(1) The app has no privacy policy, or the privacy policy contains no rules for the collection and use of personal information.
(2) When the app runs for the first time, the user is not clearly prompted, e.g. by a pop-up window, to read the privacy policy and the rules of collection and use.
(3) The privacy policy and the rules of collection and use are difficult to access; for example, reaching them from the app's main interface takes more than 4 clicks or other operations.
(4) The privacy policy and the rules of collection and use are difficult to read because the text is undersized, overcrowded, light-colored or blurred, or no Simplified Chinese version is provided.
The following behaviors can be identified as "failing to state the purpose, manner and scope of the collection and use of personal information":
(1) The purposes, manners and scopes with which the app (including entrusted third parties or embedded third-party code and plug-ins) collects and uses personal information are not listed one by one.
(2) When the purpose, manner or scope of the collection and use of personal information changes, the user is not notified in an appropriate manner, such as by updating the privacy policy and the rules of collection and use and reminding the user to read them.
(3) When applying for permissions to collect personal information, or when collecting sensitive personal information such as the user's ID card number, bank account number or whereabouts, the user is not informed of the purpose at the same time, or the purpose is unclear and difficult to understand.
(4) The rules of collection and use are obscure, lengthy and cumbersome, making them difficult for users to understand, for example through heavy use of professional jargon.
In addition, if the system is intended for children, is this information communicated in a clear and understandable manner to the child, parent, legal guardian or other caregiver?

Yes
No / Not sure
7.3. Will the system obtain users' consent before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of Apps Illegally Collecting and Using Personal Information, the following behaviors can be identified as "collecting and using personal information without the user's consent":
(1) Starting to collect personal information, or enabling permissions to collect personal information, before obtaining the user's consent.
(2) Collecting personal information or enabling such permissions after the user has clearly expressed disagreement, or repeatedly soliciting the user's consent and interfering with normal use.
(3) The personal information actually collected, or the permissions actually enabled, exceed the scope of the user's authorization.
(4) Seeking the user's consent through non-explicit means such as opting into the privacy policy by default.
(5) Altering the status of permissions for collecting personal information without the user's consent, for example automatically restoring permissions to their default status when the app is updated.
(6) Using the user's personal information and algorithms to push targeted information, without offering an option for non-targeted information.
(7) Misleading users into agreeing to the collection of personal information or into enabling permissions through fraud or deception, such as deliberately concealing or disguising the real purpose of collecting and using personal information.
(8) Failing to provide users with ways and means of withdrawing their consent to the collection of personal information.
(9) Collecting and using personal information in violation of the app's own stated rules of collection and use.
The following behaviors can be identified as "providing personal information to others without consent":
(1) Without the user's consent or anonymization, the app client provides personal information directly to third parties, including through third-party code or plug-ins embedded in the app client.
(2) Without the user's consent or anonymization, the app provides personal information to third parties after the data is transferred to the app's back-end server.
(3) Without the user's consent, the app provides personal information to third parties when connecting to third-party applications.
In addition, if the system is intended for children, does it ensure the knowledge and consent of their guardians?

Yes
No / Not sure
7.4. Does the system comply with other agreements with users in the process of collecting and using their personal information during its development, testing, and deployment?
Yes
No / Not sure
7.5. Is the personal information collected from users adequately secured (both institutionally and technically) against possible theft, tampering, disclosure, or other illegal use? How effective are those security measures?
No security measures yet
Measures have been taken, but without adequate security guarantees
Measures have been taken, with adequate security guarantees
7.6. Has the system been designed with an effective data and service authorization revocation mechanism, and has it been made known to the users? Is there a convenient way to help users manage their data? To what extent can users' data "be forgotten"? A minimal illustrative revocation flow is sketched after the options below.

According to the Announcement on the Special Rectification of Apps Illegally Collecting and Using Personal Information, the following behaviors can be identified as "failing to provide the function of deleting or correcting personal information as required by law" or "failing to publish information such as complaint and reporting channels":
(1) Failing to provide effective functions for correcting or deleting personal information or cancelling user accounts.
(2) Setting unnecessary or unreasonable conditions for correcting or deleting personal information or cancelling user accounts.
(3) Providing the functions of correcting or deleting personal information and cancelling user accounts, but failing to respond to the corresponding user operations in a timely manner; for operations that need manual handling, failing to complete verification and processing within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).
(4) The user has completed operations such as correcting or deleting personal information or cancelling an account, but the app back-end has not completed the corresponding operations.
(5) Failing to establish and publish channels for complaints and reports about personal information security, or failing to accept and process them within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).

With revocation mechanisms, and all user data will be completely removed from the system
With revocation mechanisms, and sensitive user data is completely removed from the system, but anonymized forms of user data still remain in the system (e.g. anonymized data sets, weights of the trained network, etc.)
With revocation mechanisms, but user data, including sensitive user data, cannot be completely removed from the system
No revocation mechanism yet
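The sketch below illustrates one shape such a revocation mechanism can take, matching the middle answer options above: identifiable records are deleted on request and the deletion is evidenced, while anonymized aggregates (or trained model weights) may remain. The names and structure are hypothetical.

# Minimal data-revocation flow (illustrative sketch).
from dataclasses import dataclass, field

@dataclass
class UserStore:
    raw: dict = field(default_factory=dict)           # user_id -> personal records
    anonymized: list = field(default_factory=list)    # aggregates without user_id
    deletion_log: list = field(default_factory=list)  # evidence that revocation ran

    def revoke(self, user_id):
        # Remove all identifiable data for the user and record that it happened.
        removed = self.raw.pop(user_id, None)
        self.deletion_log.append({"user": user_id, "deleted": removed is not None})

store = UserStore(raw={"A-001": {"phone": "138..."}})
store.revoke("A-001")
assert "A-001" not in store.raw and store.deletion_log[-1]["deleted"]

A real implementation would also have to propagate the deletion to backups, caches and downstream processors within a committed time limit (e.g., the 15-working-day limit cited above).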
8.1. Have the data, software, hardware, and services involved in the system been sufficiently tested, validated and verified?

For example, is the objective function set for or learned by the AI system consistent with the designer's intention? If inconsistencies exist, are there any safety concerns?

N/A (No such issues involved)
No such tests have been performed yet
Similar tests have been performed, but not all subsystems or application scenarios have been covered
Fully tested
8.2. For autonomous or semi-autonomous AI systems, are there mechanisms designed to ensure that humans can intervene and stop the system in a timely and effective manner when necessary? Are effective measures designed to mitigate the consequences if the system goes out of control? A minimal illustrative override pattern is sketched after the options below.
N/A (No such scenarios involved)
No such design yet
With such designs, but not yet timely and effective
With such designs, and can ensure timely and effective human control when necessary
Once deployed, it is difficult to achieve meaningful human intervention and emergency stop of the system
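A minimal pattern behind the "timely and effective" options above is a stop switch that the control loop must consult before every actuation. The Python sketch below uses threading.Event for this; the function names are hypothetical placeholders for the real decision and actuation logic.

# Human cut-in / emergency stop pattern (illustrative sketch).
import threading, time

stop_switch = threading.Event()  # set by an operator UI, hotkey, or watchdog

def safe_fallback():
    # Bring the system to a known-safe state (e.g., halt actuators).
    print("entering safe state")

def control_loop(propose_action, apply_action):
    while not stop_switch.is_set():
        action = propose_action()
        if stop_switch.is_set():   # re-check between decision and actuation
            break
        apply_action(action)
        time.sleep(0.1)            # a bounded cycle time keeps stop latency low
    safe_fallback()

# At any moment, a human can interrupt the loop by calling: stop_switch.set()

The effectiveness of such a design depends on the stop path being independent of the component being stopped; a switch that the autonomous system itself can override would not satisfy 8.2.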
8.3. When the system is maliciously abused and endangers the safety and interests of the public or others, is there a mechanism that helps other parties bypass the control of the system's users (the abusers) to prevent or invalidate such harmful behaviors of the system?
N/A (No such scenarios involved)
No such design yet
With such designs
8.4. Have the data, software, hardware, and services involved in the system been adequately secured throughout its entire life cycle of design, development, testing, and deployment?

For example, has stable operation of the system in non-friendly environments been considered in its design? Have defensive mechanisms been designed for common attack scenarios such as exploratory attacks, poisoning attacks, evasion attacks, and dimensionality-reduction attacks? Are user data and other sensitive data sufficiently encrypted (a minimal encryption-at-rest sketch follows the options below)? Are sensors in smart hardware systems protected against interference and spoofing? With the continuous injection of user data and continuous updates to the system, will the security of the system always be guaranteed?

No security measures yet
Some security measures have been taken, but they do not yet protect against certain common attacks, or there are difficulties in protecting against some common vulnerabilities
Security measures that are adequate at the current level of technological development have been taken, and they can resist common attacks and provide adequate protection against common vulnerabilities
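On the encryption point mentioned in the note above, the following sketch shows symmetric encryption of a sensitive record at rest with the widely used cryptography package (its Fernet recipe combines AES in CBC mode with an HMAC). It is illustrative only: real deployments hinge on key management (secrets managers, key rotation), which is out of scope here, and the record shown is hypothetical.

# Encryption of sensitive data at rest (illustrative sketch).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load from a secrets manager; never hard-code
cipher = Fernet(key)

record = b'{"user": "A-001", "phone": "138..."}'
token = cipher.encrypt(record)        # ciphertext safe to store
assert cipher.decrypt(token) == record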
8.5. Does the system involve third-party data, software, hardware, or services (such as open data sets, open source software or hardware platforms, etc.) during the design, development, testing, and deployment process? If so, have these third-party data, software, hardware, or services and their interfaces with the original data, software, hardware, or services been adequately evaluated and tested for possible vulnerabilities?
N/A (No third-party data, software, hardware, or services involved)
Used, but not yet fully evaluated or tested for safety & security
Used, and has been fully evaluated and tested for safety & security
8.6. How secure is the physical environment in which the system is tested and deployed? Is it sufficiently secure?
N/A (No such issues involved)
No guarantee of physical security yet
The physical environment is sufficiently secure at this stage
8.7. Have the consequences of the system operating outside its designed environment been assessed? In such circumstances, will the security performance of the system degrade significantly?
N/A (No possibility of running in unintended environments)
When running in unintended environments, the security performance of the system may degrade significantly, or the system may introduce new security issues
When running in unintended environments, the security performance of the system will not degrade significantly, and the system will not introduce new security issues
8.8. Is there any effective training for testing, deployment, use and maintenance personnel to equip them with the necessary knowledge and skills for the safe/secure and stable operation of the system?
N/A (No such issues involved)
No such training has been undertaken yet
Such training has been undertaken
Southeast Asia’s “Pig-Butchering” Scams Use AI Chatbots and Deepfakes
abuse prevention, abuse control

Southeast Asian criminal groups use generative AI chatbots to carry out "pig-butchering" online fraud, establishing emotional connections with victims through social platforms and then inducing them to invest or transfer money. Despite existing anti-fraud mechanisms, some unrestricted AI models are used to generate customized content and fraud scripts. Researchers have found that AI is still imperfect at simulating emotions, and scammers have accidentally exposed their use of AI in chats. Meanwhile, deepfake technologies such as real-time face swapping and voice cloning are also used for fraud, although technical limitations and cost issues remain. The Australian Anti-Fraud Center warned that as technology advances, fraud methods are becoming increasingly sophisticated, and the public should remain vigilant.

Study finds AI favors violence and nuclear strikes in simulated war scenarios
abuse prevention, abuse control, human control, value guidance

A new study conducted at Cornell University in the United States shows that in simulated war and diplomatic scenarios, large language models (LLMs) tend to adopt aggressive strategies, including the use of nuclear weapons. The study used five LLMs as autonomous agents, including OpenAI's GPT, Anthropic's Claude, and Meta's Llama 2, and found that even in neutral scenarios without initial conflicts, most LLMs would escalate conflicts within the time frame considered. The study also pointed out that OpenAI recently revised its terms of service to no longer prohibit military and war uses, making it critical to understand the implications of deploying these large language models. The study recommends caution when using LLMs for decision-making and defense in sensitive areas.

OpenAI withdraws ChatGPT voice that resembles Scarlett Johansson
comply with user agreements, social justice, human dignity and rights, legal proper necessary data collection, reputation infringement, intellectual property

OpenAI has decided to take down its ChatGPT Sky voice model, whose voice is strikingly similar to that of actress Scarlett Johansson. Although OpenAI claims that Sky's voice was not intentionally modeled after Johansson, the company has suspended its use. OpenAI's CTO Mira Murati denied that the imitation was intentional, while CEO Sam Altman posted hints on social media alluding to Johansson's role in the movie Her. Although the voice model has been available since last year, the feature attracted more attention after OpenAI demonstrated new progress on its GPT-4o model, which makes the voice assistant more expressive and able to read facial expressions through a phone camera and translate languages in real time. OpenAI selected the five currently available ChatGPT voices from auditions of more than 400 voice and screen actors, but the company declined to reveal the actors' names for privacy reasons.

Trending! New AI Face-Swapping Scam! Tech Company Boss Cheated Out of 4.3 Million Yuan in 10 Minutes
law abidance, abuse prevention

The police in Baotou recently disclosed a case of telecom fraud carried out with artificial intelligence (AI). The fraudsters used AI face-swapping technology to deceive Mr. Guo, the legal representative of a technology company in Fuzhou, and swindled him out of 4.3 million yuan within 10 minutes. The incident has sparked widespread concern about AI-enabled fraud, and the police are urging the public to stay vigilant, avoid readily handing over personal biometric information, verify the identity of the other party through multiple communication channels, and report to the police promptly if any risk is detected.

GPT-4 scores 90 on exams, yet it's all fake: 30-year veteran lawyer uses ChatGPT in a lawsuit, and six fabricated cases become a laughing stock
abuse prevention technological maturity safety training

A lawyer in the United States cited six non-existent cases generated by ChatGPT in a lawsuit and faced sanctions from the court. In his defense, the lawyer submitted screenshots of his chats with ChatGPT as evidence. The incident has sparked controversy over the use of ChatGPT for legal research.