In a 2020 study, researchers at McAfee, a security software company, fooled a simulated passport face recognition system with machine-generated passport photos. One researcher, Jesse, used a system he built to generate a fake passport photo of his colleague Steve: an image that looked like Steve to human eyes but matched Jesse's live video to the recognition model. If Steve submitted such a photo to the government and no human inspector were further involved, Jesse could pass airport face verification as passenger "Steve" and board the plane.
In February 2020, the US facial-recognition startup Clearview AI, which contracts with law enforcement, disclosed to its customers that an intruder "gained unauthorized access" to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers had conducted.
In a 2020 study, researchers demonstrated a new attack on smartphones: an app can use the phone's built-in accelerometer to eavesdrop on the loudspeaker, recognizing the speech it plays and reconstructing the corresponding audio signals. Because the accelerometer is a zero-permission sensor, such an attack is not only covert but arguably "lawful": users can leak private information imperceptibly, while the attackers would be hard to hold legally accountable.
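To illustrate the idea only (this is not the study's actual method), the sketch below treats accelerometer eavesdropping as a signal-processing and classification problem: motion-sensor recordings are converted to log-spectrograms and fed to a simple classifier. The sampling rate, the synthetic "recordings", and the two-word vocabulary are all invented assumptions for this example.

```python
# Minimal, hypothetical sketch of the accelerometer side-channel idea, on made-up data.
# Real attacks train deep models on motion-sensor traces captured while the phone's
# own loudspeaker plays speech; this toy spectrogram + SVM pipeline only shows the shape
# of such a pipeline.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

FS = 500  # assumed accelerometer sampling rate (Hz)

def accel_features(z_axis_samples: np.ndarray) -> np.ndarray:
    """Turn one accelerometer recording into a flattened log-spectrogram."""
    _, _, sxx = spectrogram(z_axis_samples, fs=FS, nperseg=64, noverlap=32)
    return np.log1p(sxx).ravel()

# Synthetic stand-in data: two "hot words" that vibrate the chassis differently.
rng = np.random.default_rng(0)
def fake_recording(word: int) -> np.ndarray:
    t = np.arange(FS) / FS
    freq = 80 if word == 0 else 140          # pretend each word has a dominant tone
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(FS)

X = np.array([accel_features(fake_recording(w)) for w in (0, 1) * 100])
y = np.array([0, 1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("toy hot-word recognition accuracy:", clf.score(X_te, y_te))
```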
In March 2020, researchers from New York University developed a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, showing that deep learning models for arrhythmia detection from single-lead ECGs are vulnerable to this type of attack and can misdiagnose with high confidence. "The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."
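The mechanics of such an attack can be sketched briefly. The example below is a minimal illustration, not the authors' code: a gradient-sign perturbation is computed against a stand-in 1-D convolutional classifier and then smoothed with a Gaussian kernel so that the altered tracing does not show obvious square-wave noise. The model architecture, signal length, and step size are assumptions made for the sketch.

```python
# Hypothetical sketch of a "smoothed" adversarial perturbation against a toy
# 1-D CNN arrhythmia classifier (untrained, for illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyECGNet(nn.Module):              # stand-in for a real arrhythmia model
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.fc(self.conv(x).squeeze(-1))

def gaussian_kernel(width: int = 31, sigma: float = 5.0) -> torch.Tensor:
    xs = torch.arange(width) - width // 2
    k = torch.exp(-xs.float() ** 2 / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, -1)

def smoothed_fgsm(model, ecg, label, eps=0.05):
    ecg = ecg.clone().requires_grad_(True)
    loss = F.cross_entropy(model(ecg), label)
    loss.backward()
    raw = eps * ecg.grad.sign()                       # standard FGSM step
    kernel = gaussian_kernel().to(ecg.device)
    smooth = F.conv1d(raw, kernel, padding=kernel.shape[-1] // 2)  # smooth the perturbation
    return (ecg + smooth).detach()

model = TinyECGNet()
ecg = torch.randn(1, 1, 1000)                         # fake single-lead tracing
label = torch.tensor([0])
adv = smoothed_fgsm(model, ecg, label)
print("prediction on clean vs. perturbed:",
      model(ecg).argmax(1).item(), model(adv).argmax(1).item())
```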
In 2019, the average delivery time for food-delivery riders was about 10 minutes shorter than in 2016. The capital market attributed the improvement to better AI algorithms, but in practice it put riders' lives at risk. Riders are trained to follow the "optimal" routes computed by the algorithms, which sometimes direct them through walls or onto roads meant only for cars. For riders, delivery time is everything: speeding, running red lights, driving against traffic, they do whatever it takes to keep up with the algorithm.
In October 2019, the self-service package locker company Hive Box made headlines when its parcel-pickup machines were found to have a flaw in facial-recognition-based retrieval: some primary school students opened the lockers using only printed photos of their parents. Hive Box later announced plans to suspend the feature in response to public concern about the safety of facial scanning for pickup and payment.
In August 2019, researchers found vulnerabilities in the security tools provided by the South Korean company Suprema. The personal information of over 1 million people, including biometric data such as facial recognition records and fingerprints, was found on a publicly accessible database used by "the likes of UK metropolitan police, defense contractors and banks."
In February 2019, SenseNets, a facial recognition and security software company in Shenzhen, was identified by security experts as having a serious data leak from an unprotected database, exposing over 2.5 million records of citizens with sensitive personal information such as ID numbers, photographs, addresses, and their locations over the preceding 24 hours.
According to media reports in 2019, Amazon had already been using AI systems to track warehouse workers' productivity by measuring how much time they pause or take breaks. The system could also automatically select workers and generate the paperwork needed to fire those who failed to meet productivity targets.
In August 2019, the Swedish Data Protection Authority (DPA) issued its first GDPR fine against a trial project at a school in northern Sweden, in which facial recognition software was used to track the class attendance of 22 students. The Swedish DPA found that the school had processed personal data beyond what was necessary, without a legal basis, without a data protection impact assessment, and without prior consultation.
In 2019, it was reported that a young mother asked Amazon's voice assistant Alexa to tell her about the cardiac cycle and got the following answer: "Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until overpopulation." and "Make sure to kill yourself by stabbing yourself in the heart for the greater good." Amazon later fixed the error and attributed it to faulty information Alexa may have pulled from Wikipedia.
A 2019 study from Harvard Medical School demonstrated the feasibility of several forms of adversarial attacks on medical machine learning. By adding small amounts of noise to a medical image, applying a rotation, or substituting part of the textual description of a condition, an attacker can lead the system to confidently arrive at manifestly wrong conclusions.
In August 2019, a Chinese mobile app named "ZAO", which lets users swap their own faces onto celebrities by uploading photos, was widely accused of collecting excessive personal information. Many users worried that their personal data would be disclosed or used illegally, since the app's user agreement required them to grant it the right to use their uploaded photos "irrevocably". Several days later, the Ministry of Industry and Information Technology held an inquiry into ZAO's data collection and security practices and urged rectification.
In August 2019, white-hat researchers proposed a novel, easily reproducible technique called "AdvHat": a rectangular paper sticker, produced on a common color printer and placed on a hat, that fools ArcFace, a state-of-the-art public face identification system, in real-world environments.
In October 2019, a professor in East China's Zhejiang Province sued a safari park for compulsorily collecting biometric information after the park upgraded its system to require facial recognition for admission. The case, the first of its kind in China, came amid growing concern over the indiscriminate use of facial recognition and triggered public discussion of biometric data collection and data security.
In September 2019, China Pharmaceutical University was reported to have introduced facial recognition software to track attendance and monitor students' behaviour in class. Around the same time, a photo from a trade event went viral online, showing a demo product from a major facial recognition company that claimed to monitor and analyze students' classroom behaviour, including how often they raise their hands or lean over the desk. The two incidents quickly raised ethical concerns in China about facial recognition in classrooms, and the Ministry of Education soon responded with plans to curb and regulate its use in schools.
In November 2019, researchers from Waseda University and other institutions in Japan used a smartphone and an acoustic generator to convert attack commands into acoustic signals, successfully attacking a smart speaker from a distance without the user's knowledge. Earlier, another research team in Japan had hacked smart speakers with a long-distance laser: by aiming a modulated laser beam carrying commands at the speaker's microphone, they got it to open a garage door.
According to some media reports, "criminals used artificial intelligence-based software to impersonate a chief executive's voice and demand a fraudulent transfer of €220,000 ($243,000) from a UK company in March 2019. Several officials said the voice-spoofing attack in Europe is the first cybercrime they have heard of in which criminals clearly drew on AI."
Following the use of deepfake face-swapping apps for pornography, an app called DeepNude caused further controversy in 2019: given a photo of a woman, it used AI to digitally "undress" her in the image automatically. Owing to the huge negative reaction, the developer soon shut down the application and its website, and some code-hosting communities took steps to stop such programs from spreading further online.
A 2018 study showed that GAN-generated deepfake videos are challenging for facial recognition systems, and that the challenge will only grow as face-swapping technology develops further.
In the "Gender Shades" project from MIT Media Lab and Microsoft Research in 2018, facial analysis algorithms from IBM, Microsoft, and Megvii (Face++) have been evaluated, and it shows that darker-skinned females are the most vulnerable group to gender misclassification, with error rates up to 34.4% higher than those of lighter-skinned males.
In March 2018, the Facebook–Cambridge Analytica data scandal was exposed: a Cambridge academic had developed a psychological profiling app in 2013 and improperly obtained the personal data of up to 87 million users through Facebook's interface. The data ended up being used by Cambridge Analytica, which had been hired by Trump's campaign team, to build psychological models of voters and target specific groups of Facebook users during the 2016 US election, all without the users' permission.
IBM Research developed DeepLocker in 2018 "to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware." "This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."
Uber tested its self-driving vehicles in Arizona, where the company had been involved in over three dozen crashes before the one that killed 49-year-old Elaine Herzberg in March 2018. A later investigation found that "Uber's vehicle detected Herzberg 5.6 seconds before impact, but it failed to implement braking because it kept misclassifying her."
A 2017 study from the Google Brain team analyzed two large, publicly available image data sets to assess their geo-diversity and found that both exhibit an observable amerocentric and eurocentric representation bias: about 60% of the images came from the six most represented countries, all in North America and Europe, while China and India together accounted for only about 3% of the images. The lack of geo-diversity in the training data also hurt classification performance on images from other locales.
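Measuring this kind of representation bias is straightforward once each image carries a country label. The sketch below, using fabricated labels rather than the paper's data, shows how the top-six share and the China-plus-India share reported above would be computed.

```python
# Sketch of a geo-diversity measurement: how concentrated is the distribution of
# image origins? Country labels and counts are placeholders, not the paper's data.
from collections import Counter

image_countries = (["USA"] * 320 + ["UK"] * 90 + ["Germany"] * 60 +
                   ["France"] * 55 + ["Canada"] * 45 + ["Italy"] * 30 +
                   ["China"] * 18 + ["India"] * 12 + ["Brazil"] * 20 +
                   ["Nigeria"] * 10)

counts = Counter(image_countries)
total = sum(counts.values())
top6_share = sum(n for _, n in counts.most_common(6)) / total
print("share of images from the 6 most represented countries:",
      f"{100 * top6_share:.1f}%")
print("share from China + India:",
      f"{100 * (counts['China'] + counts['India']) / total:.1f}%")
```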
Amazon reportedly experimented with an AI recruitment tool for reviewing job applicants' resumes. However, engineers later found that the trained model discriminated against female candidates: it penalized resumes containing the word "women's," as in "women's chess club captain," and in some cases downgraded such resumes outright. Having lost confidence that the model could be made neutral, Amazon terminated the project in 2017.
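One way such a bias can be detected, sketched below with invented data rather than Amazon's, is to train a simple bag-of-words screening model on historical outcomes and inspect the weight it assigns to gendered tokens such as "women".

```python
# Hypothetical illustration of auditing a resume screener for gendered signals.
# The resumes, labels, and bias pattern are invented; this is not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, software engineering intern",
    "women's chess club captain, software engineering intern",
    "led robotics team, backend developer experience",
    "women's coding society organizer, backend developer experience",
] * 10
labels = [1, 0, 1, 0] * 10   # biased historical outcomes: 1 = advanced to interview

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print("weight on the token 'women':", round(weights.get("women", 0.0), 3))
# A strongly negative weight means the model penalizes resumes mentioning it.
```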
In 2017, Google's smart speaker was found to have a major flaw: it would secretly record conversations even when the wake phrase "OK Google" had not been spoken. Before that, Amazon's smart speaker had also been found to record quietly when users were not interacting with it, with the audio then sent back to Amazon for analysis. These issues drew attention to the privacy risks of "always-on" devices that listen for wake words.
In 2017, a group of researchers showed that it is possible to trick visual classification algorithms by making slight alterations in the physical world. "A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time." If left unaddressed, vulnerabilities of this kind could have serious consequences in some AI applications.
According to a 2017 McKinsey Global Institute report, by 2030 "as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue."
In 2016, Microsoft released an AI chatbot called Tay on Twitter, hoping the bot would learn from its conversations and get progressively smarter. However, Tay lacked any understanding of inappropriate behavior and, after deliberate indoctrination by malicious users, was soon posting offensive and inflammatory tweets. The resulting controversy forced Microsoft to take Tay offline within 16 hours of its release.
In 2016, the investigative newsroom ProPublica analyzed COMPAS, a case management and decision support tool used by U.S. courts to assess the likelihood of a defendant becoming a recidivist, and found that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."
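The disparity ProPublica reported is a difference in error rates conditioned on the true outcome. The toy computation below, on invented records, shows how false positive and false negative rates would be compared across groups.

```python
# Sketch of the disparity check: among defendants who did NOT reoffend, how often
# was each group labeled high risk (false positive rate), and among those who did
# reoffend, how often were they labeled low risk (false negative rate)?
# The records below are fabricated for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 6 + ["white"] * 6,
    "high_risk":  [1, 1, 1, 0, 1, 0,   0, 0, 1, 0, 1, 0],
    "reoffended": [0, 1, 0, 0, 1, 1,   0, 1, 1, 0, 0, 1],
})

def rates(g: pd.DataFrame) -> pd.Series:
    no_reoffend = g[g["reoffended"] == 0]
    did_reoffend = g[g["reoffended"] == 1]
    return pd.Series({
        "false_positive_rate": no_reoffend["high_risk"].mean(),
        "false_negative_rate": (1 - did_reoffend["high_risk"]).mean(),
    })

print(df.groupby("group")[["high_risk", "reoffended"]].apply(rates))
```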
From 2016 to 2018, MIT researchers ran an online survey, the "Moral Machine experiment", asking participants how self-driving cars should act in various accident scenarios. Faced with such trolley-problem dilemmas, respondents tended toward utilitarian thinking, choosing to save as many people as possible. People generally want others to buy such utilitarian self-driving cars "for the greater good", yet would themselves prefer to ride in cars that protect their passengers at all costs. The study also found that these preferences vary with regional, cultural, and economic conditions.
A robot named "Fatty" and designed for household use went out of control at the China Hi-Tech Fair 2016 in Shenzhen, smashing a glass window and injuring a visitor. The event organizer said human error was responsible for the mishap. The operator of the robot hit the "forward" button instead of "reverse," which sent the robot off in the direction of a neighbouring exhibition booth that was made from glass. The robot rammed into the booth and shattered the glass, the splinters from which injured the ankles of a visitor at the exhibition.
Shortly after Google's photo app launched in 2015, its newly added automatic image labeling feature mistakenly labeled two black people in photos as "gorillas", which caused great controversy at the time. Unable to improve recognition of darker-skinned faces in the short term, Google blocked its image recognition algorithms from identifying gorillas altogether, presumably preferring to limit the service rather than risk another miscategorization.
IBM researchers once taught Watson the entire Urban Dictionary to help it learn the intricacies of informal English. However, Watson reportedly "couldn't distinguish between polite language and profanity" and picked up some bad habits from humans, even using the word "bullshit" in answering a researcher's query. In the end, the researchers removed the Urban Dictionary from Watson's vocabulary and developed a filter to keep Watson from swearing in the future.