Twitter’s photo-cropping algorithm favors young, thin women

In May, Twitter said it would stop using an artificial intelligence algorithm after finding that it favored white and female faces when automatically cropping images.

Now an unusual contest that probed AI programs for misbehavior has found that the same algorithm, which identifies the most important areas of an image, also discriminates by age and weight, and favors text in English and other Western languages.

The top entry came from Bogdan Kulynych, a graduate student in computer security at EPFL in Switzerland, who showed how Twitter’s image-cropping algorithm favors thinner and younger faces. Kulynych used deepfake technology to automatically generate variations of faces, then tested the cropping algorithm to see how it responded to each one.
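The idea behind such a probe can be sketched in a few lines of Python. This is only an illustration of the approach described above, not Kulynych’s code: `generate_face_variants` and `max_saliency` are hypothetical placeholders for a deepfake-style face generator and for Twitter’s saliency model.

```python
# Sketch of the kind of probe described above -- not Kulynych's actual code.
# `generate_face_variants` and `max_saliency` are hypothetical placeholders
# for a face generator and for Twitter's saliency model.
import numpy as np

def generate_face_variants(seed, n):
    """Placeholder: return n images (H x W x 3 arrays) of the same base face
    with one attribute (e.g. apparent age or weight) varied step by step."""
    rng = np.random.default_rng(seed)
    return [rng.random((224, 224, 3)) for _ in range(n)]

def max_saliency(image):
    """Placeholder for the cropping model's peak saliency score. The crop is
    centered on the most salient point, so when two faces compete for the
    preview, the one with the higher peak score is kept."""
    return float(image.mean())  # stand-in; the real score comes from a neural net

# If the score rises steadily as the face is made thinner or younger,
# that attribute is influencing where the crop lands.
variants = generate_face_variants(seed=0, n=5)
scores = [max_saliency(img) for img in variants]
print("peak saliency per variant:", scores)
```

In a real run, the generated faces would come from a trained face-synthesis model and the scores from the cropping code Twitter released for the contest; the placeholders here only show the shape of the experiment.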

“Basically, the thinner, younger, and more feminine an image is, the more it will be favored,” said Patrick Hall, chief scientist at BNH, a company that does AI consulting. He was one of four judges for the contest.

A second judge, Ariel Herbert-Voss, a security researcher at OpenAI, said the biases found by participants reflect the biases of the people who contributed the data used to train the model. But she added that the entries show how a thorough analysis of an algorithm can help product teams eliminate problems with their AI models. “It’s much easier to fix if someone is like, ‘Hey, this is bad.’”

The algorithmic bias bounty challenge, held last week at Defcon, a computer security conference in Las Vegas, suggests that letting outside researchers probe algorithms for misbehavior could help companies root out problems before they cause real harm.

Just as some companies, including Twitter, encourage experts to hunt for security bugs in their code by offering rewards for specific flaws, some AI experts believe that companies should give outsiders access to the algorithms and data they use in order to identify problems.

“It’s really exciting to see this idea explored, and I believe we will see more of it,” said Amit Elazari, director of global cybersecurity policy at Intel and a lecturer at the University of California, Berkeley, who has suggested using the bug bounty approach to root out AI bias. She said the search for bias in artificial intelligence “can benefit from empowering people.”

In September, a Canadian student drew attention to the way Twitter’s algorithm crops photos. The algorithm is designed to zero in on faces as well as other areas of interest, such as text, animals, or objects. But in images that show several people, the algorithm often favored white faces and women. The Twittersphere quickly found other examples of crops that showed racial and gender bias.
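To make that mechanism concrete, here is a minimal sketch of saliency-based cropping, assuming a hypothetical `predict_saliency` stand-in for Twitter’s neural saliency model: the crop window is simply centered on the most salient point, so whichever face or object scores highest “wins” the preview.

```python
# Minimal sketch of saliency-based cropping as described above.
# `predict_saliency` is a hypothetical stand-in for Twitter's neural
# saliency model, which this sketch does not reproduce.
import numpy as np

def predict_saliency(image):
    """Placeholder: return a per-pixel saliency map (H x W) for an
    H x W x 3 image. The real model is a trained neural network."""
    return image.mean(axis=2)  # stand-in: brightness as "saliency"

def crop_around_most_salient(image, out_h, out_w):
    """Center a fixed-size crop on the single most salient pixel,
    clamping the window so it stays inside the image."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    top = int(np.clip(y - out_h // 2, 0, h - out_h))
    left = int(np.clip(x - out_w // 2, 0, w - out_w))
    return image[top:top + out_h, left:left + out_w]

# Example: crop a random 600x800 "photo" down to a 300x300 preview.
photo = np.random.rand(600, 800, 3)
preview = crop_around_most_salient(photo, out_h=300, out_w=300)
print(preview.shape)  # (300, 300, 3)
```

Because the winner-takes-all choice rides entirely on the saliency model’s scores, any systematic skew in those scores translates directly into which faces get cropped out.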

For last week’s bounty contest, Twitter made the code for its image-cropping algorithm available to participants and offered prizes to teams that demonstrated evidence of other harmful behavior.

Other entrants found additional biases. One showed that the algorithm is biased against people with white hair. Another revealed that the algorithm favors Latin script over Arabic script, giving it a Western-centric bias.

Hall, of BNH, said he believes other companies will follow Twitter’s lead. “I think there is some hope of this taking off,” he said, “because of coming regulation, and because the number of AI bias incidents is increasing.”

In the past few years, much of the hype around artificial intelligence has been undercut by examples of how easily algorithms can encode bias. Commercial facial recognition algorithms have been shown to discriminate by race and gender, image processing code has been found to exhibit sexist ideas, and a program that judges a person’s likelihood of reoffending has been shown to be biased against Black defendants.

The problem has proved difficult to eradicate. Identifying fairness is not straightforward, and some algorithms, such as those used to analyze medical X-rays, can internalize racial bias in ways that humans cannot easily spot.

“When thinking about determining the bias in a model or system, one of the biggest questions we face, that every company and organization faces, is how do we scale it?” says Rumman Chowdhury, head of Twitter’s machine learning ethics, transparency, and accountability group.
