The struggle to define when artificial intelligence is “high risk”


EU leaders insist that addressing ethical questions surrounding artificial intelligence will lead to a more competitive market for AI products and services, increase the adoption of AI, and help the region compete with China and the United States. Regulators hope that high-risk labels will encourage more professional and responsible business practices.

Corporate interviewees said the draft legislation goes too far, with costs and rules that would stifle innovation. Meanwhile, many human rights, AI ethics, and anti-discrimination organizations argue that the AI Act does not go far enough, leaving people vulnerable to powerful companies and governments with the resources to deploy advanced AI systems. (The bill specifically excludes military uses of artificial intelligence.)

(Mostly) Strictly Business

Although some public comments on the AI bill came from individual EU citizens, responses came mainly from professional groups such as radiologists and oncologists, unions of Irish and German educators, and major European businesses such as Nokia, Philips, Siemens, and the BMW Group.

American companies were also well represented, with comments from Facebook, Google, IBM, Intel, Microsoft, OpenAI, Twilio, and Workday. In fact, according to data collected by European Commission staff, the United States was the fourth-largest source of comments, after Belgium, France, and Germany.

Many companies expressed concern about the cost of the new regulation and questioned how their AI systems would be labeled. Facebook wants the European Commission to clarify whether the AI Act's ban on technologies that manipulate people subliminally extends to targeted advertising. Both Equifax and Mastercard oppose labeling any AI that judges a person's creditworthiness as high risk, claiming this would increase costs and reduce the accuracy of credit assessments. However, numerous studies have found instances of discrimination involving algorithms, financial services, and loans.

NEC, the Japanese company that makes facial recognition technology, argues that the AI Act places undue responsibility on the providers of AI systems rather than their users, and that the draft's proposal to label all remote biometric identification systems as high risk would carry heavy compliance costs.

One of companies' main disputes with the draft legislation is how it handles general-purpose or pretrained models that can carry out a range of tasks, such as OpenAI's GPT-3 or Google's experimental multimodal model MUM. Some of these models are open source; others are proprietary creations sold to customers by cloud service companies that possess the AI talent, data, and computing power needed to train such systems. In its 13-page response to the AI Act, Google argues that it would be difficult or impossible for the creators of general-purpose AI systems to comply with the rules.

Other companies working on general-purpose systems or artificial general intelligence, such as Google's DeepMind, IBM, and Microsoft, also suggested changes to how the act treats AI that can perform multiple tasks. OpenAI urged the European Commission to avoid banning general-purpose systems in the future, even if some of their use cases may fall into a high-risk category.

Companies also want the AI Act's creators to change the definitions of key terms. Companies such as Facebook argued that the bill uses overly broad terminology to define high-risk systems, resulting in overregulation. Others suggested more technical changes. Google, for example, wants a new definition added to the draft bill that distinguishes between "deployers" of an AI system and its "suppliers," "distributors," or "importers." Doing so, the company argues, could place responsibility for modifications made to an AI system on the business or entity that made the change rather than on the company that created the original system. Microsoft made a similar recommendation.

The cost of high-risk artificial intelligence

Then there is the question of how much a high-risk label will cost businesses.

A study by European Commission staff put the cost of complying with the AI Act at roughly 10,000 euros for a single AI project and found that companies can expect initial overall costs of about 30,000 euros. As companies develop professional approaches and compliance becomes business as usual, the cost is expected to fall closer to 20,000 euros. The study used a model created by Germany's Federal Statistical Office and acknowledged that costs can vary depending on a project's size and complexity. Since developers acquire and customize AI models and then embed them in their own products, the study concluded that "complex ecosystems may involve complex sharing of responsibilities."
