EU leaders insist that addressing the ethical questions surrounding AI will lead to a more competitive market for AI goods and services, increase adoption of AI, and help the region compete with China and the United States. Regulators hope that high-risk labels will encourage more professional and responsible business practices.
Business respondents say the draft legislation goes too far, imposing costs and rules that will stifle innovation. Meanwhile, many human rights, AI ethics, and antidiscrimination groups argue the AI Act doesn’t go far enough, leaving people vulnerable to powerful businesses and governments with the resources to deploy advanced AI systems. (The bill notably does not cover uses of AI by the military.)
(Mostly) Strictly Business
While some public comments on the AI Act came from individual EU citizens, responses primarily came from professional groups for radiologists and oncologists, trade unions for Irish and German educators, and major European businesses like Nokia, Philips, Siemens, and the BMW Group.
American companies are also well represented, with commentary from Facebook, Google, IBM, Intel, Microsoft, OpenAI, Twilio, and Workday. In fact, according to data collected by European Commission staff, the United States was the fourth-largest source of comments, after Belgium, France, and Germany.
Many companies expressed concern about the costs of new regulation and questioned how their own AI systems would be labeled. Facebook wanted the European Commission to be more explicit about whether the AI Act’s mandate to ban subliminal techniques that manipulate people extends to targeted advertising. Equifax and MasterCard each argued against a blanket high-risk designation for any AI that judges a person’s creditworthiness, claiming it would increase costs and decrease the accuracy of credit assessments. However, numerous studies have found instances of discrimination involving algorithms used in financial services and lending.
NEC, the Japanese facial recognition company, argued that the AI Act places an undue amount of responsibility on providers of AI systems rather than on users, and that the draft’s proposal to label all remote biometric identification systems as high risk would carry high compliance costs.
One major dispute companies have with the draft legislation is how it treats general-purpose or pretrained models that are capable of accomplishing a range of tasks, like OpenAI’s GPT-3 or Google’s experimental multimodal model MUM. Some of these models are open source, and others are proprietary creations sold to customers by cloud services companies that possess the AI talent, data, and computing resources necessary to train such systems. In a 13-page response to the AI Act, Google argued that it would be difficult or impossible for the creators of general-purpose AI systems to comply with the rules.
Other companies working on general-purpose systems or artificial general intelligence, such as Google’s DeepMind, IBM, and Microsoft, also suggested changes to account for AI that can carry out multiple tasks. OpenAI urged the European Commission to avoid banning general-purpose systems in the future, even if some of their use cases may fall into a high-risk category.
Businesses also want the creators of the AI Act to change definitions of critical terminology. Companies like Facebook argued that the bill uses overbroad terminology to define high-risk systems, resulting in overregulation. Others suggested more technical changes. Google, for example, wants a new definition added to the draft bill that distinguishes between “deployers” of an AI system and the “providers,” “distributors,” or “importers” of AI systems. Doing so, the company argues, would place liability for modifications made to an AI system on the business or entity that makes the change rather than on the company that created the original system. Microsoft made a similar recommendation.
The Costs of High-Risk AI
Then there’s the matter of how much a high-risk label will cost businesses.
A study by European Commission staff puts compliance costs for a single AI project under the AI Act at around 10,000 euros and finds that companies can expect initial overall costs of about 30,000 euros. As companies develop professional approaches to compliance and come to treat it as business as usual, the study expects costs to fall closer to 20,000 euros. The study used a model created by the Federal Statistical Office in Germany and acknowledges that costs can vary depending on a project’s size and complexity. Since developers acquire and customize AI models, then embed them in their own products, the study concludes that a “complex ecosystem would potentially involve a complex sharing of liabilities.”