The European exception: Why is the EU shielding itself against technology?

“Who do I have to call if I want to talk to Europe?” Henry Kissinger, then Secretary of State to President Richard Nixon, once asked sarcastically, caricaturing the complex organizational structure of a European Union still under construction. Over the years, Americans have learned that there is one telephone number they should keep close at hand: that of the Commissioner for Competition. This key figure makes momentous decisions, such as approving or blocking mergers (in 2001 the Commission stopped that of the American companies GE and Honeywell) and fining companies for abuse of a dominant position, such as the €2.42 billion penalty imposed on Google last year. The technology sector is in Brussels’ crosshairs, both because of the enormous sums it moves and because of the scarce regulation that surrounded it until now.

It is no coincidence that the Commissioner for Competition, Margrethe Vestager, also holds the title of Executive Vice President of the Commission for a Europe Fit for the Digital Age. The EU is taking very seriously the construction of a model unique in the world, one in which guarantees of privacy and respect for the user stand above all other interests. If that means limiting the use of a given technology, so be it; if companies must be reined in, likewise. When giants like Meta threaten to leave the continent unless their demands are met, they receive a firm answer. “I’m on Instagram. If it stops working I’ll gain 10 or 20 minutes a day,” said Vestager herself. Europe is sending a message: the law of the jungle does not apply on EU territory.

The regulatory deployment with which the EU intends to build this protective shield against the power of the big technology companies rests, for now, on three pillars. The first, the General Data Protection Regulation (GDPR), came into force in 2018. The second, the package made up of the Digital Services Act (DSA) and the Digital Markets Act (DMA), was agreed upon two weeks ago and is pending minor paperwork. The third, the European Artificial Intelligence Regulation, is being discussed right now and is expected to be ready over the next year.

1. Defense of data privacy

The GDPR establishes a fundamental principle as simple as it is unprecedented elsewhere in the world: citizens have the right to know what data is collected about them. Companies must ask permission to collect information and must state what they intend to use it for.

No other country or union of countries has so far implemented such an ambitious regulation. That has caused tensions with some of the big US companies that dominate the technology market. Since Google, Facebook and Apple host their users’ data in the US, a country with laxer rules, Brussels has demanded guarantees that the data of European citizens will be treated in accordance with European protection standards. Although the GDPR was initially seen in the United States as an example of proverbial European overregulation, the State of California recently passed an almost identical regulation, and there are voices in Washington advocating that the European standard be replicated at the federal level.

Despite having been in force for four years, enforcement of the GDPR is still far from complete. The European Data Protection Supervisor has noted that the European institutions themselves fail to comply with some of its precepts. Still, it is considered the most advanced standard of its kind, and it was the first unequivocal signal that Brussels was determined to tackle the effects of digitization head-on.

But the EU is a living organism; nothing is set in stone. Proof of this is that some sectors advocate revising the GDPR. “We should modernize the norm. Algorithms are only as good as the data they work with. We need a change of mentality: if personal data is not allowed to be processed, we will not move forward,” German MEP Axel Voss, of the European People’s Party (EPP), explains to this newspaper. The veteran CDU politician believes that Europe’s approach should be “less restrictive and more open”. Put another way: if we want European technology companies to emerge, fewer limits should be placed on their ability to process sensitive personal data.

2. Transparency and accountability

Balancing the protection of citizens’ rights against the risk of clipping innovation’s wings is not easy. It is one of the points of friction between the two great European political families that negotiate the major community regulations: conservatives and socialists. The Digital Services Act (DSA), agreed two weeks ago by the European Parliament and the Council and expected to come into force in 2024, establishes a series of transparency obligations for companies. They will have to meet requirements such as opening their algorithms to audits to verify that no group is discriminated against, and quickly removing illegal content disseminated through their platforms. “What is illegal offline will also be illegal online,” Vestager proclaimed by way of summary when the DSA agreement was closed.

Agreed in March, the Digital Markets Act (DMA) aims to end the monopolies of big tech. The rule targets digital companies with an annual turnover of more than €8 billion and a market value of more than €80 billion. These conditions are met by Meta, Amazon, Alphabet (Google’s parent company), Apple and Microsoft, as well as Chinese companies such as Alibaba and the European SAP.

The big technology companies prefer to keep a low profile until both laws become a reality. Asked by EL PAÍS for their first impressions, Meta and Apple declined to comment. “We welcome the goals of the DSA. As the law is finalized and enforced, the details will matter. We look forward to working with lawmakers to get the remaining technical details right and ensure the law works for everyone,” says a Google spokesperson. Amazon and Microsoft point to unofficial comments from spokespersons welcoming the strengthening of European consumer protection vis-à-vis both large and small companies.

3. Limits on the uses of artificial intelligence

The third pillar, still under construction, is the European Artificial Intelligence (AI) Regulation. This document, whose approval is scheduled for next year, will regulate the uses of a technology that the EU considers essential for competitiveness, while remaining aware that its most harmful applications must be limited.

The negotiation of this section is not proving easy. This week a special report on AI was approved in Strasbourg that sets out the European Parliament’s position ahead of the talks between the political groups, already underway, on amendments to the draft regulation. The document recommends increasing investment to €20 billion per year, but also stresses that the adverse effects of AI must be controlled through the risk-based approach proposed by the Commission: systems deemed harmless can operate without restriction, but as their potential dangers increase they are subjected to tighter controls or even virtual prohibition.

Facial recognition in public spaces would fall into this last category, except when deployed for “national security reasons”, as would systems handling medical information and autonomous weapons operating without any human control. “We agree with the risk-assessment approach to the technology, but we believe that more transparency measures must be included, along with a clear explanation of how the collected data is used,” stresses Alex Agius Saliba, an MEP from the Socialists and Democrats group who is very active on technology matters. “We also want the necessary guarantees to be given to demonstrate that the databases on which these tools are based do not generate discrimination based on gender, race or income,” he adds.

The negotiation of the decisive AI regulation has several hot spots. One of the main ones is what to do with facial recognition, a useful application for some and the paradigm of the surveillance state for others. The Greens advocate an unconditional ban. The EPP is less wary of it: “We should be open. Facial recognition can be used while respecting people’s privacy,” says Voss.

The intermediate position, the one that has prevailed so far, is that of the socialists. “I am not in favor of banning tools: I think it is necessary to define very clearly who can use them and for what,” explains Ibán García del Blanco, MEP for the Spanish Socialist Party (PSOE). “Whenever there is a probability that a fundamental right will be curtailed in the name of a greater public interest, there must be judicial oversight, just as the police need a warrant to enter a home.”

Another element that has not yet been agreed upon is governance. The socialists defend the creation of a kind of European AI agency to ensure the proper use of algorithms; the EPP sees no need for one. The most immediate future will depend on what they agree.
