23 June 2021

Commission's AI proposal to bolster antitrust enforcement in algorithm-driven markets


The European Commission's landmark proposal on regulating artificial intelligence (AI) is a watershed moment in the evolution of global AI standards. AI - a dynamic grouping of technologies capable of streamlining operations, optimising resource allocation, predicting behaviour and personalising customer treatment - is increasingly being harnessed by modern businesses to achieve competitive advantages. Unsurprisingly, such advantages can also give rise to competition concerns, ranging from digital cartels to self-preferencing by dominant platforms. Moreover, as AI applications are autonomous and programmed for self-learning, competitive harm may be caused even without human intervention, which complicates how liability is determined. Notably, the Commission places the onus on the undertaking using the AI. Businesses must therefore not only be aware of AI's anti-competitive potential, but should also strive to comply with the newly proposed rules, especially those relating to transparency and monitoring obligations.

AI in the digital economy

We are in the middle of what many call the "Fourth Industrial Revolution", where the use of AI by modern business - despite generating market efficiencies and consumer benefits - may, inadvertently or otherwise, produce anti-competitive outcomes and harm consumers. Moreover, AI is frequently used alongside similarly disruptive technologies (such as big data analytics, the Internet of Things (IoT) and quantum computing), thereby compounding the possible competitive harms. A recent market study on algorithms by the UK Competition & Markets Authority (CMA), a position paper on the supervision of algorithms by the Dutch competition authority, and a market survey by the Norwegian Competition Authority all suggest that regulators are acutely aware of the potential competitive threats posed by AI. Particularly high on their list of concerns are price collusion, self-preferencing by platforms and price discrimination. The CMA has already released a summary of responses to its market study, which indicates that most respondents agree on these competitive harms. Additional harms flagged included those caused by technologies associated with algorithms, such as the IoT, which could extend algorithmic harms beyond the purely online sphere into physical environments.

This is also borne out by the Commission's preliminary report on its competition sector inquiry into the IoT, which shows that almost 50% of smart home device manufacturers are considering launching new consumer IoT services within the next three years. By relying increasingly on AI tools, these services are expected to expand the functionalities of the manufacturers' smart home devices and improve the user experience.

However, as AI systems evolve and become increasingly sophisticated, their functioning can also become more opaque and, as such, difficult to assess. For companies and authorities to ascertain whether competition law is being flouted or bypassed through AI, it is imperative to determine how the AI systems under investigation actually work. For authorities especially, this requires specific expertise, supported by technical documentation, record keeping, transparency obligations, mandatory audits and the like. The EU's first-of-its-kind regulatory proposal for AI can therefore be expected to fill this gap by supporting national competition authorities (NCAs) and the European Commission in identifying and investigating AI-related competition law infringements. Under the proposal, national AI supervisory authorities (acting under the guidance of a new European Artificial Intelligence Board) must inform NCAs of any competition law issues they encounter while carrying out their market surveillance functions.

We may thus expect greater antitrust regulatory intervention, as clarity on the intended use and scale of AI systems, together with better knowledge of how they work, will increase NCAs' willingness to investigate businesses deploying AI systems. As to the substantive assessment of AI-linked competition law infringements, the AI proposal makes clear that its provisions are without prejudice to applicable competition law, indicating that the current antitrust rules are considered broadly fit for purpose. Accordingly, digital and tech players (such as multi-sided platforms, online dealers and other intermediaries that rely increasingly on AI applications) must acquaint themselves with the possible competition law infractions associated with the use of AI.

AI and algorithms

AI systems are a dynamic subset of algorithms, encompassing key self-learning attributes that are activated when fed with data flows. Against this background, the AI proposal defines AI systems quite broadly as "software that is developed with one or more listed techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". The proposal lists the applicable techniques and approaches, and that list may be amended to include others so as to keep pace with the rapid development of AI systems.

Digital cartelisation

During a recent International Competition Network (ICN) panel discussion, competition authorities indicated that companies are increasingly using algorithms and machine learning tools, which make cartels harder to detect and their harm harder to determine. Where algorithms are intentionally applied to form a price cartel, antitrust enforcement is relatively straightforward, as competitors have directly exchanged sensitive information through concerted AI processes. The same is true when competitors knowingly use similar AI systems and it is predictable that the underlying algorithms will mutually interact to facilitate tacit collusion. The assessment becomes more complex where a third party develops AI systems for different players in the same market, possibly based on the same data sets. In such a hub-and-spoke scenario, the hub algorithm may be relied on by its different users to fix prices. Collusion becomes more likely as the use of complex pricing algorithms becomes widespread. As a corollary, commercially sensible parallel conduct pursued by the AI systems could come to be regarded as tacit collusion.

However, collusion might equally be an unintended consequence of an AI system's self-learning traits. Individual algorithms complemented by deep learning can determine pricing strategies on their own, based on their study of the market and/or interaction with other AI systems. As their workings are not readily ascertainable, they pose a challenge to competition regulators in determining liability. Companies, too, may find themselves in a difficult position if their AI system autonomously behaves anti-competitively.
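To make the mechanism concrete, the following minimal sketch (in Python) shows how two repricing rules that never communicate - each merely matching the rival's observed price and probing a small increase when prices are aligned - can drift towards a supra-competitive price without any agreement. The cost, monopoly price and step size are entirely hypothetical values chosen for illustration; no real pricing system is modelled.

```python
# Hypothetical sketch: two repricing agents that never communicate,
# each following the same reactive rule. All numbers are invented.
COST = 10.0            # marginal cost (price floor)
MONOPOLY_PRICE = 50.0  # profit-maximising ceiling
STEP = 0.5             # size of an upward "probe"

def next_price(own: float, rival: float) -> float:
    """Match the rival's observed price; probe upward when aligned."""
    if rival > own:
        return min(rival, MONOPOLY_PRICE)   # follow a more expensive rival up
    if rival < own:
        return max(rival, COST)             # follow a cheaper rival down
    return min(own + STEP, MONOPOLY_PRICE)  # prices aligned: test an increase

p1, p2 = 30.0, 40.0
for _ in range(200):
    p1 = next_price(p1, p2)  # seller 1 reprices after observing seller 2
    p2 = next_price(p2, p1)  # seller 2 then reacts to seller 1's new price

print(f"prices after 200 rounds: {p1:.2f} / {p2:.2f}")  # both drift to 50.00
```

Because each rule always matches a price cut but also follows a price rise, neither seller has to "agree" to anything: upward probes are never punished, and both prices ratchet to the ceiling.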

For many products sold via online marketplaces, vendors already rely heavily on AI-driven automated pricing solutions. Price comparison websites, booking platforms and market intermediaries are similarly investing in AI technologies. Beyond dynamic pricing or price collusion, AI systems can also lead to a limitation of production: by gauging rising demand and interest in a product, AI could - similarly to pricing algorithms, or in conjunction with them - coordinate a limitation of production in order to raise prices. Nevertheless, since pricing algorithms can generate efficiencies, they may also qualify for an exemption from the cartel prohibition. In the Webtaxi case, the Luxembourg Competition Authority granted an exemption, holding that the joint use of a pricing algorithm by competing taxi operators generated cost efficiencies by dispatching the taxi closest to a customer, thereby reducing prices for consumers.

Resale price maintenance (RPM) - compelling distributors to charge fixed or minimum prices - is another breach of the cartel prohibition that may be facilitated via AI systems. Four consumer electronics manufacturers were fined by the European Commission for imposing online resale prices. The Commission observed that "the use of sophisticated monitoring tools allowed these manufacturers to effectively track resale price setting in the distribution network and to intervene swiftly in case of price decreases". Moreover, through monitoring algorithms, market participants can access potentially sensitive data on each other's business activities. This month, the Commission and the UK's CMA opened parallel probes into Facebook to examine whether its Marketplace platform uses commercially valuable data, including rivals' advertising data, to distort competition in digital advertising markets.
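Purely by way of illustration, such a monitoring tool can amount to little more than a comparison of observed resale prices against a price floor. The retailer names, prices and threshold in the following Python sketch are invented:

```python
# Hypothetical sketch of an RPM monitoring tool: flag retailers whose
# advertised prices fall below a manufacturer-set floor. All data invented.
RESALE_PRICE_FLOOR = 199.99  # assumed minimum resale price

observed_prices = [
    ("shop-a.example", 219.00),
    ("shop-b.example", 189.50),
    ("shop-c.example", 199.99),
]

for retailer, price in observed_prices:
    if price < RESALE_PRICE_FLOOR:
        # an alert like this is what lets a manufacturer "intervene
        # swiftly in case of price decreases"
        print(f"ALERT: {retailer} lists at {price:.2f}, "
              f"below the floor of {RESALE_PRICE_FLOOR:.2f}")
```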

Therefore, adherence to the obligations of the proposed regulation, including internal audits and compliance checks, could at least pre-empt unintended AI-induced competitive harm and help businesses avoid antitrust liability.

Exclusionary abuse through self-preferencing algorithms

Dominant online market players, including tech companies operating as multi-sided platforms, are feeling the heat of enforcement action aimed at AI architectures that could leverage their dominance through exclusionary practices, such as self-preferencing. In Google Shopping, the European Commission heavily fined the search engine giant for more favourably positioning and displaying its own comparison shopping service in its general search results pages, at the expense of competing comparison shopping services.

The European Commission is similarly investigating Amazon's business practices regarding its “Buy Box” and Prime label, which allegedly artificially favour its own retail offers and the offers of its marketplace sellers that use Amazon's logistics and delivery services. The Italian competition authority is also assessing Amazon's ability to discriminate based on whether or not the sellers on its marketplace use Amazon logistics services.

Going forward, the notion of self-preferencing is likely to be enlarged to cover differentiated treatment more broadly. Responses to the UK CMA's study on algorithms in platform-to-business relations indicated that anti-competitive outcomes are probable if platforms treat non-affiliated businesses differently based on, for example, the fees they pay to the platform. Respondents also noted that ranking algorithms which position products in online marketplace listings could be equally problematic if utilised by large platforms to indirectly restrict rivals' access to customers. Further, advanced recommender systems that predict user preferences may give dominant firms unrivalled influence over online consumers, helping them easily outperform competitors.
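The following Python sketch illustrates, in deliberately simplified and hypothetical terms, how a ranking algorithm could self-preference: a hidden boost for platform-owned listings shifts their position independently of relevance. The fields and weighting are our own assumptions and are not drawn from any real platform:

```python
from dataclasses import dataclass

# Hypothetical sketch of self-preferencing in a ranking algorithm: a
# hidden boost for platform-owned listings overrides pure relevance.
@dataclass
class Listing:
    name: str
    relevance: float      # 0..1 relevance to the user's query
    platform_owned: bool  # affiliated with the platform operator?

AFFILIATION_BOOST = 0.3   # hypothetical hidden weighting

def rank(listings: list) -> list:
    """Order listings by relevance plus a hidden affiliation boost."""
    return sorted(
        listings,
        key=lambda l: l.relevance + (AFFILIATION_BOOST if l.platform_owned else 0.0),
        reverse=True,
    )

results = rank([
    Listing("rival-offer", relevance=0.8, platform_owned=False),
    Listing("own-offer", relevance=0.6, platform_owned=True),
])
print([l.name for l in results])  # ['own-offer', 'rival-offer'] despite lower relevance
```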

Exploitative abuse through individualised pricing

In addition to exclusionary harm, algorithms are capable of facilitating exploitative conduct, such as perfect price discrimination or individualised pricing. Dominant market players in the online world have access to diverse data sets on their customers and competitors from a variety of sources. By monitoring customer behaviour and predicting how much a consumer is willing to pay, algorithms can charge individual customers the highest price they will accept. As noted in the summary of responses to the CMA's algorithms market study, individualised pricing algorithms may also target consumer vulnerabilities and "susceptibilities" such as insecurities, weaknesses and natural biases. For example, Uber's head of economic research has stated that its analysis shows that "people are more likely to pay higher surge prices if their mobile device is almost out of battery". The CMA's summary of responses, however, records Uber as noting that it does not take into account any rider-specific or device-specific information (for example, payment method or low battery) for pricing purposes. Such cases bring the transparency of the workings of the AI systems concerned to the forefront.
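A hypothetical sketch in Python of how such individualised pricing could work: a stubbed model predicts a customer's willingness to pay from behavioural signals, and the quoted price is set just below it. The features and weights are invented for illustration; a real system would rely on a trained model and far richer data:

```python
# Hypothetical sketch of individualised pricing: a stubbed model predicts
# willingness to pay (WTP) from behavioural signals and the quote is set
# just below it. Features and weights are invented for illustration.
BASE_PRICE = 20.0

def predicted_wtp(features: dict) -> float:
    """Toy stand-in for a trained demand model."""
    wtp = BASE_PRICE
    wtp += 5.0 * features.get("past_peak_purchases", 0)    # assumed signal
    wtp += 3.0 if features.get("premium_device") else 0.0  # assumed signal
    return wtp

def personalised_price(features: dict) -> float:
    # charge just under the predicted maximum, never below the base price
    return max(BASE_PRICE, predicted_wtp(features) - 0.01)

print(personalised_price({"past_peak_purchases": 2, "premium_device": True}))  # 32.99
```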

The question of liability

Commissioner Margrethe Vestager sums up the strict approach to determining liability for AI-induced harm: "businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So they had better know how that system works." Be that as it may, AI systems are inherently capable of independent action. Undertakings using them therefore remain exposed to antitrust scrutiny if their AI systems distort competition, even where that was not the intended aim. As establishing causation between an anti-competitive outcome and an AI application requires a formidable technical assessment, reliance might instead be placed on mere correlations between the use of AI systems and distortions of competition. The UK's CMA has already made it clear that businesses must be able to explain the algorithms they use, and that they will be held responsible for any misuse that harms competition.

Businesses should follow the post-market monitoring and reporting obligations laid down in the Commission's AI proposal and report incidents that could violate competition rules. They should also strive to abide by the standards laid down in the proposal for high-risk AI systems and by the voluntary codes of conduct envisaged for non-high-risk AI systems. Such actions could temper the Commission's strict liability enforcement approach to AI-related anti-competitive conduct and encourage a case-by-case analysis. If businesses comply with the regulation and promptly report any anti-competitive "rogue" AI conduct, the Commission may - in appropriate cases - dispense with strict liability. This would strike a workable balance between innovation and regulatory intervention, allowing the EU to foster the development and use of AI technology across all industries and to cement its leading position in human-centric and trustworthy AI.

What may be expected?

The proposal for an AI regulation, along with other key proposals such as the Digital Services Act, the Digital Markets Act (see our previous article) and the Data Governance Act, will join an already intricate regulatory landscape governing the digital economy. As regards competition enforcement and AI, it will be invaluable to observe how the proposed national AI supervisory authorities report competition law infringements to NCAs. We anticipate NCAs coordinating closely with these AI bodies during their investigations.

Finally, we might witness NCAs and other public bodies enlisting AI systems themselves to detect competition law violations. Spain's competition authorities have argued that regulators should be in a position to use AI systems to detect anti-competitive conduct. Meanwhile, Germany's national railway company, Deutsche Bahn, is testing a cartel screening algorithm that scans for the traces left behind by digital cartels, such as identical bids or suspicious pricing patterns (a minimal illustration of such a screen follows below). This is all the more interesting in light of the proliferation of "black box" algorithms, whose functioning cannot easily be understood by reading the underlying code. In the area of merger control, the Commission's chief economist, Pierre Régibeau, has advocated that agencies utilise algorithms to detect killer acquisitions. It will be particularly instructive to see how algorithms could support the Commission's current policy of encouraging below-threshold referrals from member states (see our previous article). Naturally, any such detection-focused AI systems will themselves need to abide by the requirements of the AI proposal.
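By way of illustration, a very basic bid screen of the kind described above might simply flag tenders whose bids are identical or implausibly tightly clustered. The coefficient-of-variation threshold in this Python sketch is an assumption; real screening tools combine many such markers:

```python
from statistics import pstdev

# Hypothetical sketch of a bid screen: flag tenders whose bids are
# identical or implausibly tightly clustered. Threshold is an assumption.
def screen_bids(bids, cv_threshold=0.01):
    """Return a flag describing why the bids look suspect, else None."""
    if len(set(bids)) == 1:
        return "identical bids"
    mean = sum(bids) / len(bids)
    if pstdev(bids) / mean < cv_threshold:
        return "suspiciously low price dispersion"
    return None

print(screen_bids([100.0, 100.0, 100.0]))  # identical bids
print(screen_bids([100.0, 100.2, 100.1]))  # suspiciously low price dispersion
print(screen_bids([100.0, 115.0, 92.0]))   # None (no flag)
```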