Navigating the AI Frontier, Part I

The European Union is on the verge of enacting the landmark Artificial Intelligence Act (AI Act), which will—for better or worse—usher in a suite of new obligations, and hidden pitfalls, for individuals and firms trying to navigate the development, distribution, and deployment of software.

Over the coming months, we will be delving into the nuances of the proposed text, aiming to illuminate the potential challenges and interpretive dilemmas that lie ahead. This series will serve as a guide to understanding and preparing for the AI Act’s impact, ensuring that stakeholders are well-informed and equipped to adapt to the regulatory challenges on the horizon. 

The AI Act was approved unanimously by representatives of EU national governments on Jan. 26 (the approved text is available here). But this is not the end of the legislative process. Most importantly, the European Parliament has yet to approve the law’s final text, with a final vote scheduled for April 10-11. It is generally expected that Parliament will give its approval.

Once the AI Act is enacted, we will still need to wait, likely until the summer of 2026, for it to become fully applicable. Some of its provisions will, however, become binding sooner than that. For example, the act’s prohibitions on practices such as using AI systems with “subliminal techniques” will come into force after just six months, and the codes of practice will become effective after nine months (perhaps around Easter 2025). The rules on general-purpose AI models, by contrast, are expected to take effect about a year after enactment, or roughly the summer of 2025.

For this post, we want to offer some thoughts on potential issues that could arise from how the act defines an “AI system,” which will largely determine the law’s overall scope of application.

The AI Act’s Scope and How It Defines an ‘AI System’

As we have written previously, there has been a concern, dating back to the first drafts of the AI Act, that it will not be “at all limited to AI, but would be sweeping legislation covering virtually all software.” The act’s scope is determined primarily by its section defining an “AI system” (Article 3(1)). This definition has undergone various changes, but the end result remains very broad:

‘AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

“Varying levels of autonomy” may still be read to include low levels of autonomy. “Inferring” from “input” to generate “content” also could have a very broad reading, covering nearly all software.
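To see how far a literal reading could stretch, consider a deliberately mundane sketch (our own hypothetical illustration in Python; the function name and threshold are invented for this post). It takes an input, applies a single hand-written rule, and produces a “recommendation” that influences a virtual environment, which is arguably all the quoted definition requires:

    # A hand-written routine: it takes input and produces a "recommendation"
    # that can influence a virtual environment (what the user sees on screen).
    # Read literally, it "infers, from the input it receives, how to generate
    # outputs such as ... recommendations" -- yet few would call it AI.
    def recommend_umbrella(chance_of_rain: float) -> str:
        """Return a recommendation based on one fixed, human-chosen threshold."""
        if chance_of_rain > 0.5:
            return "Take an umbrella."
        return "Leave the umbrella at home."

    print(recommend_umbrella(0.7))  # -> Take an umbrella.

Of course, the preamble’s carve-out for rules “defined solely by natural persons” (discussed below) is meant to exclude exactly this kind of program; the question is how reliably that carve-out maps onto older software techniques.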

Some helpful clarity is provided in the act’s preamble (preambles are used to aid interpretation of EU legislation), specifically in Recital 6, which explicitly states that the definition:

should be based on key characteristics of artificial intelligence systems, that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.

Even here, however, some risks of excessive scope remain. Programmers already widely use AI tools like GitHub Copilot, which generate code and thereby partially automate programmers’ jobs. Hence, some may argue that code created with the help of such tools includes rules that are not “defined solely by natural persons.”

Moreover, the recital characterizes the capacity to “infer” in a way that could be interpreted broadly, including software that few would characterize as “AI.” The recital attempts to clarify that “[t]he capacity of an AI system to infer goes beyond basic data processing, enabl[ing] learning, reasoning or modelling.” The concepts of “learning,” “reasoning,” and “modeling” are, however, all contestable. Some interpretations of those concepts—especially older interpretations—could be applied to what most today would see as ordinary software.

Given this broad definition, there is a palpable risk that traditional software systems such as expert systems, search algorithms, and decision trees all might inadvertently fall under the act’s purview, despite the disclaimer in Recital 6 that the definition “should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.”

The ambiguity arises from the evolving nature of these technologies and their potential overlap with AI functionalities. Indeed, in one sense, these technologies might be considered to fall under the umbrella of “AI,” insofar as they attempt to approximate human learning. But in another sense, they do not. These techniques have been employed for decades and can be used, as they long have been, in ways that do not implicate more recent advances in artificial-intelligence research.

For instance, expert systems (in use since 1965) are designed to employ rule-based processing to mimic human experts’ decision-making abilities. These, it could be argued, infer from inputs in ways that are not entirely dissimilar to AI systems, particularly when they are enhanced with sophisticated logical frameworks that allow for a degree of dynamic response to new information. Similarly, search algorithms—particularly those that employ complex heuristics or optimization techniques to improve search outcomes—might blur the lines between traditional algorithmic processing and AI’s inferential capabilities.
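To make that argument concrete, here is a minimal sketch of a forward-chaining, rule-based expert system in the classic style (the rules and facts are invented for illustration; real systems such as MYCIN relied on far larger rule bases). Every rule is authored by a person, yet the program derives conclusions it was never given directly:

    # All rules below are written by a human -- "defined solely by natural
    # persons" -- yet the program derives conclusions that were never stated
    # explicitly, behavior one could argue amounts to "inference."
    RULES = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "fatigue"}, "recommend_doctor_visit"),
    ]

    def forward_chain(facts):
        """Keep applying rules until no new conclusions can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"fever", "cough", "fatigue"}))
    # -> includes "possible_flu" and "recommend_doctor_visit"

Whether such a program “goes beyond basic data processing” is precisely the kind of line-drawing question the recital leaves open.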

Decision trees (also in use since the 1960s) further complicate this picture. In their simplest form, decision trees are straightforward, rule-based classifiers. When they are used within ensemble learning methods like random forests or boosted trees, however, they contribute to a system’s ability to learn from data and make predictions, edging closer to what might be considered AI. 
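A short sketch illustrates the distinction (assuming scikit-learn is available; the loan data, labels, and thresholds are invented for this example). The same branching structure can either be written by hand, with every rule chosen by a person, or learned from data, and only the latter plausibly falls outside Recital 6’s carve-out:

    from sklearn.tree import DecisionTreeClassifier

    # 1) A decision "tree" written entirely by a human: pure hand-coded rules.
    def approve_loan_manual(income: float, debt: float) -> bool:
        if income > 50_000:
            return debt < 20_000
        return debt < 5_000

    # 2) A decision tree whose split points are learned from (invented) data.
    X = [[60_000, 10_000], [30_000, 8_000], [80_000, 30_000], [25_000, 2_000]]
    y = [1, 0, 0, 1]  # 1 = approved, 0 = denied (illustrative labels only)
    learned_tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(approve_loan_manual(60_000, 10_000))       # True
    print(learned_tree.predict([[60_000, 10_000]]))  # e.g., array([1])

Bundle many such learned trees into a random forest and the system looks unambiguously like machine learning; the underlying if/else structure, however, is essentially the same.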

Thus, although some of these techniques might be considered AI, they are, in many cases, components of software that have been used for quite a long time without cause for concern. Regulatory focus on such software techniques is almost certain to miss the mark and be either underinclusive or overinclusive, because such rules would, in a sense, be attempting to regulate the use of math.

That’s why it would seem that a much better approach to addressing risks arising from the uses of AI (or, indeed, of any computer system) is via legal regimes that are focused on, and well-tested in dealing with, specific harms (e.g., copyright law) associated with the use of these systems. The EU’s alternative approach to regulating AI technology faces the heavy burden of demonstrating that existing laws are insufficient to handle such harms. We are skeptical that EU legislators have satisfied that burden.

Nonetheless, assuming the EU maintains its current course, the interpretive ambiguities surrounding the AI Act raise substantial concerns for software developers. Without greater clarity, the law potentially subjects a wide array of software systems to its regulatory framework, regardless of whether they employ learning algorithms. This uncertainty threatens to cast a shadow over the entire software industry, potentially requiring developers of even traditional software to navigate the AI Act’s compliance landscape.

Such a scenario would inevitably inflate compliance costs, as developers might need to conduct detailed, system-by-system analyses to determine whether their products fall within the act’s scope, even when they use well-established, non-learning-based techniques. This not only burdens developers with additional regulatory overhead, but also risks stifling innovation by imposing undue constraints on the development and deployment of software solutions.

Conclusion

Inevitably, there is going to be some degree of uncertainty as to the AI Act’s scope. This uncertainty could be partially alleviated by, e.g., explicitly limiting the act to specific techniques that currently are broadly considered to be “AI” (even “machine learning” itself is excessively broad).

The AI Act mandates that the European Commission develop guidelines on the application of the law’s definition of an AI system (currently: Article 82a). One can hope that the guidelines will be sufficiently specific to address the concern that “the public is told that a law is meant to cover one sphere of life” (AI), “but it mostly covers something different” (software that few today would consider AI).



Viewing all articles
Browse latest Browse all 45

Latest Images

Trending Articles





Latest Images