This article has been translated from Romanian with the help of Notion AI.
What are the effects of AI and data legislations on companies? To understand this, we need to start with American movies versus European movies 🙂 Who hasn’t seen one of those Marvel movies, of which there have been over (I believe) 30?
Action, explosions, noise, fantastic events, successes despite all adversities, and in the end, the good triumphs. In contrast to this picture, the European film is artistic, slow-paced, perhaps black and white, with sadness, complications, and not always a happy ending. The classic separation between Europe and America, American superiority versus European backwardness.
The same goes for technology. America is the country of Marvel, where everything is possible in terms of innovation and companies are mega successes, while Europe is the country of slow-paced films, companies that don’t innovate, and seem sad compared to their American counterparts. We hear these things quite often, and the main reason why European companies are seen this way is that there is not enough venture capital to encourage innovation. Oh, and there’s also the issue of regulation. Apparently, there are too many regulations that stifle innovation. Is it really true?
This is the dilemma we want to investigate in this analysis. Specifically, we will explore the impact of the new EU regulations on data and artificial intelligence on companies.
Why do we care?
- If you work in a tech company and/or develop tech products, interesting times lie ahead. How do you prepare for them?
- We often hear about European legislation and European bureaucracy versus the allure of investment and regulation climate in the US. Is it really that bad?
So what is it about?
- We will discuss the main elements of European legislation regarding data and artificial intelligence
- We will see the impact on companies and how or if the EU can help in other ways to reduce the impact.
- We will conclude by discussing how you should prepare for a series of changes.
Let’s get started, then.
What is the EU up to this time?
What isn’t it doing? 🙂 That’s the question. We are approaching the end of the Von der Leyen Commission’s mandate, and there have been numerous regulations and reform proposals related to the digital space. Here are just a few of them, noting that you can see the entire universe here:
- Artificial Intelligence Act (AI Act)
- Data legislation (Data Governance Act and Data Act)
- Cyber Resilience Act
- Gigabit Infrastructure Act
- Digital identity legislation (eIDAS 2)
- Platform Work Directive (regulating work on ride-sharing platforms)
- European Chips Act
- Cyber Solidarity Act
- Digital Services Act
- Digital Markets Act
- Network and Information Systems Directive 2 (NIS2)
We have already discussed most of them on the website, but more from the perspective of their influence on Romania and its citizens. But what about the private sector? Is it really being strangled by cumbersome and hard-to-implement regulations? Some regulations apply to all companies (such as GDPR, which regulates data protection), others are sector-specific (the draft directive on platform work), and others focus on consumer protection, impacting companies indirectly through the rights they give consumers.
Let’s briefly look at the ones with the biggest impact on the IT sector, as well as on more digitally advanced non-IT companies:
- AI Act
- Data legislation (Data Act and Data Governance Act)
The AI Act
We have discussed the AI Act here, but let’s briefly review the main elements of the legislation to establish a foundation for the anticipated impact on companies. It has not yet come into effect but is expected to do so in 2024. The key points of the legislation are:
- Definition of an AI system: a machine-based system that is designed to operate with varying levels of autonomy and can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
- Classification of AI systems into four risk categories: unacceptable risk (prohibited practices), high risk, limited risk, and minimal risk.
Who has obligations?
The legislation distinguishes several types of actors involved in the development, commercialization, import, or distribution of AI systems: suppliers, implementers, importers, distributors, and device manufacturers. Their main obligations materialize only IF they place high-risk systems on the market. According to some estimates, 30% of AI systems produced in the EU would fall into this category, while the EU’s own impact assessment puts the figure at around 15%. The situation will only become clear once these high-risk systems are registered in the European database required by the legislation.
Let’s highlight some key points from the compromise text between the Parliament and the Council. Suppliers of high-risk systems must:
- Conduct conformity assessments before placing high-risk systems on the market (these can be done internally or outsourced) in three areas: quality management system, technical documentation, and conformity assessment between technical documentation and the design and development process.
- Ensure that individuals providing human oversight of AI systems are informed about the risk of automation bias (i.e., they should not rely solely on the output of the AI system in making decisions).
- Provide specifications for the data used to train the AI system, including information about limitations and possible existing assumptions within the dataset. For example, if the data is predominantly from a specific geographic region, which could create limitations in using the systems in other regions.
- Prepare technical documentation.
- Take corrective actions to bring the system into conformity and inform the importer and implementer.
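The data-specification duty above is easy to picture as a structured “datasheet” kept alongside the model. A minimal sketch in Python, with hypothetical field names (the Act prescribes the substance of the documentation, not any particular schema):

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataSheet:
    """Illustrative record of training-data provenance and limitations.
    Field names are hypothetical; the AI Act does not mandate a schema."""
    name: str
    sources: dict  # e.g. {"EU": 0.9, "US": 0.1} -- share of records by region
    known_assumptions: list = field(default_factory=list)

    def limitations(self, threshold: float = 0.8) -> list:
        """Flag regions that dominate the dataset: a system trained mostly
        on one region may not transfer to others, as the Act points out."""
        notes = list(self.known_assumptions)
        for region, share in self.sources.items():
            if share >= threshold:
                notes.append(f"{share:.0%} of data comes from {region}; "
                             f"performance outside {region} is unverified")
        return notes

# Usage: a supplier documents the dataset and surfaces its limitations.
sheet = TrainingDataSheet(
    name="qa-defect-detector-v1",
    sources={"EU": 0.92, "US": 0.08},
    known_assumptions=["labels produced by a single annotator team"],
)
print(sheet.limitations())
```

The point is not the code itself but that the required information (limitations, assumptions, geographic skew) becomes explicit and checkable rather than buried in a PDF.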
Attention! You become a supplier if you make substantial modifications to an AI system already on the market such that it becomes high-risk, or if you put your brand on an existing system.
Implementers of such systems must:
- Ensure that they put the systems into operation in accordance with the instructions for use.
- If they exercise control over the system, implement human oversight.
- Ensure that individuals overseeing the system are “competent, qualified, and adequately trained and have the resources to ensure the supervision of the AI system.”
- Monitor the implemented cybersecurity measures.
- If they exercise control over the input data, ensure that the data is relevant and representative for the intended purpose of the system.
- Inform the manufacturer, importer, distributor, and competent national authorities if the use of the system could entail risks to health, safety, or fundamental rights of individuals and suspend the use of the system if necessary.
- Keep the automatically generated logs, if these are under their access and control.
- [in labour situations] Before putting a system to work, inform employee representatives and reach an agreement regarding its use. If necessary, conduct a data protection impact assessment.
- Conduct an impact assessment in the intended context of use (with a few exceptions, such as safety systems in traffic management or utility provision).
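Two of the implementer duties above, keeping the automatically generated logs and guarding against over-reliance on the system’s output, can be sketched together. A deliberately minimal, hypothetical structure (the Act requires that logs be kept, not this format):

```python
import datetime

class AISystemLog:
    """Append-only log of an AI system's outputs and the human decisions
    taken on them. Structure is illustrative, not mandated by the Act."""

    def __init__(self):
        self.entries = []

    def record(self, model_output: str, human_decision: str, overridden: bool):
        # One entry per decision, timestamped, so the log can later be
        # handed to an authority or audited internally.
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_output": model_output,
            "human_decision": human_decision,
            "overridden": overridden,
        })

    def override_rate(self) -> float:
        """Share of outputs the human reviewer overrode. A rate stuck near
        zero over many decisions can signal rubber-stamping of the system."""
        if not self.entries:
            return 0.0
        return sum(e["overridden"] for e in self.entries) / len(self.entries)
```

An oversight team could review `override_rate()` periodically; a value of exactly zero across thousands of decisions is itself a warning sign that human oversight exists only on paper.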
Importers must ensure that:
- The supplier has conducted a conformity assessment.
- The technical documentation and instructions for use are available.
- The system bears the conformity marking.
If the system interacts with individuals, suppliers must inform users that they are interacting with an AI system. Users of deep-fake systems must also disclose that the content is artificially generated or manipulated.
Counterbalancing these obligations, from which I have listed only a part, is the inclusion of measures to stimulate innovation in this act. The main instrument is the “regulatory sandbox,” a tool for experimenting with products and testing innovations before they are brought to market under the supervision of competent authorities. This is not presented as an obligation of the Member States, but rather an option. Furthermore, the proposal states that startups and small providers should have priority access to such instruments.
Now, let’s look at the impact and complaints from the main actors targeted by this legislation. Let’s start with the perspective on sandboxes:
- An organization representing EU startups states that the legislation on sandboxes should be clearer because startups often do not necessarily fit into a specific domain or may change their location.
- Others are more pessimistic. A report by AI Austria shows that two-thirds of the surveyed startups believe that their activity would be slowed down due to the required adaptations. Approximately 30% of the included startups would classify their systems as high-risk, and the costs of adaptation could reach several hundred thousand euros.
- Not to mention the venture capital climate. The same report indicates that venture capital funds could redirect their investments, considering the decreased interest in high-risk systems that would become more expensive.
So, how is it? Is there too much regulation? As we can see, most of the rules apply to high-risk AI systems, and the market for such systems is limited, at least for now. According to the Commission’s estimates, up to 15% of systems would fall into the high-risk category, while the cited reports put the figure at around 30%. The AI market will continue to grow, especially after the explosion of generative AI such as ChatGPT, which will be subject to different rules within this legislation. But let’s not digress now.
At first glance, there doesn’t seem to be a great balance between the regulations in the legislation and the proposed regulatory sandbox. Does this compensate for the exodus of venture capital from the EU? Perhaps not, but the funds made available at the EU level through programs like the Digital Europe Programme could help. The problem is that the pitch in front of a venture capitalist is very different from the funding application required to access such funds.
Let’s not forget about the Digital Innovation Hubs (DIHs), whose role is to support companies in testing solutions before bringing them to market. The problem here is visibility: companies fall back on familiar reasoning of the form “venture capital is easier; I won’t bother with this because I don’t know what I’d gain.”
Data, data, data
There is no artificial intelligence without data. The Data Act and the Data Governance Act aim to facilitate access to data in order to stimulate innovation. More data, better-trained systems, better results, and so on. But wait, wasn’t there something about data protection and minimizing data collection? Yes, but that pertains to the protection of personal data, and its principles carry over into these two regulations. Here, we mainly talk about non-personal data, such as that generated by smart devices.
Speaking of which, even Bard got confused about these two regulations.
The idea is that the Data Act sets out who can access data, how, and who can benefit from it, while the Data Governance Act creates the framework through which companies can grant access to data or request access to it, including from the public sector, in order to create value from it.
As a company, you can find yourself in two situations here:
- You may have data that others would need. Perhaps you have a series of devices that measure the energy efficiency of a building or data regarding the speed at which QA is performed on a production line. What do you do with this data? You might not want to give it away because it belongs to you. But what if you can gain something from it?
- You may need data to develop a new product. Where do you get the data from? Do you directly test with a potential customer? Perhaps the datasets are too limited, or maybe you want to scale an existing product for a different industry and need access to data from another company in a different sector. Maybe you can obtain the data from a third-party organization that acts as a “data intermediary.”
This is where the two legislations come into play. What are the obligations and rights?
- Users who generate data through the devices they use will be able to port their data to other companies. For example, users could transfer their usage data to a third-party repair service to fix a device without limitations from the original manufacturer.
- Device manufacturers will need to provide information about the quantity of data generated, how users can access the data, and how users can request the transfer of data to third parties, among other details.
- The transfer of data from data holders to data recipients will be done with reasonable compensation, with the only limitations being related to trade secrets.
- The legislation includes provisions that limit unfair terms in contracts for data distribution, particularly with regard to small and medium-sized enterprises (SMEs). The legislation also introduces the concept of a “trade secret holder,” which allows companies to refuse access to data for legitimate competitive reasons.
- The legislation also introduces intermediaries that facilitate data transfers. If you are unsure where or from whom to request data, you could use such a service.
- For certain situations of public interest and emergencies, public sector organizations can request access to companies’ data.
- The legislation introduces the concept of “data altruism,” which allows for the voluntary distribution of generated data by companies, not only upon request but also proactively.
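To make the portability and trade-secret provisions above concrete, here is a deliberately simplified sketch. Class and field names are hypothetical illustrations, not anything defined in the regulation:

```python
from dataclasses import dataclass

@dataclass
class DataHolder:
    """Illustrative device-data holder under the Data Act: users may port
    the data their device generates to a third party of their choice.
    Names and structure are hypothetical."""
    records: dict       # user_id -> list of device readings
    trade_secrets: set  # field names the holder may legitimately withhold

    def export_for_user(self, user_id: str, fields: list) -> list:
        # Strip only trade-secret fields -- in this sketch, the one
        # refusal ground modeled from the legislation.
        shared = [f for f in fields if f not in self.trade_secrets]
        return [{k: r[k] for k in shared if k in r}
                for r in self.records.get(user_id, [])]
```

A third-party repair shop could then receive, say, fuel-consumption readings without the manufacturer’s proprietary calibration data ever leaving the holder.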
What do the affected parties say about this?
- DigitalEurope, an organization representing large technology companies in the EU, states that the Data Act is a “leap into the unknown” that does not clearly explain what data needs to be distributed and to whom.
- The Alliance for Digital SMEs, which represents SMEs in Europe, emphasizes that SMEs should have more protection under this legislation but praises the explicit provisions regarding contractual terms for SMEs.
- The International Road Transport Union, an organization representing companies in the transport and logistics sector, considers this legislation beneficial because they already had data distribution agreements and purchased devices to access data, such as fuel consumption data.
- Siemens, on the other hand, suggests that such laws could jeopardize trade secrets, especially for companies that produce aftermarket parts.
So, what is the verdict? Is there too much regulation? It seems that the criticism is less about excessive regulation and more about the need to protect trade secrets. Data markets are crucial for the digital transformation of companies. It is not just about the products they create but also about the data they generate.
How companies "suffer"
The intentions of European regulations are to promote ethical values in the digital space, but they also take market perspectives into account, especially given the competitive landscape in which the EU sits between the US and China.
What will be new for European companies?
- For AI, there will be transparency obligations toward users regarding the AI systems they produce, distribute, import, or use, regardless of the risk category they fall into.
- For AI, there will be conformity assessments for high-risk systems, resulting in a declaration of conformity and the famous CE marking.
- For AI, there will be impact assessments regarding the fundamental rights of Europeans for high-risk AI systems.
- Developers of AI applications will have the possibility to participate in regulatory sandboxes together with national authorities.
- Implementers of AI systems will be required to designate a person responsible for overseeing the system if they exercise control over it.
Always be prepared...
Finally, a non-exhaustive list of things to consider implementing when it comes to digitizing or digitally transforming your business.
What do you need to do to adapt to these new legislations?
- Get informed. The regulations we discussed are in the final stages of adoption (those related to data are more advanced: the Data Governance Act is already adopted and the Data Act is yet to be adopted, while the AI Act is still in the final negotiation stages). But that doesn’t mean we should wait until they apply and scramble at the last minute to figure out what needs to be done.
- Analyze the state of your company in terms of data governance (what is collected and how, where it resides, where it flows, what is not collected) – and not just for personal data. For that, we already have GDPR, whose data provisions the new regulations complement.
- A piece of advice from EY regarding preparation for the AI Act: analyze the AI models you already use, but also consider what you plan to acquire. AI is expanding and trendy, but you need to consider what you will use it for and where. Consult the categories of high-risk AI systems here (the list may evolve) to know what kind of obligations you are heading toward, both as a developer and as a business user.
- Prepare your human resources to better use AI and to understand how it works and what automation bias is. Also, consider implementing human oversight of these systems, and don’t forget about transparency requirements: people – customers, employees – need to be informed when they interact with an AI system.
- Start collecting data, standardize it, try to understand its value both for your business and as a value in a data market.
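The data-governance analysis suggested above can start with something as simple as a machine-readable inventory. A hypothetical sketch (dataset names and fields are invented for illustration):

```python
# Hypothetical starting point for a data-governance review: one record
# per dataset, answering "what, how, where it resides, where it flows".
inventory = [
    {
        "dataset": "line3-qa-timings",
        "what": "time per QA check on production line 3",
        "collected_how": "sensor export, every shift",
        "resides_in": "on-prem SQL Server",
        "flows_to": ["monthly ops report"],
        "personal_data": False,  # personal data remains GDPR territory
    },
]

def blind_spots(inv: list) -> list:
    """Datasets that go nowhere: collected but never used or shared.
    These are the first candidates to either valorize on a data market
    or stop collecting altogether."""
    return [d["dataset"] for d in inv if not d["flows_to"]]
```

Even a spreadsheet with these columns answers most of the questions a Data Act or AI Act assessment will eventually ask.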