The world's first comprehensive AI law: what it means for Ukraine
01 May 2026 18:35

AI has long since ceased to be merely a topic of science fiction. Today, it is part of our daily routine: it filters our email, recommends movies for our evening viewing, translates texts, and even helps doctors make diagnoses. According to McKinsey estimates, generative AI could contribute up to $4.4 trillion to the global economy each year. That’s more than the entire GDP of the United Kingdom. And indeed, AI has made all kinds of processes significantly faster.
However, where there are limitless possibilities, there are also major threats to privacy, security, and even ethics. Recognizing this, the European Union has developed the Artificial Intelligence Act to regulate the use of this technology within the EU. At the same time, the new rules create new obligations for IT companies, including Ukrainian businesses.
What will this law change? What are the main requirements of the AI Act for companies working with AI in Europe? And how will the AI Act affect the Ukrainian IT sector?
What is this law?

Before delving into the requirements and details, it’s important to understand what this law is and when it was adopted. The AI Act is the world’s first law to systematically regulate the development and use of artificial intelligence. It was adopted by the European Parliament in March 2024 and entered into force in August of the same year.
The law is being implemented in phases: the first restrictions took effect as early as February 2025, and the full list of requirements for companies will take effect in August 2026.
The logic behind the law is simple: the greater the potential impact of a system on humans, the stricter the requirements for it.
The law divides all AI systems into four risk categories:
- Minimal — most of the familiar tools we interact with daily: spam filters, recommendation algorithms on Netflix or YouTube, games, and basic automation tools. They pose no significant threat to humans, so there are no mandatory requirements for them.
- Limited — chatbots, deepfakes, systems that generate text, images, or video. There is a risk here, but it is manageable. There is one main requirement: the user must always know that they are communicating with a machine, not a human (or that the content is artificially generated).
- High — the largest and most important category for business. This includes systems used in medicine, education, recruitment, credit scoring, critical infrastructure management, and the justice system. Such systems require clear instructions and monitoring of potential risks: employees must continuously monitor their operation and assess how it impacts people’s rights.
- Unacceptable — systems that pose a threat to fundamental human rights. This includes social scoring systems that evaluate people based on their behavior or personal traits; manipulative algorithms that can cause psychological harm; and biometric technologies that identify people by sensitive characteristics (such as fingerprints or facial features). Real-time facial recognition in public places is also banned, with only narrowly defined law-enforcement exceptions. Otherwise, the use of these technologies is prohibited outright.
Here are some examples:
- If your product is a recommendation algorithm or a spam filter, the AI Act practically does not apply to your company.
- If you’re developing a chatbot, you must inform users that they’re communicating with a machine, not a human.
- If your system influences decisions in medicine, finance, or HR—be prepared for extensive documentation, audits, and legal liability.
- And if your product falls into the category of unacceptable risk—you’ll simply have to shut it down.
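As a rough illustration, the tiered logic above can be sketched as a simple lookup. This is a minimal sketch: the tier assignments mirror the examples in the article, and names such as `classify_use_case` are invented for illustration, not taken from the Act.

```python
# The four AI Act risk tiers as a lookup table, using examples from the text.
RISK_TIERS = {
    "spam filter":           "minimal",       # no mandatory requirements
    "recommendation engine": "minimal",
    "chatbot":               "limited",       # must disclose it is a machine
    "deepfake generator":    "limited",       # output must be labeled as AI-generated
    "credit scoring":        "high",          # documentation, audits, human oversight
    "recruitment screening": "high",
    "social scoring":        "unacceptable",  # banned outright
}

def classify_use_case(use_case: str) -> str:
    # Default to "high" so an unknown system gets reviewed rather than waved through.
    return RISK_TIERS.get(use_case, "high")

print(classify_use_case("chatbot"))         # limited
print(classify_use_case("social scoring"))  # unacceptable
```

Defaulting unknown systems to "high" reflects the audit-first mindset the law encourages: assume the stricter tier until the classification is confirmed.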
The key thing to know is that the law applies to anyone selling or providing AI services in the European market, regardless of where the company is located. In this respect it works just like the GDPR, the EU’s General Data Protection Regulation, which requires any company in the world that handles EU citizens’ data to comply with European rules, with fines of up to €20 million or 4% of global turnover for non-compliance.
So, as they say, if you work with European clients, follow European rules.
What penalties are imposed for violating this law?
The fines are, in fact, very high.
If a company violates the rules in the “unacceptable risk” category, it must pay a fine of up to €35 million or 7% of its annual global turnover.
In the case of a violation in the “high risk” category, the fine will be up to €15 million or 3% of annual global turnover.
If the rules for the “limited risk” category are violated, the fine can reach €7.5 million or 1% of annual global turnover.
Additionally, in each case, whichever amount is higher applies. For large corporations, this can mean billions. For Ukrainian medium-sized businesses, the figures are smaller but still capable of destroying a company.

What about ChatGPT, Claude, Google Gemini, and other models?
There are rules for them as well. Companies must explain what data their systems were trained on and must not infringe on copyrights. The most powerful models will undergo additional safety checks. And all content automatically generated by AI (deepfakes, chatbots, generated images, and videos) must be labeled. Users must know that a machine created it.
Additionally, starting in February 2025, companies must train employees who work with artificial intelligence to understand this technology and its potential risks.
How will this law affect Ukraine?
Calling the AI Act “Brussels’ problem” is possible only if one is unaware of how deeply the Ukrainian IT sector depends on the European market.
Ukraine is currently one of the most powerful tech hubs in Central and Eastern Europe. By the end of 2024, our IT sector exported $6.4 billion in services, accounting for about 37% of the country’s total service exports. The outsourcing market is estimated at $1.2 billion in 2025 and continues to grow despite the war. Clients include companies from Germany, the Netherlands, the UK, and the Baltic states. And this puts Ukraine squarely within the scope of the AI Act.
In the AI segment, Ukraine ranks second among Central and Eastern European countries in terms of the number of AI companies. As of 2025, there were over 240 such companies. The Ukrainian artificial intelligence market reached $419.4 million, and the number of AI/ML specialists has grown fivefold over the past decade and now stands at 5,200 people.
In fact, the AI Act affects Ukraine in several ways simultaneously:
- Outsourcing companies that develop AI for European clients must comply with the requirements of this law.
- Startups looking to enter the EU market must incorporate these requirements into their products from the very beginning to avoid paying significantly more later on.
- Companies registered in the EU through legal structures in Poland, Estonia, or Lithuania (so-called “shell companies”) must also comply with the AI Act.
- Freelancers providing AI services to businesses in the EU must understand the risks their solutions entail.
It is also worth noting that Ukraine is currently moving toward European integration. The adoption of laws that will meet the requirements of the AI Act is only a matter of time. The Ministry of Digital Transformation has already announced its intention to make digital technologies compatible with the EU, and some documents in the field of AI (such as the Concept for the Development of Artificial Intelligence in Ukraine) have already been drafted with European standards in mind.

What media expert Oleksandr Glushchenko thinks about the AI Act
UA.NEWS spoke with media expert Oleksandr Glushchenko about this law and whether the Ukrainian IT business is ready for it. The expert says that Ukrainian entrepreneurs are most shocked by three things: the size of the fines, the bureaucratic burden during the R&D phase, and the strict requirements for datasets.
“The fines are draconian and exceed those under the GDPR,” adds Glushchenko.
Bureaucracy, he says, begins even before the first line of code is written:
“For high-risk systems, you need to create a risk management system, maintain automatic event logging, and write voluminous technical documentation. And all of this—before you’ve even started development.”
And data requirements are a particular headache:
“The developer must prove that the datasets used to train the model are relevant, error-free, and representative—that is, free of bias.”
In practice, according to the expert, this means that a Ukrainian startup that has collected millions of photos from the internet to train its system must prove that it has the right to use these photos and that the sample is free of racial bias. And a software company seeking to enter the EU market must now hire a separate auditing organization (a so-called Notified Body) to verify the code and architecture and authorize the placement of the CE mark on the software.
“This is a process that used to apply mainly to physical goods—toys, electronics. Now it applies to software code,” says Glushchenko.
According to Glushchenko, the market in Ukraine has split into two categories. Major players such as Grammarly, MacPaw, SoftServe, Ciklum, and GlobalLogic are already actively preparing for the law’s implementation. They are forming internal AI ethics committees, hiring AI Compliance Officers, and conducting an inventory of all their AI products. Small and medium-sized businesses are at the highest risk.
“Most have either heard about the law in passing or believe it doesn’t apply to them—‘we just use the OpenAI API.’”
However, in the long term, the expert sees a major advantage in the law:
“Instead of trying to comply with 27 different laws in every EU country, a Ukrainian company will adapt to a single standard. Companies that learn compliance now will gain a huge competitive advantage in the outsourcing market.
Additionally, in 2024, Ukraine’s Ministry of Digital Transformation presented the ‘White Paper on AI Regulation in Ukraine,’ which provides for full alignment with the AI Act.”
He also gives a specific example: a hypothetical German bank is choosing between two vendors to develop an AI-based credit scoring system. The Indian company offers a lower price, but the Ukrainian one offers an architecture with built-in compliance and ready-made documentation templates.
“The German bank is guaranteed to choose the Ukrainians to avoid fines,” the expert adds.
Glushchenko sees a separate opportunity in a new niche—AI Legal Tech.
“A whole new industry is emerging. Ukrainian lawyers and engineers can create startups that automate the verification of other AI models for compliance with European legislation.”
Strategically, he is betting that Ukraine could become a testing ground for AI products from European corporations:
“Since launching high-risk systems in the EU will be accompanied by heavy bureaucracy, Ukraine (with special legal regimes like ‘Diya.City’ and planned ‘regulatory sandboxes’) can offer European companies an ideal testing ground. It will be possible to develop and test AI in Ukraine (under clear rules, but without draconian European fines during the testing phase), and bring it to market as a ready, calibrated product.”
When asked whether we will be happy with this law in three years, Glushchenko gives a positive verdict:
“The so-called ‘Brussels effect’ will kick in—when EU laws, due to the size of the European market, effectively become global standards. The situation will be exactly the same as with the implementation of the GDPR (General Data Protection Regulation) in 2018.
In the first 1–2 years after full implementation (2025–2026), there was and still is a lot of frustration. Companies complain about costs, bureaucracy, release delays, and the ‘death of innovation.’ Some startups are forced to shut down or change their business model. But in three years (closer to 2027–2028), everyone will be happy with this law.”
Four steps Glushchenko recommends for the IT sector
According to the expert, startups should integrate the principles of “Trustworthy AI” into their processes right now. Glushchenko highlights four main steps:
- Conduct an audit using the risk classifier: Honestly assess your product and determine which of the four categories under the AI Act it falls into.
- Implement the ISO/IEC 42001 standard (AI Management System): Although the AI Act does not explicitly require this standard at the moment, compliance with it effectively covers 80% of the European regulator’s requirements for process documentation.
- Organize your data (Data mapping): Document where the data used to train models comes from, who owns it, and how it is processed.
- Ensure the "Human-in-the-loop" principle: For critical decisions, always allow a human operator to review or override the AI’s decision (Article 14).
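The last step can be illustrated with a minimal sketch of the human-in-the-loop pattern: the model only proposes a decision, and a human reviewer must confirm or override it before it takes effect. Names such as `CreditDecision` and `human_review` are hypothetical, invented for this example, not taken from the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreditDecision:
    applicant_id: str
    model_verdict: str                   # what the AI proposed, e.g. "reject"
    final_verdict: Optional[str] = None  # set only after human review
    reviewed_by: Optional[str] = None

def human_review(decision: CreditDecision, reviewer: str,
                 override: Optional[str] = None) -> CreditDecision:
    """The human either accepts the model's proposal or records an override."""
    decision.final_verdict = override or decision.model_verdict
    decision.reviewed_by = reviewer
    return decision

# The model proposed "reject"; the human reviewer overrides to "approve".
d = human_review(CreditDecision("A-17", "reject"), reviewer="officer-3", override="approve")
print(d.final_verdict)  # approve
```

The point of the pattern is that `final_verdict` is never populated by the model alone: a named human is always recorded as the last decision-maker, which is exactly the audit trail a high-risk system needs.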

The gray area is closed. The race has begun
The AI Act marks the beginning of a new order, in which responsibility for algorithmic decisions is legally established and quantifiable.
For Ukraine, with its powerful yet vulnerable IT sector, this is both a challenge and an opportunity. A challenge—because compliance costs time, money, and skilled personnel, which are in short supply. An opportunity—because companies that adapt before others will gain a competitive advantage in a market where trust in technology is becoming a strategic asset.
In Brussels, startups are already competing to be the first to obtain certification and secure a place in the new market. In Kyiv, Lviv, and Kharkiv, this race is just beginning.