EU AI Act: Deadlines and Technical Obligations for Spanish Companies
The clock is already ticking
Regulation (EU) 2024/1689 on artificial intelligence (the EU AI Act) entered into force on August 1, 2024. It is not a directive that each member state transposes at its discretion. It is a regulation with direct application. And its deadlines are not theoretical: the first prohibitions already apply as of February 2025.
For any European company that develops or deploys AI systems, the question is no longer “does this affect me?” but “what do I need to do, and by when?”
The calendar that matters
The EU AI Act has a phased rollout. These are the critical dates:
February 2, 2025 (already passed): Prohibition of unacceptable AI practices. Social scoring systems, subliminal manipulation, exploitation of vulnerabilities, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions) are prohibited. If your company operates any of these systems, you are already non-compliant.
August 2, 2025: Obligations for general-purpose AI models (GPAI). This directly affects anyone developing or fine-tuning foundation models. Requirements include technical documentation, copyright compliance policy, and a summary of training content.
August 2, 2026: Most obligations become applicable. This is when the bulk of the regulation activates: the risk classification rules, the requirements for high-risk systems under Annex III, transparency obligations, and regulatory sandboxes.
August 2, 2027: Full obligations for high-risk AI systems that are safety components of products covered by the harmonisation legislation in Annex I (toys, medical devices, aviation, vehicles, elevators, pressure equipment, machinery).
Risk classification: where your system falls
The EU AI Act classifies AI systems into four levels. The classification determines the obligations:
Unacceptable risk (prohibited): As mentioned above. If you believe your system might fall here, you need legal counsel immediately, not a technical article.
High risk (strict regulation): AI systems in the areas listed in Annex III: critical infrastructure, education and vocational training, employment, access to essential services, migration, the administration of justice, and democratic processes. Concrete Annex III examples include credit scoring, personnel selection, and insurance risk assessment.
Does your company use AI to filter CVs? High risk. To evaluate client creditworthiness? High risk. To prioritize support tickets? Probably not, but it depends on the implementation.
Limited risk (transparency obligations): Chatbots, deepfakes, content generation systems. The primary obligation is informing the user that they are interacting with AI or that content was artificially generated.
Minimal risk (no specific obligations): The vast majority of AI applications. Spam filters, product recommendations, logistics route optimization. No specific regulatory obligations, although good practices always apply.
What you actually need to do
If your system is high risk
The obligations are substantial. Summarizing Articles 8 through 15 of the Regulation:
Risk management system (Art. 9): A continuous process (not a one-time document) for identifying, assessing, and mitigating AI system risks. It must cover known and reasonably foreseeable risks, with documented mitigation measures.
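What a "continuous process" looks like in practice is left to you. One minimal approach is a living risk register with scheduled re-assessments; the sketch below is an assumption about how to structure it (all field names and the example entry are illustrative), not a format the Regulation prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk in the Art. 9 risk management system."""
    description: str   # known or reasonably foreseeable risk
    severity: str      # illustrative scale: low / medium / high
    likelihood: str
    mitigation: str    # documented mitigation measure
    owner: str         # who is accountable for this risk
    next_review: date  # continuous process => scheduled re-assessment
    status: str = "open"

register = [
    RiskEntry(
        description="Training data under-represents applicants over 55",
        severity="high",
        likelihood="medium",
        mitigation="Re-sample dataset; add disaggregated evaluation by age band",
        owner="ml-lead",
        next_review=date(2026, 1, 15),
    ),
]

# An open entry past its review date means the "continuous process" has stalled.
overdue = [r for r in register if r.status == "open" and r.next_review < date.today()]
print(f"{len(overdue)} risk(s) overdue for review")
```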
Data governance (Art. 10): Training datasets must be relevant, representative, and, as far as possible, free of errors. Data design decisions, known biases, and measures adopted to mitigate them must be documented.
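A crude but useful first pass at documenting representation is to measure how each group is actually distributed in the training data. The sketch below is illustrative: the attribute name and the 5% threshold are assumptions, and real bias analysis needs disaggregated, domain-specific evaluation.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.05):
    """Flag groups whose share of the dataset falls below `min_share`.

    A first pass at the Art. 10 duty to document known biases, nothing more.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3), "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

# Toy data -- the field name and threshold are illustrative assumptions.
dataset = ([{"age_band": "18-34"}] * 700
           + [{"age_band": "35-54"}] * 280
           + [{"age_band": "55+"}] * 20)
print(representation_report(dataset, "age_band"))
```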
Technical documentation (Art. 11): Before marketing the system, you must produce technical documentation demonstrating compliance. This includes: general system description, design elements, development process, capabilities and limitations, performance metrics, and risk mitigation measures.
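One pragmatic pattern is to keep this documentation as structured data in the repository so it versions alongside the system. The skeleton below simply mirrors the items listed above; every value is an illustrative placeholder, not the official Annex IV template.

```python
# A sketch of versionable documentation mirroring the items Art. 11 asks for.
# Every value below is an illustrative placeholder, not a real system.
technical_documentation = {
    "general_description": "CV pre-screening assistant for recruiters",
    "intended_purpose": "rank applications; the hiring decision stays human",
    "design_elements": {
        "architecture": "gradient-boosted ranking model",
        "inputs": ["cv_text", "job_description"],
        "outputs": ["relevance_score"],
    },
    "development_process": {
        "training_data_version": "2025-06",
        "validation_protocol": "temporal holdout",
    },
    "capabilities_and_limitations": ["not validated for CVs outside Spanish/English"],
    "performance_metrics": {"auc": 0.87, "max_subgroup_gap": 0.04},
    "risk_mitigation_measures": ["human review of every automated rejection"],
}
```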
Automatic logging (Art. 12): The system must generate logs enabling traceability. Who used it, when, what data it processed, what result it produced. Logs must be retained for an appropriate period relative to the system’s purpose.
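A minimal sketch of what such a log could look like, assuming a JSON-lines audit file and storing a reference to the input rather than the raw payload (which also helps with GDPR data minimisation). The field names are assumptions, not prescribed by the Regulation.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.jsonl"))

def log_inference(user_id: str, model_version: str, input_ref: str, output: dict):
    """Append one traceability record: who, when, what data, what result."""
    audit.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "input_ref": input_ref,  # pointer to the stored input, not the payload
        "output": output,
    }))

log_inference("recruiter-42", "cv-ranker-1.3.0", "s3://bucket/inputs/abc", {"score": 0.71})
```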
Transparency (Art. 13): Clear usage instructions for the deployer. System capabilities and limitations, accuracy level and performance metrics, known risks, and input data specifications.
Human oversight (Art. 14): The system must be designed so that people can supervise it. This does not necessarily mean “a human approves every decision,” but there must be mechanisms for an operator to intervene, correct, or stop the system.
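Confidence gating is one common way to implement this: decisions below a threshold are held for a human, and a kill switch lets an operator stop the system entirely. Art. 14 does not mandate this exact pattern; a minimal sketch, with an assumed threshold of 0.9:

```python
SYSTEM_ENABLED = True  # a kill switch covers the "stop the system" requirement

def decide_with_oversight(score: float, auto_threshold: float = 0.9) -> dict:
    """Route low-confidence decisions to a human reviewer instead of acting."""
    if not SYSTEM_ENABLED:
        raise RuntimeError("AI system disabled by operator")
    if score >= auto_threshold:
        return {"action": "auto_approve", "needs_human_review": False}
    return {"action": "hold", "needs_human_review": True}  # a person decides

print(decide_with_oversight(0.95))  # auto path
print(decide_with_oversight(0.60))  # held for human review
```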
Accuracy, robustness, and cybersecurity (Art. 15): Appropriate levels of accuracy (documented and communicated), robustness against errors and adversarial attacks, and cybersecurity measures proportional to the risk.
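As a toy illustration of robustness testing, the sketch below measures how often a model's decision flips under small random perturbations of the input. It is a smoke test under assumed noise parameters, not a substitute for a proper adversarial evaluation.

```python
import math
import random

def robustness_smoke_test(predict, inputs, noise=0.01, trials=20):
    """Fraction of inputs whose decision flips under small perturbations.

    `predict` is any callable returning a score; decisions use a 0.5 cutoff.
    """
    unstable = 0
    for x in inputs:
        base = predict(x) >= 0.5
        for _ in range(trials):
            noisy = [v + random.gauss(0, noise) for v in x]
            if (predict(noisy) >= 0.5) != base:
                unstable += 1
                break
    return unstable / len(inputs)

# Trivial linear scorer, illustrative only.
def model(x):
    return 1 / (1 + math.exp(-(0.8 * x[0] - 0.3 * x[1])))

print(robustness_smoke_test(model, [[0.2, 0.1], [0.9, 0.4], [0.02, 0.05]]))
```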
If you use GPAI models (foundation models)
If you develop or fine-tune general-purpose models (anything built on GPT, Llama, Mistral, and the like), as of August 2, 2025 you need:
- Technical documentation of the model per Annex XI
- Copyright compliance policy per Directive 2019/790
- Detailed summary of training content
If the model poses "systemic risk" (more than 10^25 floating-point operations, FLOPs, of training compute, or designation by the Commission), the obligations are greater still: systematic risk assessment, adversarial testing, serious incident reporting, and cybersecurity measures.
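To build intuition for the 10^25 FLOP threshold, a widely used back-of-the-envelope estimate puts transformer training compute at roughly 6 × parameters × training tokens. The parameter and token counts below are illustrative assumptions, not figures for any real model.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # the presumption in the Regulation

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (400e9, 15e12)]:
    flops = training_flops(params, tokens)
    flag = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {flops:.1e} FLOPs ({flag} threshold)")
```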
For most European companies using APIs from OpenAI, Anthropic, or Google, the GPAI model provider obligations fall on the model developer, not the user. But if you fine-tune or build high-risk systems on top of those models, you are responsible for the documentation of your complete system.
If your system is limited risk
Primary obligation: transparency. If your chatbot converses with customers, it must disclose that it is an AI system. If you generate synthetic images or video, they must be clearly marked. If you use emotion detection or biometric categorization systems, you must inform the subjects.
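Two minimal sketches of what these disclosures could look like in code; the message wording, field names, and generator label are all assumptions, not text the Regulation prescribes.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant."

def start_chat_session() -> dict:
    """Surface the disclosure before the first AI-generated reply."""
    return {"role": "system_notice", "text": AI_DISCLOSURE}

def label_generated_image(metadata: dict) -> dict:
    """Attach a machine-readable 'artificially generated' marker.

    Field names here are assumptions; in production you would follow a
    provenance standard such as C2PA content credentials.
    """
    return {**metadata, "ai_generated": True, "generator": "image-model-x"}

print(start_chat_session())
print(label_generated_image({"width": 1024, "height": 1024}))
```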
Penalties
Fines are proportional to company size, but the maximums are significant:
- Prohibited practices: up to EUR 35 million or 7% of global annual turnover, whichever is higher
- Non-compliance with high-risk obligations: up to EUR 15 million or 3%
- Incorrect information to authorities: up to EUR 7.5 million or 1%
For SMEs and startups, each cap is instead the lower of the two amounts. But even the smallest fine can be existential for a 50-person company.
Three steps to start today
If your company develops or deploys AI systems and you have not started the compliance assessment:
- Inventory your AI systems (this week). Everything using machine learning, deep learning, or complex rule-based systems. Include third-party APIs. Classify each according to the risk pyramid (a minimal inventory sketch follows this list).
- Prioritize by risk (this month). High-risk systems need immediate attention. Technical documentation is not written over a weekend. The risk management system is a continuous process that needs time to mature.
- Establish governance (this quarter). Assign internal responsibilities. The AI Act does not require a formal "AI Officer," but someone must be accountable for compliance. In an SME, this could be the CTO. In a larger organization, you need a cross-functional team.
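As promised in step 1, a minimal inventory sketch. Everything here (the record fields, the example systems, the tier labels) is an illustrative assumption; the point is that a classified, owned inventory makes step 2 almost automatic.

```python
from dataclasses import dataclass

TIERS = ["unacceptable", "high", "limited", "minimal"]

@dataclass
class AISystemRecord:
    """One row of the AI inventory -- all field names are assumptions."""
    name: str
    purpose: str
    technique: str  # ML, deep learning, rules, third-party API...
    provider: str   # in-house or vendor
    risk_tier: str  # one of TIERS, per the risk pyramid above
    owner: str      # who answers for its compliance

inventory = [
    AISystemRecord("cv-screening", "pre-rank job applications",
                   "gradient boosting", "in-house", "high", "cto"),
    AISystemRecord("support-chatbot", "answer customer questions",
                   "third-party API", "vendor", "limited", "support-lead"),
    AISystemRecord("spam-filter", "filter inbound email",
                   "ML classifier", "vendor", "minimal", "it-ops"),
]

# Step 2 falls out of step 1: sort the inventory so high risk comes first.
for record in sorted(inventory, key=lambda r: TIERS.index(r.risk_tier)):
    print(f"[{record.risk_tier:>7}] {record.name} -> owner: {record.owner}")
```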
The EU AI Act is not the end of AI in Europe. It is the formalization of practices that every responsible organization should already follow. Documenting systems, managing risks, being transparent with users. The regulation simply adds deadlines and penalties to what should be common sense.
If you need help assessing the EU AI Act’s impact on your organization, our consulting team can perform a compliance assessment and define the technical roadmap. For companies developing AI and machine learning solutions, we offer specific guidance on technical documentation and risk management. See our technology solutions for more detail.
About the author
abemon engineering
Engineering team
Multidisciplinary engineering, data and AI team headquartered in the Canary Islands. We build, deploy and operate custom software solutions for companies at any scale.
