The EU Artificial Intelligence Act—Regulation (EU) 2024/1689—is now law across the bloc. It was published on July 12, 2024, entered into force on August 1, 2024, and will apply in stages from 2025 to 2027, with most obligations taking effect on August 2, 2026. The European Parliament approved the Act on March 13, 2024, followed by the Council on May 21, 2024.

Why it matters to fashion tech: The Act installs a risk-based rulebook that reaches both software providers and retail brands deploying AI across design, merchandising, sizing, marketing, and HR. General-purpose AI (GPAI) models—the foundation models underpinning many commercial tools—face additional duties, including training-data summaries and a copyright-compliance policy.

The Risk Ladder—at a Glance

  • Unacceptable Risk (Banned): Manipulative or exploitative systems; biometric categorisation using sensitive traits; emotion recognition in workplaces and schools, among others. Non-compliance can draw fines up to €35 million (~$40.75 million) or 7 per cent of global turnover, whichever is higher.
  • High Risk: Uses listed in Annex III (e.g., employment/worker management, biometric identification/categorisation, education, access to critical services). These trigger lifecycle controls: risk management, data governance and quality, technical documentation, logging, transparency, human oversight, and requirements on accuracy, robustness, and cybersecurity.
  • Limited Risk: Transparency-only duties (e.g., informing people when they interact with an AI system; labelling deepfakes or AI-generated marketing content in specified contexts).
  • Minimal Risk: All other systems (no AI-Act-specific duties, while GDPR and consumer law continue to apply).
  • GPAI Add-ons: Providers of foundation models must publish a training-data summary and maintain a copyright policy—materials that are increasingly used in procurement diligence.
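For teams cataloguing systems against this ladder, the tiers can be sketched as a simple internal data structure. This is a hypothetical illustration only; the class names, fields, and example systems below are invented for the sketch and are not drawn from the Act's text:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the Act's risk ladder."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of a hypothetical internal AI inventory."""
    name: str
    business_process: str     # e.g. "sizing", "marketing", "HR"
    tier: RiskTier
    gpai_based: bool = False  # triggers vendor documentation requests

inventory = [
    AISystemRecord("fit-recommender", "sizing", RiskTier.LIMITED),
    AISystemRecord("cv-screening", "HR", RiskTier.HIGH),
    AISystemRecord("trend-board-gen", "design", RiskTier.MINIMAL, gpai_based=True),
]

# Split the inventory: full lifecycle controls vs. GPAI paperwork requests
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
needs_gpai_docs = [s.name for s in inventory if s.gpai_based]
```

Even a spreadsheet version of this structure supports the vendor-diligence questions that follow later in this piece.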

Mapping Fashion AI Use-Cases to Risk Categories

Design & Product Creation (Generative Design, Trend Boards, Copy and Imagery)
Creative assistants and content tools generally fall into limited or minimal risk. Transparency duties apply where users converse with bots and where synthetic media could be mistaken for real: providers must apply machine-readable markers to outputs, and deployers must label 'deepfake-like' assets they publish. Where tools rely on GPAI, providers carry the training-data summary and copyright policy obligations.

Demand Planning & Allocation (Forecasting, Replenishment, Pricing Support)
Back-office analytics typically fall under minimal risk. These uses are not listed in Annex III and therefore do not trigger the high-risk regime unless repurposed for worker-management decisions or deployed in safety-critical contexts. Standard MLOps and GDPR practices remain relevant; the AI Act adds no special controls beyond general transparency for direct human–AI interactions.

Sizing & Fit Recommendation (Body-Measurement Extraction, Virtual Try-On, Size Advice)
Prohibited territory is reached where biometric categorisation uses sensitive traits (e.g., health status, religion, race). Typical fit recommenders that infer measurements from images without sensitive-trait categorisation tend to sit in limited/minimal risk. Systems using biometric categorisation or emotion recognition attract transparency duties at minimum and may be high-risk if they match Annex III criteria.

Marketing & Customer Experience (Ad Targeting, Personalisation, Chat, Creative at Scale)
AI designed to materially distort consumer behaviour with risk of harm is prohibited; dark-pattern optimisation falls within this perimeter. Chatbots and assistants are generally limited risk and require disclosure to users. For AI-generated or edited promotional content, provider-side machine-readable markers and deployer-side labelling apply where content could be mistaken for real.

Important Carve-Out: HR Tech
Recruitment, worker evaluation, and scheduling are listed Annex III areas and therefore high risk. Deployers in scope face extra steps and, in certain cases, a Fundamental Rights Impact Assessment (FRIA) prior to first use.

What Applies When—Fashion Timeline

  • August 1, 2024 to February 1, 2025: Law in force; preparation phase.
  • From February 2, 2025: Prohibitions and general provisions apply (e.g., bans on manipulative AI, sensitive-trait biometric categorisation, and emotion recognition in workplaces/schools).
  • From August 2, 2025: Governance framework (AI Office, notified bodies), penalties regime, and GPAI duties begin.
  • From August 2, 2026: Most obligations—including those for high-risk systems—apply.
  • From August 2, 2027: Obligations for high-risk AI systems embedded in products regulated under EU product-safety law (Article 6(1)) take effect.
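Because obligations switch on by date, a compliance tracker can encode the schedule directly. The dates below come from the timeline above; the function name and labels are illustrative:

```python
from datetime import date

# Application dates from the Act's staged timeline
MILESTONES = {
    date(2025, 2, 2): "prohibitions and general provisions",
    date(2025, 8, 2): "governance, penalties, GPAI duties",
    date(2026, 8, 2): "most obligations, incl. high-risk systems",
    date(2027, 8, 2): "Article 6(1) product-embedded high-risk systems",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the obligation sets already applicable on a given date."""
    return [label for start, label in sorted(MILESTONES.items()) if on >= start]
```

For example, a review run in September 2026 would surface the first three sets but not yet the Article 6(1) obligations.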

Data-Governance Focal Points Appearing in Due Diligence

For any AI Procured or Built

  • Model identification, including whether a system is GPAI-based; provider documentation such as the training-data summary and copyright policy (with EU text-and-data-mining opt-out handling).
  • Output labelling plans: machine-readable marking for synthetic media and a process to label realistic AI-generated content in marketing workflows.
  • Prohibition screens: exclusion of emotion recognition in workplace/school settings, biometric categorisation by sensitive traits, and manipulative optimisation.
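The prohibition screen in particular lends itself to a programmatic intake check. This is a hypothetical sketch: the questionnaire keys are invented for illustration and would need to reflect an organisation's actual vendor questionnaire:

```python
def prohibition_screen(answers: dict[str, bool]) -> list[str]:
    """Return the reasons a proposed system fails the prohibition screen.

    `answers` is a hypothetical vendor questionnaire: each key is a
    yes/no question, True meaning the feature is present.
    """
    checks = {
        "emotion_recognition_workplace": "emotion recognition in workplace/school settings",
        "sensitive_trait_biometrics": "biometric categorisation by sensitive traits",
        "manipulative_optimisation": "manipulative or exploitative optimisation",
    }
    return [reason for key, reason in checks.items() if answers.get(key)]

# A system flagging workplace emotion recognition fails the screen
failures = prohibition_screen({"emotion_recognition_workplace": True,
                               "sensitive_trait_biometrics": False})
```

An empty result list means no prohibited feature was declared; it does not substitute for legal review of borderline designs.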

Where a Use May Be High Risk (e.g., HR/Worker-Management; Biometric Categorisation)

  • A risk-management system covering foreseeable risks and misuse across the lifecycle, with documented testing and metrics.
  • Data governance and quality: provenance, collection purpose, preparation, bias testing/mitigation, and representativeness for the intended population.
  • Technical documentation (Annex IV) and logging to ensure traceability.
  • Human oversight definitions: intervention and stop criteria, mitigation of automation bias, and competence/training requirements.
  • Accuracy/robustness/cybersecurity declarations and safeguards against data poisoning and adversarial inputs.
  • Deployer duties: notifying affected people when decisions concern them; performing FRIA where required and notifying the market authority of FRIA results.

Practical Mapping Guide—Typical Classifications in Fashion

  • Design Tools: limited/minimal risk; creative-ops flows increasingly add synthetic-media labelling and retain provider attestations (GPAI data summaries and copyright policies).
  • Demand Planning: minimal risk; documentation, privacy, and fairness-by-design practices remain standard even without high-risk controls.
  • Sizing/Fit: limited to high risk depending on biometric features; sensitive-trait categorisation is prohibited; transparency increases where biometric or emotion-recognition features are present; bias testing across body types is common practice.
  • Marketing/Personalisation: generally limited risk with prohibited edge cases; bot disclosure and deepfake labelling are becoming routine.

Penalties and Accountability
Violations of prohibited uses can attract fines up to €35 million (~$40.75 million) or 7 per cent of global turnover, whichever is higher. Non-compliance with other obligations can reach €15 million (~$17.46 million) or 3 per cent. Supplying misleading information can be penalised up to €7.5 million (~$8.73 million) or 1 per cent. For SMEs and start-ups, the lower of the two amounts applies. Governance attention increasingly centres on controls and documentation, not solely on tooling choices.
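Because each ceiling is the higher of a fixed amount and a share of worldwide turnover, the binding cap depends on company size. A quick arithmetic sketch (euro amounts from the Act; the function itself is illustrative):

```python
# (fixed cap in EUR, share of worldwide turnover) per violation tier
TIERS = {
    "prohibited_use": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the turnover share."""
    cap, share = TIERS[tier]
    return max(cap, share * global_turnover_eur)

# For a group with EUR 2bn turnover, the 7% limb (EUR 140m) exceeds the EUR 35m cap
ceiling = max_fine("prohibited_use", 2_000_000_000)
```

For a small vendor with, say, €100 million turnover, the fixed caps dominate instead, which is why the exposure conversation differs so much between brands and their suppliers.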

Sector Response Patterns
Across brands and retailers, common near-term priorities include: cataloguing AI by business process and mapping to risk categories; isolating Annex III-adjacent areas (notably HR and biometrics) for full high-risk controls; embedding transparency where customers or staff interact with AI and labelling synthetic content; requesting GPAI documentation from vendors (training-data summaries, copyright policies, model cards); and running AI-literacy programmes for teams using these systems.