Significant development in the progression of the AI Act this week

This week there was a significant development in the progression of the AI Act. According to a report, the EU Council is close to finishing its compromise text and could reach an agreement next month.

Once finalized, the text will form the basis of the Council’s position in the trilogue negotiations between the EU’s three institutions, which are expected to start early next year.

Scope

  • Biometric recognition
  • General-purpose AI
  • Law enforcement
  • Innovation
  • Governance
  • High-risk AI systems

Transparency obligations

A new transparency obligation has been added, requiring providers of systems liable to cause significant harm to include the expected output in the instructions for use, where appropriate.

Penalties

The list of violations that could lead to fines of up to €20 million or 4% of annual turnover has been expanded to include breaches of the transparency obligations.

When calculating a sanction, additional criteria were added: the intentional or negligent nature of the violation, any attempts at mitigation, and whether similar violations have previously been sanctioned.

Common specifications

The approach followed on common specifications was aligned with the Machinery Regulation. In particular, the common specifications would be repealed once harmonized standards covering the same requirements are adopted.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators had difficulty interpreting AI well enough to regulate it appropriately, and customers didn’t understand how their data was being used inside the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It serves, in effect, as ethical insurance for companies and organisations.