Cybersecurity Requirements for Connected Products – the Cyber Resilience Act Proposal

Cybersecurity is one of the European Commission's top priorities and a cornerstone of a digital and connected Europe. An increase in cyber-attacks during the coronavirus crisis has shown how important it is to protect hospitals, research institutions and other critical infrastructure.

In parallel with the EU's legislative efforts on Responsible AI and the upcoming AI Act, the European Commission is now moving ahead with a proposal for a new Cyber Resilience Act to protect consumers and businesses from products with inadequate security features. The proposal is based on the New Legislative Framework for EU product legislation and aims to safeguard consumers and businesses buying or using products or software with a digital component. The Cyber Resilience Act would address inadequate security features by introducing mandatory cybersecurity requirements for manufacturers and retailers of such products, with protection extending throughout the product lifecycle.

In the words of Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age:
“We deserve to feel safe with the products we buy in the single market. Just as we can trust a toy or a fridge with a CE marking, the Cyber Resilience Act will ensure the connected objects and software we buy comply with strong cybersecurity safeguards. It will put the responsibility where it belongs, with those that place the products on the market.”

As with Responsible AI and the AI Act, the Cyber Resilience Act is likely to become an international point of reference beyond the EU's internal market. EU standards based on the Cyber Resilience Act will facilitate its implementation and will be an asset for the EU cybersecurity industry in global markets.

Anna Felländer

It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.

Anna's multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It serves as ethical insurance for companies and organisations.