
Europe Needs a New Approach to Regulating Artificial Intelligence

The European Commission is currently working on a set of guidelines to determine how humans and AI interact, which it has promised to publish by the end of the year.

By Yann Leretaille

Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur Europe, an international franchise of Entrepreneur Media.


When many people think about artificial intelligence, they picture a science-fiction scenario: Terminator, Her or 2001. Groundbreaking sci-fi author Isaac Asimov's "laws of robotics" are often trotted out as a possible template for future legislation.

Related: Why Europe Will Come Out on Top in the Tech Race Between the U.S. and China

AI can be broadly defined as technology that mimics human-like intelligence. However, while we are a long way from the artificial sentience of Hollywood blockbusters and pulp fiction, Asimov's first law isn't actually a bad place to start: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Indeed, Mady Delvaux, the member of the European Parliament who authored its report on artificial intelligence, noted these laws in her original report.

This has led many approaches to regulating AI to focus on liability and consumer protection, which are of course hugely important, but are not the whole story.

While it is a step in the right direction that Europe is focusing on creating these guidelines, our ethics-in-tech organization, the Good Technology Collective (GTC), believes that we also need to focus on measures of accountability that no one is yet talking about. For instance, what use are regulations if there is no real accountability? Some international technology companies find it easy to evade or avoid regulations that they consider onerous. Lawmakers need to start asking how they can ensure that these companies are held accountable.

Technology is already creating a lot of issues today, even before we throw widespread AI into the mix, including, but not limited to, tax evasion, failures of consumer protection, monopolistic structures, gambling, and the negative social impact we are seeing with fake news and elections. We need to consider the broader impact of technology on human life before it becomes a human rights issue.

If I build a bridge that is not safe, I'm held accountable. Journalists and doctors are self-governed by a strong set of ethical and moral codes that they are supposed to adhere to. But, that doesn't seem to be the case for tech. Regulation is already lagging decades behind. Tackling the problem of a theoretical robot uprising in the far future doesn't fix the present.

Related: European Businesses Need to Embrace Automation Before It's Too Late

The GTC believes that Europe needs to think about pushing for regulation that is global in scope, and ethics must play an integral part of that -- why not conceive of a UN charter for the interaction of humans and technology, or an international "ethics-in-tech" judicial organization like the International Criminal Court?

Of course, we believe that good regulation can help. For example, GDPR has already had a large impact in exposing security breaches, massive leaks and data-selling practices. But, we need to think bigger.

In a report published in September, the European Economic and Social Committee (EESC) called on the Commission to "define the relationship between humans and machines, how autonomous the latter can be, and how they will complement the work of human beings," and urged it to address these questions in the ethical guidelines.

"The EU needs to ensure that the AI revolution does not endanger the quality of work in Europe. Interactions between workers and machines must be regulated in such a way that humans never become underlings to machines," the EESC said.

Of course, no one is realistically arguing that AI, machine learning and decision-making algorithms don't have the potential for huge benefits to society: AI can relieve workers of boring, repetitive and dangerous tasks; speed up medical diagnosis; cut out redundancies in manufacturing and even enhance our social time just by suggesting the right song.

But, there are wider dangers, too -- and not just the risk of being hit by an autonomous car or malfunctioning drone. Applying opaque algorithms to sensitive decisions such as sentencing, recruitment, access to health care, or immigration and asylum questions runs the risk of amplifying discrimination.

Related: Busting Myths About Europe's Tech Sector

Back in May, Access Now and Amnesty International launched a joint declaration on human rights and AI at RightsCon 2018. The declaration focuses on the right to equality and non-discrimination and is also endorsed by Human Rights Watch and the Wikimedia Foundation.

Estelle Massé, senior policy analyst at Access Now, explained that both the public and private sectors have to work to promote and respect human rights in the digital age, particularly in light of "growing evidence of the discriminatory harms by machine learning systems."

While the declaration aims to establish itself as a widely accepted global ethics framework, legally binding instruments are lagging behind in most European countries. There is one good and clear reason for this: Most politicians do not understand AI or machine learning. The big tech companies have, with some validity, argued that governments should not rush to regulate something they -- and increasingly even the technologists -- don't understand.

The European Commission, for its part, wants the EU to become the world leader in ethically responsible AI. It sees AI as having huge potential to boost the EU economy if it can be "put ... at the service of European citizens and boost Europe's competitiveness, while guaranteeing highest European standards for personal data protection."

But that costs money. In April, Vice President for the Digital Single Market Andrus Ansip said the EU needs to invest at least €20 billion by the end of 2020. Putting its money where its mouth is, the Commission is increasing its investment to €1.5 billion for the period 2018-2020 under the Horizon 2020 research and innovation program. This investment is expected to trigger an additional €2.5 billion of funding from existing public-private partnerships, for example on big data and robotics.

The EU does not have the same level of funding as the United States or China, and this needs to be rectified, Delvaux pointed out.

Related: Why Europe Is Facing a Digital Skills Crisis

The Commission says it is encouraging member states to modernize their education and training systems and support labor market transitions, building on the European Pillar of Social Rights.

"As with any transformative technology, artificial intelligence may raise new ethical and legal questions, related to liability or potentially biased decision-making. New technologies should not mean new values. The Commission will present ethical guidelines on AI development by the end of 2018, based on the EU's Charter of Fundamental Rights, taking into account principles such as data protection and transparency, and building on the work of the European Group on Ethics in Science and New Technologies," said the Commission.

However, Pekka Ala-Pietilä, the former Nokia president and tech entrepreneur who is heading up the Commission's expert group on AI, is known to take the view that no regulation is better than bad regulation, so the guidelines will be just that -- guides for how to behave, not fixed rules with penalties.

But, AI is a general-purpose technology, and its development already falls under other legislative proposals. The GDPR already sets out the rules for data-driven innovation, and the Commission has also vowed to update the interpretation of the Product Liability Directive by mid-2019 in the light of technological developments.

"The EU has to grasp immediately to ensure it is setting the global standard, not following it. We are already seeing different member states across Europe adopting national legislation which has endangered our cohesion and risks fracturing our market. The European Commission has promised to come forward with legislative packages next year, but this is already getting too late," warned Delvaux.

The European Commission has identified several key consumer concerns, such as the risk of discrimination, lack of information and loss of control over data.

European consumer rights group BEUC has called on the Commission to implement some specific rules including: "A set of transparency obligations to make sure consumers are informed when using AI-based products and services, particularly about the functioning of the algorithms involved and rights to object to automated decisions; specific safety standards for AI products and adequate powers for market surveillance authorities so that unsafe or potentially insecure AI products or services are not placed on the market; and updated rules on product liability to ensure that consumers are better protected if they suffer damage or harm because of products running on AI technology."

But, if the EU really wants to become a global leader, it needs to start fixing the issues that we already face with tech giants today. We must address existing problems before we start thinking about the future. And whatever solutions we come up with, they must be evidence-based. Otherwise, we will end up with regulation that doesn't make sense, supports tech giants in their unrestrained growth, and cripples innovation.

Yann Leretaille

Co-Founder and CTO of 1aim

Yann Leretaille is a co-founder and the CTO of 1aim, a full-stack AI networking platform. He is also a founding member of the Good Technology Collective, a European think tank addressing pressing issues at the intersection of frontier technologies and society.