Luke Robinson on Synth-ethics: What happens when AI technology companies know us better than we know ourselves? This power can be used against us!

Author:

  • Karla Juničić

25.10.2019.

Zagreb, 11.10.2019.
Kras auditorium.
Interview with Luke Robinson, academic expert and researcher in the field of ethical artificial intelligence.
Photo: Srdjan Vrancic / CROPIX

Luke Robinson, research fellow at Cambridge and Oxford, partner at AI company Post Urban Ventures

Synth-ethics, a popular name for the application of ethics and morality to new innovations and artificial intelligence (AI), is gaining importance as human interests - people's privacy and control rights - become a necessary part of the development of new technologies.

Luke Robinson, an academic expert and currently a partner at the artificial intelligence technology company Post Urban Ventures, believes that artificial intelligence has huge potential to give us greater autonomy, a new degree of freedom. On the other hand, it can distance us from each other in sociological terms, so we need to be careful about how we use it.

- As people give ever more private data to companies and become ever more dependent on support from AI-driven technologies, power over people and groups is likely to concentrate in the businesses with access to the most people and data - Robinson said in an interview with Euractiv.hr.

- But we know this power can be used against us!

1. What are the different ethical issues in AI? Can you describe them?

Ethical dilemmas often occur when things we value get set against each other. These values could be individual good vs collective good, short term vs long term value, local vs global values, new vs old and returns vs risks.

Today, using human intelligence to make decisions has a high value in the global economy, but this may not always be the case. Bit by bit, AI and machine learning models are being trained to reduce the economy's reliance on expensive human intelligence in more and more business processes. Transferring human intelligence to machines commoditises it: the intelligence that gets transferred to a machine can then be replicated and served around the world at the speed of light. This massively drives down the cost of using this form of intelligence in the economy, making many things significantly cheaper and faster, but mistakes will also be made, and those human jobs and skills will become redundant. The people whose skills are no longer needed will have to be retrained, which will take money and time and will put great stress on the economy and on social institutions.

 

There are many simple ways that AI can bring value to the economy and to human health and wellbeing, but fake news and targeted advertising are already impacting our lives, and we really need to think deeply about what happens when AI technology owned by companies knows us better than we know ourselves. What happens to our independence, trust and social systems when everything is grey, when humans no longer have the ability to properly distinguish what is real and what is not?

Google search devalued remembering simple information. What happens when we don’t have to search for answers, when answers and suggestions come to us, when everything is tailored to us, when an army of algorithms is trying to meet our every need ever faster? When we give up control, what is lost? When everything is personalised, what will we have in common? What are we willing to lose control of? What will the impact of dependence be?


Luke Robinson during a lecture at the AI2FUTURE Artificial Intelligence Conference

 

2. The hierarchy of labour is concerned primarily with automation. To what extent will the job market be affected by the development of AI?

The job market is going to change radically, and that is going to put huge pressure on education systems and the way we educate people. We will need to reskill people, and governments will have to fund that reskilling, because automation and the transfer of skills from humans to AI will obviously create huge unemployment issues. One of the next movements in AI will be human-AI collaboration, which can be loosely understood as humans training AI to do what humans do. Once the economy transfers a human skill to a machine, we no longer need humans to do that same job.

I do believe AI will create a huge number of new opportunities, as it will create new degrees of autonomy, but it will create differences between people: those who are resilient to change and have a flexible mindset will adapt, while many others will struggle. It will exacerbate the age gap, the wealth gap, the skills gap and the location gap. It will put a lot of pressure on our education systems and social institutions and will exacerbate discrimination between the haves and the have-nots.

3. Most people have little or no knowledge about how AI, blockchain, the Internet of Things or genetic engineering could affect their lives. How likely is it that these technological trends and transformations will pose existential challenges for humankind over the coming decades?

When you go on holiday, you often feel the pressure to switch off, to turn everything off. There will likely be people who try to live a disconnected life, people who say 'no to AI' because they don't want to be controlled by it or dependent on it. It is not these people I am worried about, as this will be a choice. The people I am worried about are those who fail to keep up, those who fall out of the global economy, those who struggle to adapt and find ways to contribute economically. We know that the world of software development and data science is moving ever faster; people working in these fields have to take a continuous-learning approach to their jobs, and they know education is something they are always doing. I am not saying that everyone will have to become a developer or data scientist, but many people will have to learn new skills.

A big part of this wave of AI-driven innovation is that having access to AI, and being able to work effectively with the new technology, will give users more degrees of freedom. It will be empowering to those who can use it; those who master it will be able to create more value than those who can't. I don't think it will make a new species any time soon, but these forces will likely act to drive differences between us. If we are not careful, it will fragment us more than we already are, between those who are connected and can keep up and those who can't. The biggest existential risk will come from the fact that by driving us apart - by creating income, skills and age gaps - trust between us and our institutions will be eroded, and humanity's ability to tackle existential risks like climate change will be severely diminished.


The power technology companies have over us and the society of which we are a part will only increase as the tools we depend on become more sophisticated in controlling our lives

4. Why is users' trust in new AI technologies getting lower, despite the fact that privacy measures are getting higher?

More and more people are starting to get creeped out by how targeted some of the adverts they receive are. They are somewhat aware that the sophistication of these targeted adverts has improved very quickly, and they now know that the companies targeting us with these adverts know a lot about us - probably what we like and don't like better than our friends and family do. People are starting to become aware that unscrupulous companies may use this information against us and may try to manipulate us.

We all care about privacy, but more and more people are starting to feel that some line has been crossed. Obviously people are becoming more aware because privacy leaks and hacked data are now big news. Companies now know that a leak of their clients' personal information can significantly impact how much their clients trust them, and this is now affecting companies' stock prices. Companies are starting to realise that collecting ever more data on their clients is a risk and a liability, and that they have to find ways to reduce this liability with privacy-enhancing technologies like the differential privacy and synthetic data technologies being developed by Hazy (this is one of our companies).
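To make the idea concrete: differential privacy adds calibrated random noise to query results so that no single person's record can be inferred from the output. The sketch below is a generic illustration of the standard Laplace mechanism, not Hazy's implementation (which the interview does not describe); the function names and data are hypothetical.

```python
import random

def laplace_noise(scale):
    # Laplace(0, scale) noise, drawn as the difference of two
    # independent exponential variables with rate 1/scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon=1.0):
    # A counting query has sensitivity 1: adding or removing one
    # record changes the true count by at most 1, so noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon means more noise and stronger privacy.
ages = [23, 35, 41, 29, 52, 48, 31, 60]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The analyst sees only the noisy count; whether any one individual appears in the data changes the answer's distribution only slightly, which is the formal privacy guarantee.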

The complexity of the global economy and the roles people play in big companies means many people often feel far removed from what they would naturally feel creates value for society. AI may make people feel more and more alienated from what they do for work and what they really care about.

5. What are the most important measures in establishing trustworthiness between man and machine?

Pushing data rights and control back to people is important. Open source and transparency are important. Better interfaces between man and machine are also important. Model explainability is important, but it is not easy. We must educate data scientists and machine learning engineers not just in maths but also in communication skills. Education in AI and ethics is going to be critical. New visualisation and explainability technologies will need to be invented. It's not going to be easy.
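As one concrete example of the model explainability mentioned above, permutation importance measures how much a model relies on a feature by shuffling that feature's values and watching how far the score drops. This is a minimal, library-free sketch of that general technique, not a method from the interview; the helper names are illustrative.

```python
import random

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, feature_idx):
    # Score the model on the untouched data.
    base = accuracy(y, [model(row) for row in X])
    # Shuffle one feature column to break its link with the target.
    shuffled = [row[:] for row in X]
    column = [row[feature_idx] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature_idx] = value
    permuted = accuracy(y, [model(row) for row in shuffled])
    # A large drop means the model leaned heavily on this feature.
    return base - permuted
```

Explanations like this matter because they can be communicated to non-specialists: "the model's accuracy falls by X when we hide this feature" is an understandable claim about an otherwise opaque system.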

6. How can we prevent AI from being used for purposes with negative consequences, such as screening or face recognition that breaks privacy laws? For example, we can recall the case of the King's Cross London screening.

Privacy and trust are very important, but they are also often cultural. On the regulation point, one does not always know what the impact of a technology will be, for good or bad; technologies are often used in ways that are impossible to anticipate in advance. If companies and academics have clear, communicable ethics policies that guide their work, this can help, as they are at the front of innovation - regulators, by definition, are not.

When it comes to regulation, governments must employ people who understand the technologies before they regulate. Governments need to create sandpit environments where new technologies can be quickly tested and observed before being rolled out more broadly; otherwise their economies will fall behind. Because it is hard to anticipate outcomes in advance, regulation should come in response to failures. Governments also need to pay enough to hire the best people, so they don't make badly informed decisions that destroy the economy or create too much risk.


'The EU should create an environment where AI can be tested in a controlled environment'

7. The EU is quite advanced in setting up regulatory processes that sometimes slow innovation. What needs to be improved at EU level in terms of regulation?

Global economic returns now come from the innovation economy, and data science and AI technologies are at the forefront of this. The EU will fall behind if it cannot find ways to grow and sustain local ecosystems around education, research, entrepreneurship and venture capital. To both manage the risk and capture the opportunity that the AI sector represents, the EU should invest in the creation of sandpits and regional real-world testing environments where regulations are reduced and AI models and autonomous systems can be experimented with and benchmarked. Making such sandpit and test environments available to academics and startups is essential to understanding the risks and opportunities. These designated towns, cities, medical centres, databases, airspaces, roads and so on need to be managed carefully, but they are absolutely essential if the EU is to stay competitive, to understand regulatory issues and to build internationally competitive AI businesses.

From a funding and innovation perspective, EU funds should not just go to grants; they should support the creation of small, early-stage, sector-focused deep-tech investment funds run by experienced entrepreneurs, not finance people. Europe needs to build and train a skilled entrepreneur-investor class. Innovation is a bottom-up process, and the EU too often tries to use grants to manage it in a top-down way. You cannot know in advance whether the outcome of innovation will be good or bad. All VC firms know that you need to invest in many failures to find the unicorn.

8. What are the risks of AI advances?

As AI gets smarter, many new jobs and businesses will be created, but many businesses will fail to innovate and will close, and many jobs will be replaced. This economic shock - the loss or reduced importance of traditional businesses - will be one of the biggest challenges our societies have to face, and our education and social support systems are not ready for it. Consider the German car industry: what will happen to Germany's economy and social systems if its automotive industry fails to respond in time to the new crop of innovative automotive startups like Tesla? If economies are not designed to support safe innovation and deal with disruption, they will fall behind.

As people give ever more private data to companies and become ever more dependent on support from AI-driven technologies, power over people and groups is likely to concentrate in the businesses with access to the most people and data. We know this power can be used against us, as is seen in targeted advertising and was seen in Cambridge Analytica’s impact on democratic processes in many countries. The power that these companies have over us and the society we are part of will only increase as the tools we depend on get ever more sophisticated at knowing, helping, managing and controlling our lives.