What should the ethics of artificial intelligence be?

30th January 2020
Alex Lynn

How are the ethics of artificial intelligence shaping our world? Can these ‘thinking machines’ be taught to be more human? Is it safe to use AI in automated diagnosis in medicine? And what are the ethical considerations of these developments? 

The ethics of artificial intelligence features heavily in a series of events covering new technologies at the 2020 Cambridge Science Festival (9th to 22nd March), run by the University of Cambridge.

Hype around artificial intelligence, big data and machine learning has reached fever pitch. Drones, driverless cars, films portraying robots that look and think like humans… Today, intelligent machines are present in almost all walks of life. 

During the first week of the Festival, several events look at how these new technologies are changing us and our world. In ‘AI and society: the thinking machines’ (9th March), Dr Mateja Jamnik, of the Department of Computer Science and Technology, considers our future and asks: What exactly is AI? What are the main drivers of the remarkable recent progress in AI? How is AI going to affect our lives? And could AI machines become smarter than us? She answers these questions from a scientific perspective and talks about building AI systems that capture some of our informal and intuitive human thinking. Dr Jamnik demonstrates a few applications of this work, presents some of the opportunities it opens up, and considers the ethical implications of building intelligent technology. 

The ethics of artificial intelligence has also created a lot of buzz about the future of work. In ‘From policing to fashion: how the use of artificial intelligence is shaping our work’ (10th March), Alentina Vardanyan, Cambridge Judge Business School, and Lauren Waardenburg, KIN Centre for Digital Innovation, Amsterdam, discuss the social and psychological implications of AI, from reshaping the fashion design process to predictive policing. 

Speaking ahead of the event, Lauren Waardenburg said: “Predictive policing is quite a new phenomenon and gives one of the first examples of real-world ‘data translators,’ an up-and-coming type of work that many organisations are interested in. However, there are unintended consequences for work and the use of AI if an organisation doesn’t consider the large influence such data translators can have. 

“Similarly, AI in fashion is also a new phenomenon. The feedback of an AI system changes the way designers and stylists create and how they interpret their creative role in that process. The suggestions from the AI system put constraints on what designers can create. For example, the recommendations may be very specific in suggesting the colour palette, textile, and style of the garment. This level of nuanced guidelines not only limits what they can create, but it also puts pressure on their self-identification as a creative person.” 

The technology we encounter and use daily changes at a pace that is hard for us to truly take stock of, with every new device release, software update and new social media platform creating ripple effects. In ‘How is tech changing how we work, think and feel?’ (14th March), a panel of technologists look at current and near-present mainstream technology to better understand how we think and feel about data and communication. 

The panel features Dr David Stillwell, Lecturer in Big Data Analytics and Quantitative Social Science at Cambridge Judge Business School; Tyler Shores, PhD researcher at the Faculty of Education; Anu Hautalampi, Head of Social Media for the University of Cambridge; and Dex Torricke-Barton, Director of the Brunswick Group and former speechwriter and communications adviser for Mark Zuckerberg, Elon Musk, Eric Schmidt and the United Nations. They discuss some of the data and trends that illustrate the impact technology has on our personal, social and emotional lives, as well as ways forward and what the near future holds. 

Tyler Shores commented: “One thing is clear: the challenges that we face that come as a result of technology do not necessarily have solutions via other forms of technology, and there can be tremendous value for all of us in reframing how we think about how and why we use digital technology in the ways that we do.” 

The second week of the Festival considers the ethics of artificial intelligence. In ‘Can we regulate the internet?’ (16th March), Dr Jennifer Cobbe, The Trust & Technology Initiative, Professor John Naughton, Centre for Research in the Arts, Social Sciences and Humanities, and Dr David Erdos, Faculty of Law, ask: How can we combat disinformation online? Should internet platforms be responsible for what happens on their services? Are platforms beyond the reach of the law? Is it too late to regulate the internet? They review current research on internet regulation, as well as ongoing government proposals and EU policy discussions for regulating internet platforms. One argument put forward is that regulating internet platforms is both possible and necessary. 

When you think of artificial intelligence, do you get excited about its potential and all the new possibilities? Or do you have concerns about how AI will change the world as we know it? In ‘Artificial intelligence, the human brain and neuroethics’ (18th March), Tom Feilden, of BBC Radio 4, and Professor Barbara Sahakian, of the Department of Psychiatry, discuss these ethical concerns. 

In ‘Imaging and vision in the age of artificial intelligence’ (19th March), Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, also examines the ethics of artificial intelligence. He discusses new developments in AI and demonstrates how systems designed to replace human vision and decision processes can behave in very non-human ways. 

Dr Hansen said: “AI and humans behave very differently given visual inputs. A human doctor presented with two medical images that, to the human eye, are identical will provide the same diagnosis for both cases. The AI, however, may give 99.9% confidence that the patient is ill based on one image, yet give 99.9% confidence that the patient is well based on the other, seemingly identical, image. 

“Such examples demonstrate that the ‘reasoning’ the AI is doing is completely different to the human. The paradox is that when tested on big data sets, the AI is as good as a human doctor when it comes to predicting the correct diagnosis.

“Given this non-human behaviour, which cannot be explained, is it safe to use AI for automated diagnosis in medicine, and should it be implemented in the healthcare sector? If so, should patients be informed about the ‘non-human behaviour’ and be able to choose between AI and doctors?” 
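
Dr Hansen’s example echoes what machine-learning researchers call adversarial examples. As a minimal sketch (not part of the Festival programme; the toy PyTorch model, input size and epsilon value below are illustrative assumptions), the following shows how a per-value perturbation far too small for a human to notice can shift a classifier’s predicted probabilities:

```python
# Minimal adversarial-example sketch (illustrative; not from the article).
# A fast-gradient-sign-method (FGSM) perturbation, bounded to +/- epsilon
# per input value, can change a classifier's output even though the two
# inputs look identical to a human.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a diagnostic image classifier (assumption).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

image = torch.rand(1, 64)   # stand-in for a flattened medical image
label = torch.tensor([0])   # class 0 = "well" (assumption)

clean_probs = torch.softmax(model(image), dim=1)

# FGSM: nudge every input value by +/- epsilon in the direction that
# increases the classification loss.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
epsilon = 0.05              # imperceptibly small per-value change
adversarial = (image + epsilon * image.grad.sign()).detach()

adv_probs = torch.softmax(model(adversarial), dim=1)
print("clean input:    ", clean_probs.detach().numpy())
print("perturbed input:", adv_probs.numpy())
# The two inputs differ by at most 0.05 per value, yet the predicted
# probabilities can shift sharply; on trained high-dimensional image
# models the effect is far more dramatic, as in Dr Hansen's example.
```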

A related event explores the possibilities of creating AI that acts in more human ways. In ‘Developing artificial minds: joint attention and robotics’ (21st March), Dr Mike Wilby, lecturer in Philosophy at Anglia Ruskin University, describes how we might develop our distinctive suite of social skills in artificial systems to create ‘benign AI’. 

“One of the biggest challenges we face is to ensure that AI is integrated into our lives, such that, in addition to being intelligent and partially autonomous, AI is also transparent, trustworthy, responsive and beneficial,” Wilby said. 

He believes that the best way to achieve this would be to integrate it into human worlds in a way that mirrors the structure of human development. “Humans possess a distinctive suite of social skills that partly explains the uniquely complex and cumulative nature of the societies and cultures we live within. These skills include the capacity for collaborative plans, joint attention, joint action, as well as the learning of norms of behaviour.”

Based on recent ideas and developments in philosophy, AI and developmental psychology, Dr Wilby examines how these skills develop in human infants and children, and suggests that this gives us an insight into how we might develop ‘benign AI’ that is intelligent, collaborative, integrated and benevolent. 

Further related ethics of artificial intelligence events include: 

  • Harnessing big clinical data in medicine: can AI improve breast cancer screening? (9th March). Some 2.2 million women are screened for breast cancer each year in the UK. Can artificial intelligence identify the women most at risk of cancer, improve the performance of the radiologists reading the mammograms, or even replace the readers?
  • AI narratives (10th March). Researchers from the Leverhulme Centre for the Future of Intelligence discuss the importance of stories, from R.U.R. (1920) to The Terminator (1984) to Big Hero 6 (2014), in understanding AI.
  • AI myth-busting: separating science fact from fiction (11th March). Dr Jennifer Cobbe, Department of Computer Science and Technology, and Dr Christopher Markou, Faculty of Law, debunk some of the biggest myths around AI today.
  • Smart building, smart construction (14th March). Researchers from the Cambridge Centre for Smart Infrastructure and Construction and Laing O’Rourke Centre host hands-on demonstrations with Microsoft HoloLens, and acoustic and fibre-optic sensors to showcase how they are using technology to make infrastructure smart.
  • Innovative drones for environmental and agricultural monitoring (19th March). Professor Roland Siegwart, ETH Zurich, discusses humankind’s future with robots as they replace most of the unsophisticated but physically demanding jobs in agriculture and supply chain processes.
  • Why robots are not going to take over all the factories (...yet) (21st March). Professor Tim Minshall, Institute for Manufacturing, explores the role of robots and why they are good at some tasks but not yet a match for humans at others.
  • Secrets and lights (21st March). Researchers explain the threats and opportunities that quantum technology poses for secure communication and discuss the materials that will enable the technologies of the future.
  • Centre for Digital Built Britain: constructing our future (21st March). A showcase of how construction is going digital: by harnessing robotics, artificial intelligence, machine learning and other new technologies, the future of how we build schools, homes and everything around us is being transformed.
  • Technologies to challenge your brain (22nd March). The NIHR Brain Injury MedTech Co-operative and a team of innovators have developed gaming apps and virtual and augmented reality technologies for assessing how healthy our brains are and stimulating memory, concentration and cognitive responses.
