
3 Ways to Improve the Ethics of Artificial Intelligence


The workplace is undergoing a technological revolution driven by developments in artificial intelligence (AI). Chatbots give customers financial advice, mobile apps collect personal data, and AI informs medical judgements, reducing the need for every interaction to involve another human. With more information about customers collected and analysed every day, AI can process this data faster and more accurately than people can, making commerce and business more efficient and objective.

Nonetheless, new technologies bring new problems, and the speed with which they are being adopted often means these complications go unnoticed until serious ethical and legal concerns emerge. So, in this blog, we assess three ethical issues in artificial intelligence and how they can be addressed.

1. Overcoming bias in artificial intelligence

Despite the benefits that artificial intelligence can bring, such as greater accuracy and faster completion of everyday tasks, AI systems are built by humans, which puts them at risk of replicating the world’s pre-existing inequalities. One of the most prominent issues this causes is bias against demographic characteristics such as age, gender, class, or race.

One example of bias in artificial intelligence was Amazon’s automated recruitment tool. The tool learned from patterns in the résumés and hiring decisions of job candidates over a ten-year period that favoured men over women, and it replicated this gender discrimination by automatically rejecting female applicants.

Meanwhile, testing of Microsoft’s face-detection tool showed a 0% error rate for light-skinned males but a 20.8% error rate for dark-skinned females, a gap attributed to a lack of testing with people from underrepresented groups. Without proper awareness and regulation of artificial intelligence, such flaws can damage business practices and entrench historic biases that we’ve fought to overcome.
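
A practical first step towards catching disparities like the one above is to audit a model’s error rates across demographic groups. The sketch below is a minimal illustration of that idea; the data, group labels, and the `error_rates_by_group` helper are all hypothetical, not part of any real evaluation suite.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the classification error rate for each demographic group.

    `records` is an iterable of (group, predicted, actual) tuples --
    hypothetical evaluation data used purely for illustration.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical results from evaluating a face-classification model.
results = [
    ("light-skinned male", "male", "male"),
    ("light-skinned male", "male", "male"),
    ("dark-skinned female", "male", "female"),   # a misclassification
    ("dark-skinned female", "female", "female"),
]
print(error_rates_by_group(results))
# {'light-skinned male': 0.0, 'dark-skinned female': 0.5}
```

A large gap between groups is a signal that the training or test data underrepresents someone, and that the model needs more diverse data before it is deployed.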

To overcome this, businesses are increasingly focused on building diverse workforces to create and manage AI algorithms. For example, Amazon are striving to learn from their mistakes by launching a ‘Future Engineers’ programme, which aims to bring 1.6 million young people from underrepresented groups into computer science. As systems come to be trained and programmed by a more diverse group of employees, some of the current discriminatory biases in artificial intelligence should be reduced and, over time, eliminated.

2. Taking responsibility for AI

Another ethical consideration of artificial intelligence is defining where responsibility for an AI system’s actions sits. Clear accountability not only helps in overcoming bias; it is especially pressing for the health and safety of automated machines and vehicles.

In the UK, Transport for London are exploring the roll-out of driverless Tube trains by 2030, with centralised monitoring and control, to meet rising capacity demand with faster and more reliable journeys. The trains will continue to carry an operator on board, but as the programme develops they will be designed and built for fully automatic operation, which raises the question of who is responsible in the event of an accident.

That question became urgent in early 2021, when a self-driving car with no operator crashed in Houston, Texas, killing two people. In addition, the government agency Innovate UK predicts that 90% of motorway HGVs will be autonomous by 2050, and with investment in autonomous vehicles and technologies ever-growing, the question of who bears responsibility for artificial intelligence grows more pressing.

To resolve this confusion, laws are continuously being designed and updated alongside changes in artificial intelligence, such as the Automated and Electric Vehicles Act 2018, which sets out who assumes responsibility when an automated vehicle is in control of itself. Alongside this, AI manufacturers are also required to ensure their tools meet legislative and ethical standards, including the Ethics, Transparency and Accountability Framework for Automated Decision-Making, built to establish trust in the regulation and development of AI.

3. Improving reliability of AI applications

As artificial intelligence is implemented more widely, the ethical need to improve the reliability of AI applications also grows, to ensure they are fit for purpose and operate as expected.

For businesses seeking a competitive advantage through early adoption of artificial intelligence, improving reliability by reducing the risk of bad outcomes is critical both to maintaining a positive brand image and to ensuring the technology functions as required. One infamous example is Microsoft’s automated Twitter chatbot Tay, which was programmed to converse with other social media users using AI. The public soon exploited its weak content filters, and the bot began posting profane and offensive language, leading to its withdrawal and a corporate apology from the company.

To improve the reliability of artificial intelligence, there is an increased emphasis on robust testing programmes such as quality assurance processes, as sketched below. Such testing demonstrates that AI technologies are ready and effective before they are rolled out into everyday business practices.
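
As a minimal sketch of what such a pre-release check might look like, the example below tests a hypothetical chatbot content filter against known bad inputs. The `is_safe_reply` function, the blocklist, and the test strings are all assumptions for illustration, not a real moderation API.

```python
# A minimal quality assurance sketch for a chatbot content filter.
# Everything here is hypothetical and exists only for illustration.

BLOCKLIST = {"offensive phrase", "profanity"}  # placeholder terms

def is_safe_reply(text: str) -> bool:
    """Hypothetical filter: reject replies containing blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def test_filter_blocks_known_bad_inputs():
    # Adversarial replies gathered during testing (illustrative only).
    bad_replies = ["This contains PROFANITY", "an offensive phrase here"]
    for reply in bad_replies:
        assert not is_safe_reply(reply), f"Filter missed: {reply!r}"

def test_filter_allows_benign_inputs():
    assert is_safe_reply("Hello! How can I help you today?")

if __name__ == "__main__":
    test_filter_blocks_known_bad_inputs()
    test_filter_allows_benign_inputs()
    print("All reliability checks passed.")
```

In practice, a quality assurance suite like this would run automatically on every change to the model or its filters, so failures like Tay’s are caught before release rather than in public.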

Study AI online with the University of Leeds

Integrating ethics into work with artificial intelligence is crucial for organisations to ensure that their processes are well maintained, fair, and safe. If you want to learn the skills that employers are actively seeking in the AI development sector, from Python programming to the fundamental techniques of text analytics, consider studying our specialised online Masters in Artificial Intelligence. Our module on the ethics of artificial intelligence will give you the analytical and theoretical tools to engage with the ethical questions that AI raises, and to analyse, make, and defend arguments at a leadership level.


Want to learn more about our online Artificial Intelligence course?

Check out the course content and how to apply.
