Ethical Considerations in Developing AI Technologies

Artificial intelligence (AI) is leaving its mark on virtually every part of society. From the predictive algorithms used for contact tracing during the COVID-19 pandemic to behavioral analyses that study the purchasing habits of expectant mothers, AI is already shaping the landscape for industries as diverse as healthcare, retail, entertainment and housing.1, 2

Despite the innovation it drives, AI technology is not without its share of ethical complications. Questions about how much data organizations should be allowed to collect on their consumers are one part of the conversation, while uncertainty as to how algorithms reach their conclusions raises questions of transparency and trust. Algorithmic bias has exacerbated existing social inequities for underrepresented groups, and the massive amount of electricity required to power the systems behind AI has raised sustainability concerns as well.

These are just a few of the issues industries face, and as the pace of AI innovation continues to accelerate, expect new issues that have yet to be imagined to emerge. A thorough treatment of AI ethics exceeds the scope of this blog post, but here’s a look at the primary AI concerns the world faces today.

What Are the Main Ethical Considerations for AI?

There are five main categories of ethical considerations that most AI bodies take into account: fairness and bias, trust and transparency, accountability, social benefit, and privacy and security.3

Fairness and Bias

This ethical issue concerns how inappropriate data sets can cause AI to harm individuals or treat them unequally, and it overlaps with existing social injustices. When algorithms are trained on incomplete data sets, such as those that inadequately represent certain demographics, the AI tools they power end up disproportionately neglecting or targeting certain segments of the population.

For example, hospitals and insurance companies used an AI algorithm to identify candidates for a “high-risk care management program” that provides trained nurses and primary-care monitoring to chronically ill patients.4 The algorithm based its conclusions partly on each person’s healthcare spending, which ultimately resulted in the exclusion of Black individuals. Chronically ill Black Americans spent approximately the same amount on healthcare as healthy White Americans, driven in part by lower incomes and a resulting inability to pay for the services they needed. As a result, the algorithm not only neglected the healthcare needs of the Black population but also proved counterproductive to its original purpose.
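To make this concrete, one common first check for this kind of bias is a selection-rate audit: compare how often a model flags members of each demographic group. The sketch below is illustrative only; the column names (group, flagged) and the toy data are assumptions for demonstration, not details from the study above.

```python
# A minimal fairness-audit sketch, assuming a pandas DataFrame with a
# hypothetical "group" column (demographic label) and "flagged" column
# (1 if the model recommended the person for the program).
import pandas as pd

def selection_rates(df: pd.DataFrame) -> pd.Series:
    """Share of each demographic group that the model flags."""
    return df.groupby("group")["flagged"].mean()

def disparate_impact_ratio(df: pd.DataFrame) -> float:
    """Lowest group selection rate divided by the highest.

    Values well below 1.0 suggest one group is being under-selected
    and warrant a closer look at the training data and features.
    """
    rates = selection_rates(df)
    return rates.min() / rates.max()

# Toy data: the model flags group A 40% of the time but group B only 10%.
audit_df = pd.DataFrame({
    "group":   ["A"] * 100 + ["B"] * 100,
    "flagged": [1] * 40 + [0] * 60 + [1] * 10 + [0] * 90,
})
print(selection_rates(audit_df))         # A: 0.4, B: 0.1
print(disparate_impact_ratio(audit_df))  # 0.25 -- a red flag worth investigating
```

A low ratio doesn’t prove discrimination on its own, but it tells auditors exactly where to dig into the training data, as happened in the care-management case above.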

Trust and Transparency

AI technologies are powered by highly advanced algorithms whose code can be difficult to decipher. The result is a “black box” system in which even the most fluent AI experts don’t necessarily understand how the algorithms reach their conclusions. Trust and transparency seek to address that problem by emphasizing explainability (the ease with which an algorithm’s inner workings can be understood), fostering greater trust in the process. Other benefits of explainable AI include reduced algorithmic bias, clearer insights and more precise data modeling, to name a few.5

For example, hospitals have used healthcare algorithms to help triage patients based on data from chest X-rays.6 One algorithm made several diagnostic errors, including confusing images of children with images of healthy chests. Had the algorithm offered better explainability, researchers would have found it easier to understand those errors and could have resolved the issue sooner.
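One widely used family of explainability techniques works by perturbation: shuffle one input feature at a time and measure how much the model’s accuracy degrades. Below is a minimal sketch using scikit-learn’s permutation importance on synthetic data; the model and features are stand-ins, not the actual X-ray triage system described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real triage model would train on imaging features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops reveal the features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Checks like this can surface shortcuts a model has learned, such as keying on patient age cues rather than signs of disease.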

Accountability

AI-driven products often require a great deal of collaboration in their design. This raises questions of liability when incidents such as malfunctions take place, and accountability attempts to address these concerns.

For example, while autonomous vehicles may one day cause fewer accidents than their human-operated counterparts, it’s unlikely that accidents will never occur. When one does, questions will arise as to which party is responsible. Will responsibility fall on the automotive manufacturer, the software company that designed the AI system, the hardware company that built certain components or some other party?7

Social Benefit

Businesses and countries alike are increasingly stipulating that AI technologies must serve the interests of the common good; this dimension of AI ethics is known as social benefit. As customers continue to base their purchasing decisions on which companies share their values, social benefit will play an increasingly important role in profitability.

The Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) both prohibit organizations from discriminating against individuals on the basis of race, gender or disability, among other factors, even when accidental algorithmic bias is the cause.8 Fines and other financial penalties may follow if such discrimination is detected, and given that 82% of consumers base their shopping habits on their values, businesses that fail to take this into consideration may miss out on profits.9

Privacy and Security

Arguably the most common ethical concern regarding AI technology, privacy and security refer to how the data used to drive an AI algorithm is stored, processed, deleted and safeguarded from threat actors. This issue goes beyond the handling of sensitive data such as account or credit card information, Social Security or identification numbers, and personal health information (PHI). It also encompasses the appropriate use of computer vision software for facial recognition and whether organizations may share the data they gather with third parties.

For example, IBM agreed to sunset its facial recognition algorithms, citing concerns over improper use by law enforcement.7 Governments have also adopted legislation such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR) in an effort to regulate data collection, storage, sales and use.10, 11
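As a small illustration of the data-handling side, one common safeguard is pseudonymization: replacing direct identifiers with keyed one-way hashes before records are stored or shared. The sketch below uses only Python’s standard library; the field names and key handling are simplified assumptions, and real systems governed by laws like the GDPR would also layer on encryption, access controls and retention policies.

```python
import hashlib
import hmac
import os

# Hypothetical key management: in practice the key would come from a
# secrets manager, never a hard-coded default.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    """Keyed one-way hash (HMAC-SHA256): records stay linkable across
    tables without exposing the underlying identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Toy record with a fabricated identifier, for illustration only.
record = {"patient_id": "123-45-6789", "diagnosis": "asthma"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)  # diagnosis retained, identifier replaced with an opaque token
```

Because the hash is keyed, an attacker who obtains the stored records cannot simply recompute hashes of known identifiers to re-identify individuals.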

Step Up as an Ethical Leader in Machine Learning With MSOE Online

As AI innovation continues to accelerate, expect the breadth and scope of the corresponding ethical concerns to grow with it. A wide range of organizations are launching efforts to address the current ethical landscape and to anticipate future issues, but one of the most important ways to tackle the ethical challenges ahead will be to train a generation of machine learning experts whose ethical compass points to true north.

Conversations around AI and machine learning ethics are woven into the curriculum of both the online Master of Science in Machine Learning and the online Graduate Certificate in Applied Machine Learning programs. In the master’s program, you will also take a dedicated course—AI Ethics and Governance—on this subject.

While there are many post-baccalaureate programs that can introduce you to machine learning concepts and tools, MSOE’s program takes these lessons a step further by focusing on the ethics of this technology, the application of machine learning to industrial problems and the development and deployment of machine learning-based products.

Take the next step in your career today. Schedule a call with an admissions outreach advisor to learn more. Or, if you are ready, get started on your application.

Admissions Dates and Deadlines

Fall 2024
Priority Deadline: August 1
Application Deadline: August 12
Start Date: September 3
