UX and AI

#HumanRights #AIandUX 

  • 10-15 minute read

Human Rights and AI

As a UX specialist, I am constantly mindful of the human aspect of any innovation. In today’s world, where artificial intelligence (AI) has become integrated into every facet of our lives, from the moment we wake up until we go to bed at night, it is becoming increasingly feasible to achieve the level of sophistication depicted in the movie “Her.” AI already helps us make decisions, saves time, and simplifies our lives. However, with great power comes great responsibility. It is crucial to consider the ethical implications of AI and ensure that these systems are equitable and impartial, which is where responsible AI comes into play. In this article, I aim to explore some fundamental aspects of human rights and AI:

Human Privacy:

As is widely recognized, AI systems that gather, analyze, and retain personal data can severely infringe on privacy. Unconsented monitoring and tracking by governments and corporations through AI technologies can compromise people’s privacy rights. Although a few regulations already address the protection of personal information in the context of machine learning and AI, more legislation is expected to emerge. Currently, the GDPR is one of the laws instructing companies on how to communicate with individuals about decisions made by machines, yet it applies only to the processing of personal data of people in the EU. To keep individuals’ data secure, three crucial concepts need to be taken into account: pseudonymization, anonymization, and differential privacy.

Pseudonymization

Picture credit: Chino.io

Pseudonymization is a data protection technique in which personal information is replaced with a pseudonym, such as a random number or token, to prevent the direct identification of individuals. However, the method is not foolproof: the pseudonym can still be linked back to an individual’s identity through the matching list.

In addition, the security of the list must be maintained to prevent unauthorized access, which has been a challenge in light of the increasing number of large-scale data breaches in recent years.
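As a rough illustration of the idea, here is a minimal sketch in Python; the record fields, token format, and storage approach are hypothetical, and in practice the matching list would live in a separate, access-controlled system:

```python
import secrets

# Hypothetical records containing direct identifiers.
records = [
    {"name": "Alice Smith", "email": "alice@example.com", "diagnosis": "A12"},
    {"name": "Bob Jones", "email": "bob@example.com", "diagnosis": "B07"},
]

# Pseudonym -> original identity. This matching list must be stored and
# secured separately from the pseudonymized data.
matching_list = {}

def pseudonymize(record):
    pseudonym = secrets.token_hex(8)  # random token replaces the identity
    matching_list[pseudonym] = {"name": record["name"], "email": record["email"]}
    return {"id": pseudonym, "diagnosis": record["diagnosis"]}

pseudonymized = [pseudonymize(r) for r in records]
print(pseudonymized)
```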

Anonymized data

Anonymized data refers to data that has been altered so that individuals can no longer be identified. Even after modification, this data can still offer valuable insights, such as the geographic distribution of customers. Synthetic data, by contrast, is computer-generated data that can substitute for real data. However, it is sometimes still possible to re-identify individuals from anonymized or synthetic data. To address this, we can apply differential privacy, which conceals personal information by adding statistical noise.
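A minimal sketch of the kind of generalization anonymization relies on, using hypothetical customer fields: exact values are coarsened (ages bucketed, postal codes truncated) so records are harder to tie back to one person, while aggregate questions such as geographic distribution can still be answered:

```python
def anonymize(record):
    """Coarsen quasi-identifiers so individuals are harder to single out."""
    decade = (record["age"] // 10) * 10
    return {
        "age_range": f"{decade}-{decade + 9}",          # e.g. 30-39
        "region": record["postal_code"][:2] + "***",    # keep only the broad area
        "purchase_total": record["purchase_total"],     # the value we want to analyze
    }

customers = [
    {"age": 34, "postal_code": "90210", "purchase_total": 120.50},
    {"age": 37, "postal_code": "90265", "purchase_total": 89.00},
]

print([anonymize(c) for c in customers])
```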

Differential privacy

This method enables researchers and analysts to extract valuable insights while preserving personal information. Differential privacy quantifies privacy loss instead of merely indicating whether someone’s data has been exposed; each time data is processed, the risk of exposure increases. For an algorithm to be considered differentially private, its output should not reveal whether any individual’s data was included in the original dataset, and that output must remain essentially unchanged if someone joins or leaves the dataset. In practice, this means adding enough statistical noise to protect privacy without sacrificing prediction accuracy.

When we add statistical noise to personal data, it becomes more difficult to identify specific individuals. For instance, consider an individual who resides at 90965 Main Avenue. By adding noise, it may seem as if they reside at a different address entirely.
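To make the noise idea concrete, here is a minimal sketch of the Laplace mechanism applied to a count query, the standard textbook form of differential privacy; the dataset, predicate, and epsilon value below are illustrative rather than production settings:

```python
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    """Noisy count under the Laplace mechanism; smaller epsilon = stronger privacy."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 67, 31]        # toy dataset
print(dp_count(ages, lambda a: a > 40))    # roughly 3, plus or minus noise
```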

Three fundamental guidelines must be followed to protect user privacy:

  • Collect data only for legitimate purposes, rather than collecting everything and hoping to uncover insights later.
  • Process personal data with security and confidentiality measures in place.
  • Be transparent about your intentions and methods of collecting personal data, and inform individuals accordingly.

The potential misuse and unintended outcomes of AI systems

Picture credit: techiecivil.com

The harmful effects of AI often stem from misuse and unintended use. For instance, deepfake and audio-reproduction technology has been used to impersonate individuals and create deceptive videos, including fake news clips featuring famous people, even though its developers never intended that use. AI has also frequently been misused to control and manipulate people through propaganda and by exploiting personal data for financial gain.

Identifying the misuse of AI can be challenging because the purpose of specific models is often unclear, whether they are open source or proprietary. There are cases where organizations create machine learning models for their own use but later decide to sell them to other companies to generate additional revenue. Unfortunately, the models can then be used without the necessary expertise, resulting in misuse or unintended consequences.

In real-world scenarios, companies often purchase an AI API, integrate it into their own products, and later discover that it is making harmful decisions. These companies then turn to the API supplier with concerns, fearing legal or financial consequences. Typically, the terms of AI sales absolve the seller of any responsibility for end-user outcomes, leaving the buyer to address these issues with limited knowledge. In the end, customers of these ML APIs are using tools they did not build, without the knowledge the original developers accumulated along the way.

One way of remediating this issue is to establish documentation workflows and maintain repositories of known issues and critical decisions. This information is typically stored on platforms like Jira and is linked to the key decisions backlog, which documents the findings and trade-offs made by the product team, core data ethics team, and executives that guide the development of AI systems. Internal stakeholders should raise their concerns and add them to the issues backlog for further investigation as well.

Given this, it is crucial to consider how AI’s misuse and unintended use can impact its outcomes. In light of this, what modifications should companies introduce in the development process of AI systems?

To ensure the responsible development of AI models, companies should take at least two steps. 

First, they should clearly document the reasons for developing the model and conduct fair evaluations to mitigate potential harm. 

Second, it is crucial to work with security analysts to identify vulnerabilities. Development teams should create a datasheet for every dataset used in the AI system’s training and a model card for every ML model developed. These documentation tools force teams to think carefully about their intentions and to spell out potential misuses, giving companies that later buy access to the API something concrete to rely on.

These steps are necessary for any large-scale model to monitor its function and identify issues that hackers may exploit.
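As an illustration of what such documentation might capture, here is a hypothetical, minimal model card expressed as structured data; the fields and values are invented for the example and loosely follow the published “model cards” idea rather than any particular company’s template:

```python
# A minimal, hypothetical model card kept as structured data so it can be
# versioned alongside the model and shown to anyone who buys access to the API.
model_card = {
    "model_name": "resume-ranker-v2",
    "intended_use": "Rank applications for software engineering roles only.",
    "out_of_scope_uses": [
        "Hiring decisions without human review",
        "Roles or regions not represented in the training data",
    ],
    "training_data": "datasheet: applications_2018_2022.md",
    "evaluation": {
        "metrics": ["precision@10", "disparate impact ratio"],
        "subgroups_evaluated": ["gender", "age band", "ethnicity"],
    },
    "known_limitations": [
        "Historical data reflects past hiring bias.",
        "Performance degrades for career-changers with non-linear CVs.",
    ],
    "contact": "core-data-ethics-team@example.com",
}
```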

Unethical business use cases 

As AI’s popularity has grown in Silicon Valley, more companies have become convinced that using AI is essential to generate revenue or prevent profit loss. However, some of the reasons AI is employed can themselves lead to bias in AI systems. These are not new issues; big data has been used to oppress people for years. Various business cases are prevalent in the industry yet contradict the aim of creating a just AI system. For instance, human biases against race and gender significantly determine who gets hired, and AI-powered hiring tools have been shown to produce high levels of disparate impact, rendering them unethical. The most prominent companies selling AI for HR train their models on historical data, yet spend too little time studying, re-weighting, or auditing that data, which leads to biased models. As a result, automated HR tools have shown biases just as dramatic as those of unautomated processes. Given the prevalence of both human and machine bias, using historical data to train AI to rank, select, or match potential employees with jobs is currently unethical.
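One way to quantify disparate impact is the selection-rate ratio commonly compared against the “four-fifths rule”; the sketch below makes simplifying assumptions (two groups, binary decisions, toy numbers) and is far from a full fairness audit:

```python
def disparate_impact_ratio(selected, group):
    """Selection-rate ratio between a protected group and the reference group.

    `selected` holds 0/1 hiring decisions; `group` marks membership in the
    protected group (1) or the reference group (0). A ratio below ~0.8 is
    commonly treated as a red flag (the "four-fifths rule").
    """
    def rate(flag):
        members = [s for s, g in zip(selected, group) if g == flag]
        return sum(members) / max(1, len(members))
    return rate(1) / rate(0)

# Toy example: 2/5 protected-group candidates selected vs. 4/5 reference-group.
selected = [1, 0, 0, 1, 0,  1, 1, 1, 1, 0]
group    = [1, 1, 1, 1, 1,  0, 0, 0, 0, 0]
print(round(disparate_impact_ratio(selected, group), 2))  # 0.5 -> likely disparate impact
```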

Mortgage lending is another area that requires addressing algorithmic bias. However, these biases are not necessarily a reflection of the technical team’s proficiency or intentions but rather of industries that are plagued by inequality that cannot be rectified through pattern recognition system training. Even if the individuals responsible for developing the algorithms aim to establish a just system, AI systems still disproportionately affect minority borrowers.

The use of AI in surveillance is another area that raises ethical concerns. The practice of surveillance has a long-standing history of being biased, unfairly targeting specific communities, and engaging in dubious consent practices. With AI, the process of surveilling populations becomes faster and easier without proper regulation and can be done in ways that are easy to conceal from the public. Often, surveillance companies market their flawed practices as a way to improve neighborhood safety. However, this violates the privacy and security of many subjected to surveillance. Whether it’s for monitoring employee internet activity or conducting student online exam surveillance, the objectives behind AI-powered surveillance tools almost always involve the manipulation or marginalization of particular groups of people.

To build a model that works, we need to ensure that the underlying concept is valid. If we build on an invalid concept, such as trying to tell whether someone is distracted by looking only at their eyes, the model will fail. Some issues of invalidity aren’t especially harmful, but many are unethical, and whether something is unethical depends on the situation. For example, trying to infer someone’s gender from a photo is both invalid and unethical: it rests on harmful assumptions about gender.

A business use case may be unethical for various reasons, including the deployment context, its implicit or explicit objectives, or who the use case enables. For instance, a crisis text line used messages from people in need to train a machine learning model for businesses to use. This caused significant backlash, as users weren’t aware their information would be used for commercial purposes. We should also question whether certain uses of machine learning are ethical at all in more personal domains such as mental health services.

As illustrated, there are unfortunately numerous instances of unethical uses of AI in the corporate world.

Autonomous systems

People take part in a demonstration as part of the “Stop Killer Robots” campaign in front of the Brandenburg Gate in Berlin on March 21, 2019. WOLFGANG KUMM/DPA/AFP VIA GETTY IMAGES

People often react to autonomous systems in unexpected ways, with concerns ranging from fear of job loss to skepticism about surveillance. With many autonomous devices carrying cameras in public spaces, there is a significant risk of privacy violations: people may not know how their data is being stored, who has access to it, or how to revoke that access. These issues raise the question of how we can develop autonomous systems that avoid reinforcing these fears and instead prioritize fairness, privacy, and accuracy. It is essential to carefully weigh the potential value and potential harms of any new technology, especially in the early design stages, and to seek diverse perspectives, including those of overpoliced populations and people with no stake in the success of autonomous systems.

The lack of policies and frameworks guiding organizations is one reason for the absence of standards in this field. Various harmful technologies have emerged in response to global events, such as escalations of violence. While debates continue about whether such systems should be used at all, it is better to explore and generate alternatives to robotic police dogs and surveillance drones. Deploying such systems can erode public trust, even if you believe you are acting in good faith.

Furthermore, it is crucial to be transparent about why these systems are deployed in public spaces and to consider low-tech solutions that have proven results rather than relying solely on AI and machine learning models. By prioritizing the development of low-tech solutions and performing studies to understand their impact, we can scale successful interventions to address societal issues without resorting to harmful technologies.

Now that you have a sense of the theories and principles, think about the actions within your control that can make AI systems more responsible, even if you don’t have the power to fix every bias issue. When you notice these issues, speak up and suggest alternative solutions to prevent user harm.

If you’re interested in learning more, the following books can help you grasp these concepts more deeply and turn ethical considerations into actionable mitigations:

  • Data Conscience by Dr. Brandeis Marshall
  • Automating Inequality by Virginia Eubanks
  • Responsible Data Science by Grant Fleming and Peter C. Bruce
  • Design Justice by Sasha Costanza-Chock
  • Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez
