
Generative AI Is Biased. But Researchers Are Trying to Fix It

Machine learning algorithms are increasingly being deployed in high-stakes applications. Nevertheless, fairness in ML remains a problem.


An unpopular opinion: 2022 is the year of generative tech.

In September, Jason Allen from Colorado won a $300 prize with his artwork titled ‘Théâtre D’opéra Spatial’. The piece, which blends classical opera imagery with a space setting, wasn’t painted by Allen himself; it was generated using the AI software Midjourney. It is one more instance showing that new innovations in technology bring with them a new level of human-machine partnership.

Deep learning engines have turned into collaborators, generating new content and ideas much as a human would. CALA, which bills itself as the world’s first fashion and lifestyle operating system, plans to use DALL·E to generate new visual design ideas from natural text descriptions.

Although we call it ‘generative AI’, AI is only half of the equation. The AI models lie at the base layers of the stack, while the top layer holds thousands of applications. And although we collectively didn’t have a name for it until a month ago, generative tech is about what humans can do with AI as their partner.

With advancements come complexities. Machine learning algorithms have achieved dramatic progress and are increasingly being deployed in high-stakes applications. Nevertheless, fairness in ML remains a problem.

Ensuring fairness in high-dimensional data

Since its conception at the Dartmouth conference in 1956, the field of AI has yet to produce a unifying theory that captures the fundamentals of creating intelligent machines.

At present, the generative tech sector is undoubtedly witnessing a boom, validated by high valuations and revenue. For instance, GPT-3 creator OpenAI is reportedly raising capital at a multi-billion-dollar valuation, and Stability AI, the company behind the image-generation system Stable Diffusion, raised $101 million in a funding round this month. The more human-like AI becomes, the more one can learn about how the human brain actually works.

Amid this discovery process, researchers at DeepMind have also identified new ways to design algorithms that monitor safety and ensure fairness.

Deep learning models are increasingly deployed in critical domains like face detection, credit scoring, and crime risk assessment, where their decisions have wide-ranging impacts on society. Unfortunately, the models and datasets employed in these settings are often biased, raising concerns about their usage and prompting regulators to hold organisations accountable for discriminatory effects.

To counter this, researchers have introduced ‘LASSI’, one of the first representation-learning methods to certify individual fairness on high-dimensional data. The paper, ‘Latent Space Smoothing for Individually Fair Representations’, leverages recent advances in generative modeling to capture the set of similar individuals in the generative model’s latent space.

Fair representation learning transforms user data into a representation that is fair regardless of the downstream application. However, learning individually fair representations in the high-dimensional settings of computer vision has remained an open challenge.

Source: Faces on a Path Between Two GAN Generated Faces

The researchers claim that users will now be able to learn individually fair representations that map similar individuals close together, minimising the distance between them. Combined with local robustness verification of the downstream application, this yields an end-to-end fairness certification.
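To make the idea concrete, here is a minimal, hypothetical sketch of that recipe: treat an individual and an attribute-shifted copy of them (produced by moving along a direction in a generative model’s latent space) as ‘similar’, and train an encoder that pulls such pairs together while still supporting a downstream task. The generator, attribute direction, and labels below are stand-ins, not the paper’s actual components.

```python
import torch
import torch.nn as nn

latent_dim, image_dim, repr_dim = 128, 3 * 64 * 64, 64

# Stand-in "generator": a fixed random projection from latent codes to images.
W = torch.randn(latent_dim, image_dim)
G = lambda z: torch.tanh(z @ W)

# Hypothetical direction in latent space that shifts a sensitive attribute.
attr_direction = torch.randn(latent_dim)
attr_direction = attr_direction / attr_direction.norm()

encoder = nn.Sequential(nn.Linear(image_dim, repr_dim), nn.ReLU(), nn.Linear(repr_dim, repr_dim))
classifier = nn.Linear(repr_dim, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
task_loss = nn.CrossEntropyLoss()

for step in range(200):
    z = torch.randn(32, latent_dim)               # latent codes of sampled individuals
    z_similar = z + 0.5 * attr_direction          # "similar" individuals: attribute shifted
    x, x_similar = G(z), G(z_similar)             # generate both image batches
    y = torch.randint(0, 2, (32,))                # placeholder downstream labels

    r, r_similar = encoder(x), encoder(x_similar)
    fairness_loss = (r - r_similar).pow(2).sum(dim=1).mean()  # pull similar pairs together
    loss = task_loss(classifier(r), y) + fairness_loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice is that “who counts as similar” is defined by the generative model’s attribute manipulation rather than by hand-crafted feature distances, which is what makes the approach workable for images.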

How can it be applied in generative models? 

High-quality text, speech, images, and code generated by deep learning models have achieved state-of-the-art performance, attracting attention from both academia and industry. The researchers say LASSI mainly leverages two recent advancements: the emergence of powerful generative models, used to define image similarity for individual fairness, and scalable certification of deep models, used to prove that individual fairness holds.


After evaluating the model, the researchers found that LASSI enforces individual fairness with high accuracy. Moreover, the model handles various sensitive attributes and attribute vectors, and its representations transfer to unseen tasks.

LASSI was trained on two datasets. The first, CelebA, consists of 202,599 cropped and aligned face images of real-world celebrities. The paper reads, “The images were annotated with the presence or absence of 40 face attributes with various correlations between them. As CelebA is highly imbalanced, we also experimented with FairFace. It is balanced on race and contains 97,698 released images (padding 0.25) of individuals from 7 race and 9 age groups.”
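For readers who want to poke at the data themselves, CelebA ships with torchvision; a small sketch for inspecting its 40 attribute annotations and checking how imbalanced a given attribute is might look like this (the automatic download can fail because of hosting quotas, in which case the files need to be placed in the root folder manually).

```python
from torchvision import datasets, transforms

celeba = datasets.CelebA(
    root="data",
    split="train",
    target_type="attr",          # 40 binary face attributes per image
    transform=transforms.ToTensor(),
    download=True,
)

print(len(celeba), "training images")
print(celeba.attr_names[:5])     # e.g. '5_o_Clock_Shadow', 'Arched_Eyebrows', ...

# Rough check of how imbalanced one sensitive attribute is, e.g. 'Pale_Skin'.
idx = celeba.attr_names.index("Pale_Skin")
positives = celeba.attr[:, idx].float().mean()
print(f"Fraction labelled Pale_Skin: {positives:.3f}")
```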

Encoding human biases in generative AI

LASSI defines image similarity with respect to a generative model via attribute manipulation. This allows users to capture complex image transformations, such as changing age or skin color, which are otherwise difficult to characterize.

Furthermore, with the help of randomized smoothing-based techniques, the team was able to scale certified representation learning for individual fairness to real-world, high-dimensional datasets. “Our extensive evaluation yields promising results on several datasets and illustrates the practicality of our approach,” the paper reads.
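Randomized smoothing is the certification tool being referred to here. As a rough illustration, not the paper’s exact procedure (which certifies in latent space and uses statistical confidence bounds), the sketch below estimates a smoothed prediction and a Cohen-style certified radius by classifying many Gaussian-noised copies of an input; the toy linear classifier is a placeholder.

```python
import torch
from torch.distributions import Normal

def smoothed_certificate(model, x, sigma=0.25, n_samples=1000):
    """Monte-Carlo sketch of randomized smoothing: classify many noisy copies
    of x, take the majority class, and turn its frequency into a certified
    L2 radius (R = sigma * Phi^-1(p_top))."""
    noise = sigma * torch.randn(n_samples, *x.shape)
    logits = model(x.unsqueeze(0) + noise)                 # classify noisy copies
    counts = torch.bincount(logits.argmax(dim=1), minlength=logits.shape[1])
    top_class = int(counts.argmax())
    p_top = counts[top_class].item() / n_samples
    if p_top <= 0.5:
        return top_class, 0.0                              # abstain: no certificate
    p_top = min(p_top, 1 - 1e-6)                           # keep the inverse CDF finite
    radius = sigma * Normal(0.0, 1.0).icdf(torch.tensor(p_top)).item()
    return top_class, radius

toy_model = torch.nn.Linear(10, 3)                         # stand-in classifier
x = torch.randn(10)
cls, radius = smoothed_certificate(toy_model, x)
print(f"smoothed class {cls}, certified radius {radius:.3f}")
```

The intuition is that if the majority vote under noise is overwhelming, the decision provably cannot flip within a small ball around the input, which is what makes the fairness certificate possible at scale.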

The team notes that while the method trains individually fair models, it does not guarantee that those models satisfy other fairness notions such as group fairness. And although individual fairness is a well-studied notion, the paper cautions that it alone is insufficient to guarantee fairness in every instance; for example, the similarity definition itself risks encoding implicit human biases.

Filtering training data can itself amplify biases. OpenAI believes that fixing biases in the original dataset is complex and still an open area of research. However, it has been addressing the biases that its data filtering specifically amplifies.
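One common mitigation for filter-induced skew, sketched below with entirely synthetic labels and a made-up filter mask, is to reweight the surviving examples so that the distribution of a sensitive attribute matches the pre-filter distribution; this illustrates the general technique rather than OpenAI’s actual pipeline.

```python
from collections import Counter

# Synthetic example: attribute labels for a tiny dataset and a filter decision per item.
attributes = ["woman", "man", "woman", "man", "man", "woman", "man", "man"]
kept_mask  = [True,    True,  False,   True,  True,  False,   True,  True]

before = Counter(attributes)
after  = Counter(a for a, keep in zip(attributes, kept_mask) if keep)
total_before, total_after = len(attributes), sum(kept_mask)

# weight = (group share before filtering) / (group share after filtering)
weights = {
    a: (before[a] / total_before) / (after[a] / total_after)
    for a in before
}
print(weights)   # groups the filter removed disproportionately get weight > 1
```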

It’s evident that the model can inherit some of the biases present in the millions of images it learns from. For example, here’s what the AI gives you when asked to generate images of an entrepreneur:

Source: DALL-E

Meanwhile, the results for ‘school teacher’ were these:

Source: DALL-E

However, there was an interesting result that did not seem biased. 

Source: DALL-E

OpenAI is aware that DALL-E 2 generates results that exhibit gender and racial bias. The firm states this in its ‘Risks and Limitations’ document, which summarizes the risks and mitigations for the generative system. OpenAI researchers have made several attempts to resolve bias and fairness problems, but rooting them out effectively is difficult, as different fixes lead to different trade-offs.
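A simple way to surface such skews is to audit a batch of generations for one prompt with an attribute classifier and compare label frequencies. The sketch below uses made-up stand-ins for both the generator and the classifier, purely to show the shape of such an audit.

```python
import random
from collections import Counter

# Stand-ins for a real image generator and perceived-attribute classifier.
def generate_images(prompt, n):
    return [f"{prompt}_{i}" for i in range(n)]             # placeholder "images"

def perceived_gender_classifier(image):
    return random.choices(["man", "woman"], weights=[0.9, 0.1])[0]

def audit_prompt(prompt, n=200):
    """Generate n images for a prompt, classify a perceived attribute on each,
    and return label frequencies so any skew shows up as a number."""
    images = generate_images(prompt, n)
    labels = [perceived_gender_classifier(img) for img in images]
    return {label: count / n for label, count in Counter(labels).items()}

print(audit_prompt("a portrait of an entrepreneur"))
# e.g. {'man': 0.91, 'woman': 0.09} would flag a strong skew for this prompt
```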




Bhuvana Kamath

I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.