Blog Post #3: Artificial intelligence and reducing bias
The Wall Street Journal article "Rise of AI Puts Spotlight on Bias in Algorithms," written by Isabelle Bousquette, explores how the recent hype around generative AI such as ChatGPT has emphasized the ongoing challenge that businesses around the world face in keeping bias out of the algorithms their AI uses.
Bias has been a persistent issue, in part because AI models are often trained on skewed, non-representative data sets, and in part because the algorithms are built by people whose own biases are reflected in their work.
Many companies are investing large sums of money to reduce bias in their algorithms, often backing systems, processes, and tools that do so proactively. The cost of retroactively limiting bias in established systems is high, and the situation is less than ideal: it is significantly more difficult to remove bias from an AI algorithm later on than it is to address it from the start. Currently, most companies have not built controls for AI bias into their software development cycles the way they have for cybersecurity.
As noted earlier, to address bias effectively it is important to embed controls and governance at the start of the algorithm build rather than deploying the algorithm and then assessing what damage has been done. It is significantly more challenging to understand why an algorithm generated a particular response after the fact, and even more so with complex deep learning models. "Companies must address bias from the beginning, said Rajat Taneja, President of Technology at
Visa Inc. Before any model is deployed at Visa, it is assessed by a model risk
management organization and a team that tests for potential unintended impacts,
ensuring the model adheres to Visa's principles of responsible and ethical AI."
(Bousquette, 2023) Better guardrails and standardized frameworks could be part
of the solution, including a governance layer meant to ensure transparency and
visibility and reduce algorithmic bias. “PepsiCo Inc.’s Chief Strategy and Transformation Officer, Athina Kanioura,
said that PepsiCo has been working with other large companies to establish this
type of industry framework. She also mentioned that PepsiCo chooses not to use
AI for certain things, including hiring decisions, because the risk of bias is
so high."
In the end, better tools for tracking and assessing bias in algorithms could be part of the answer, and several startups currently offer AI management solutions that can help businesses address this issue.
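To make the idea of a bias-assessment tool more concrete, here is a minimal sketch of one common type of check such tools might run. It is purely illustrative and not taken from the article or any specific vendor's product: the decisions, group labels, and function names are hypothetical, and the check shown is a simple demographic parity comparison, i.e., whether a model's positive-decision rate differs sharply between groups.

```python
# Illustrative sketch only: a simple demographic-parity check.
# The data, labels, and names below are hypothetical assumptions,
# not taken from the WSJ article or any specific vendor's tool.

def selection_rate(outcomes):
    """Fraction of candidates who received a positive decision (e.g., passed a resume screen)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates between any two groups.

    decisions: list of 0/1 model decisions (1 = advanced to interview)
    groups:    list of group labels (e.g., self-reported gender), same length
    """
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    per_group = {g: selection_rate(d) for g, d in by_group.items()}
    return max(per_group.values()) - min(per_group.values()), per_group

# Hypothetical screening decisions, for illustration only.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, per_group = demographic_parity_difference(decisions, groups)
print(per_group)               # e.g., {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap}")    # a large gap would flag the model for human review
```

A real governance layer would go further, for example checking multiple fairness metrics, logging model versions, and requiring sign-off before deployment, but even a check this simple shows why it is easier to catch bias when such tests are built in from the start rather than bolted on after a system is live.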
This article applies to any workplace or business that is currently using AI or plans to use it in the future. AI is already being used for a wide range of tasks, including hiring at some companies. The concern, as noted by PepsiCo's Chief Strategy and Transformation Officer Athina Kanioura, is that using AI in hiring carries a high risk of bias, and at this point in time it is simply not worth it to the company.
This article relates to me because at some point, maybe even now during my current job search, my resume and cover letter may be assessed by HR AI software, since many companies use software filters to weed out unqualified candidates and save recruiters time. The creator of that HR software may have a bias against my gender, age, or work history that could affect my ability to secure an interview, all because of bias the AI developer built into their software.
Bousquette, I. (2023, March 9). Rise of AI Puts Spotlight on Bias in Algorithms. WSJ. https://www.wsj.com/articles/rise-of-ai-puts-spotlight-on-bias-in-algorithms-26ee6cc9