What’s up squad, as some of you may or may not know, I started a doctorate program at the beginning of this semester, so I’m about two courses down the road towards Doctor Martin. I’m doing my doctorate in cybersecurity with a focus on cybersecurity education and artificial intelligence, because these are a couple of areas that I feel need the most research in order to have a net positive impact on the Black community in the future.
So on January 26th, the National Institute of Standards and Technology (NIST) launched an AI Risk Management Framework. You’ve probably heard of NIST’s Risk Management Framework (SP 800-37, with its 800-53 control catalog), but this is a framework specifically for AI. So here are the key takeaways.
The NIST AI RMF is a framework to better manage the risks that artificial intelligence poses to individuals, organizations, and society. The framework is intended to help build trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
So you may be saying, “Well, Tennisha, another risk management framework, why is this newsworthy?” It’s newsworthy because the press release acknowledges something that many frameworks don’t: the negative impacts to individuals, groups, communities, organizations, and society.
The framework is divided into two parts. The first is included to help frame the risks and outline the characteristics of trustworthy AI systems, and the second is your standard risk management framework boilerplate: four core functions called govern, map, measure, and manage.
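To make the second part concrete, here’s a minimal sketch of how an organization might track AI risk activities against those four core functions. The function names (govern, map, measure, manage) come from the framework itself; the risk-register structure, field names, and example entries are hypothetical illustrations, not NIST content.

```python
# Minimal sketch: tracking AI risk activities against the AI RMF's four
# core functions. The function names are from the framework; everything
# else here is a hypothetical illustration.
from dataclasses import dataclass, field

CORE_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskEntry:
    description: str
    function: str          # which AI RMF core function this falls under
    status: str = "open"

    def __post_init__(self):
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def add(self, description, function):
        self.entries.append(RiskEntry(description, function))

    def by_function(self, function):
        return [e for e in self.entries if e.function == function]

register = RiskRegister()
register.add("Document demographic makeup of training data", "map")
register.add("Audit model outputs for disparate impact", "measure")
register.add("Assign accountability for model decisions", "govern")
print(len(register.by_function("measure")))  # 1
```

The point of a structure like this is simply that every risk activity lands under exactly one of the four functions, which is the organizing idea of the framework’s second part.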
So what they’re not saying is that the bias that exists in AI systems can cause discrimination that may be intractable. The National Fair Housing Alliance points this out with respect to housing discrimination. What’s missing from the statements collected from the private and public sectors is any explicit mention of how bias within AI impacts women, specifically women of color. A New York Times article released in July stated that AI systems used to generate pictures amplified stereotypes about race and gender, perpetuating a bias that people with lighter skin tones hold higher-paying jobs, while people with darker skin tones were labeled “dishwasher” and “housekeeper.”
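One common way to put a number on the kind of disparity described above is demographic parity difference: the gap in favorable-outcome rates between two groups. The sketch below is purely illustrative, and the group labels and numbers are made up to show how the metric works.

```python
# Illustrative sketch: demographic parity difference, a simple bias metric.
# All numbers below are made up for demonstration purposes.
def positive_rate(outcomes):
    """Fraction of outcomes that are favorable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups.
    0.0 means parity; larger values indicate more disparity."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = favorable label, 0 = unfavorable)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% favorable

print(demographic_parity_difference(group_a, group_b))  # 0.5
```

A gap of 0.5 like this would mean one group receives the favorable label at twice the other group’s rate, which is exactly the sort of disparity the framework’s “measure” function is meant to surface.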
It’s giving very much The Help. So this framework is notable because they’ve dedicated a whole part of the framework to analyzing and framing the risks associated with biases such as these, which often come from models trained on data with inherent bias. This is a field to look out for in 2024, especially when the OpenAI reunion pics on Twitter last week were super white. My money is on a diversity-driven hiring focus within the AI space in the near future (NIST, 2023).
References

NIST. (2023, January 26). NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence. Retrieved from NIST.gov: https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial