
AI for Identity Standards


The OpenID Foundation recently convened a panel on Artificial Intelligence and Identity Standards. The panelists included:

  • Nancy Cam-Winget, Cisco
  • Kaelig Deloumeau-Prigent, Netlify
  • Mike Kiser, SailPoint
  • Geraint Rogers, Daon
  • Gail Hodges, OpenID Foundation (moderator)

This summary draws out general themes; comments are not attributed to individuals or their organizations.

Technology is Never Neutral

“Technology is neither good nor bad; nor is it neutral.”

– Melvin Kranzberg

People and organizations have been working to build artificial intelligence (AI) and machine learning (ML) tools for many years now. For decades, companies have used predictive AI for purposes ranging from fraud detection to security. When the pandemic hit, Moderna used AI to accelerate development of its vaccine.

On the flip side, such models can (either inadvertently or by design) deepen disparities, promote bias, or create other harms. After all, if Moderna can apply these tools for public health, bad actors can use similar models to create biological weapons.

New Tools, New Harms

AI has been present for some time, but the emergence of generative AI, coupled with the vast increase in computing power now available, has put far more tools in circulation – and into the hands of a much wider audience.

The panelists agreed that this expanding audience, while it fosters inclusion and democratizes technology, creates meaningful new risks. With open-source LLMs now freely available (e.g., via Hugging Face), would-be bad actors have extremely powerful tooling at their fingertips. Each and every benefit of AI may be mirrored in illicit industries.

Identity

[Embedded video: “A Message from Ella” (“Nachricht von Ella”) | Without Consent]

One of the most insidious risks the panel discussed is AI’s impact on Identity – not just Digital Identity, but Identity itself. Synthetic content is becoming pervasive, and the threat of viable impersonation can no longer be ignored. Quantum-resistant digital signing to validate provenance is becoming a priority for any content generated today.
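
As a concrete illustration, here is a minimal sketch of provenance signing: a content hash and creator claim are bound into a signed manifest. It uses Ed25519 from the Python cryptography library for brevity; a quantum-resistant deployment would substitute a post-quantum scheme such as ML-DSA (FIPS 204). The field names and identifiers are illustrative, not drawn from any OIDF specification.

```python
# Minimal sketch: signing a provenance manifest for generated content.
# Ed25519 is used for brevity; a quantum-resistant deployment would swap in
# a post-quantum scheme such as ML-DSA (FIPS 204). Fields are illustrative.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_provenance(content: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and creator claim into a signed manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,                # hypothetical creator identifier
        "tool": "example-generator/1.0",   # hypothetical generating tool
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

key = Ed25519PrivateKey.generate()
signed = sign_provenance(b"example generated content", "did:example:alice", key)
print(signed["signature"][:32], "...")
```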

Generative AI has changed the game; it is now a way of life in business operations. The barriers to entry are remarkably low – this has delivered amazing benefits, but those benefits are mirrored in the dark world.

AI Development

Coding is now largely an AI-facilitated activity. As developers come to rely more and more on LLMs to support their work, there is a risk that expertise – particularly in cybersecurity – will wane at the exact moment organizations like CISA are calling for more focus on it. Furthermore, research suggests that AI-assisted code is substantially less secure – yet developers using AI assistants wrongly believe their code is more secure (see “Do Users Write More Insecure Code with AI Assistants?”, Perry et al., Stanford). With estimates that more than 50% of new code is AI-generated today, there is substantial risk that bad code will enter the software supply chain via AI.
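
The pattern is easy to illustrate. The sketch below, in Python with the standard sqlite3 module, shows the kind of injectable query AI assistants frequently suggest, alongside the parameterized form that fixes it (the table and data are invented for the example):

```python
# Illustration of a common flaw in AI-suggested code: SQL built by string
# interpolation is injectable; the parameterized form below is the fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # Typical assistant suggestion: f-string interpolation -> SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver treats the value strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # returns every row: injection succeeds
print(find_user_secure(payload))    # returns nothing: input treated as data
```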

It is in this context that there is a risk to the OIDF and other standards bodies that build safe, interoperable protocols. When novice developers unfamiliar with the specifications seek LLM guidance on OIDF standards, they may receive poor implementation guidance – or outright bad code.
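
For example, LLM-generated OpenID Connect snippets often skip the ID Token checks the specifications require. The following minimal sketch, assuming the PyJWT library and placeholder values for the issuer, client ID, and signing key, shows the validations such code most commonly omits:

```python
# Minimal sketch of ID Token validation per OpenID Connect Core – the checks
# naive LLM-generated snippets most often skip. Uses PyJWT (pip install pyjwt);
# the issuer, client_id, and signing key below are illustrative placeholders.
import jwt

def validate_id_token(id_token: str, signing_key, expected_nonce: str) -> dict:
    claims = jwt.decode(
        id_token,
        signing_key,                      # provider's published JWKS key
        algorithms=["RS256"],             # pin algorithms; never accept "none"
        audience="my-client-id",          # aud must contain our client_id
        issuer="https://op.example.com",  # iss must match the discovery document
    )  # jwt.decode also rejects expired tokens (exp) by default
    if claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch: possible replay")
    return claims
```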

Outsourced Ethics

As more and more thought gets outsourced to LLMs, there is a risk that humans will miss important opportunities to think critically about the ethical implications of any given work product. This may happen through efficiency-seeking, negligence, or declining human capability as specialisms shift to AI. Since all technologies are value-laden, whether by intention or by outcome (e.g., as a result of the training data and other system inputs), there is a risk that human values will be replaced by algorithmic values if AI-facilitated innovation isn’t kept in check.

What Can We Do?

There are many organizations and regulators now grappling with the challenges that emerging AI presents on a societal level. An audience poll revealed that the majority of OIDF members present agreed that the Foundation needs to keep a watchful eye on AI and act, now or in the future, to mitigate the risks it poses. Such mitigations might include:

  • Standards Development: ensure that threat models consider the latest attacks and that standards evolve to meet the new risks LLMs present for real people asserting and protecting their identities online.
  • LLM Accuracy: explore methods by which the OIDF can help ensure that LLMs produce guidance aligned with the latest specifications and best-practice configurations (a minimal sketch of one such method follows this list).
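
On the second point, one plausible approach is retrieval-augmented generation: index the current specification text and prepend the most relevant passage to the model’s context, so answers are grounded in the published specification rather than stale training data. Below is a minimal retrieval sketch using scikit-learn; the spec passages are invented placeholders, not actual OIDF text:

```python
# Minimal retrieval sketch for grounding LLM answers in current spec text:
# index passages from the latest specifications, then prepend the best match
# to the model prompt. Snippets below are placeholders, not real spec text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

spec_passages = [  # in practice: chunks of the published OIDF specifications
    "Clients MUST validate the ID Token signature before trusting its claims.",
    "The nonce value binds the ID Token to the client session.",
]

vectorizer = TfidfVectorizer().fit(spec_passages)
index = vectorizer.transform(spec_passages)

def grounded_prompt(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), index)[0]
    best = spec_passages[scores.argmax()]
    return f"Answer using only this specification text:\n{best}\n\nQ: {question}"

print(grounded_prompt("Do I need to check the ID token signature?"))
```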

We welcome your thoughts and contributions in the comments or by email to director@oidf.org.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy-preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue that enables people to assert their identity and access their data at scale – the scale of the internet – enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate.
 
Find out more at openid.net.
