The Digital Gates: AI, IAM, and the Future of Enterprise Security

Hi, I'm Tak@, a system integrator. I'm passionate about developing new AI-driven services every day, and through this work, I've deeply realized the critical importance of "keys" and "gatekeepers" in the digital world.

Astonishingly, we're already in an era where a single access-control mistake at a small or medium-sized business can lead to hundreds of millions of yen in losses.

This indicates that cybersecurity is no longer just an issue for specific experts; it's a pressing concern directly linked to our daily operations and, ultimately, the very survival of our companies.

How AI is Changing the Landscape of Authentication and Access Management

As digitalization progresses, we access countless systems and pieces of information. Essential to securing this access is "Authentication and Access Management," commonly known as IAM (Identity and Access Management).

What is IAM: The "Gatekeeper" of the Digital Society

IAM refers to the organizational and technical mechanisms for managing who (Identity) can access what (Access Rights) under what circumstances. Think of it as a "gatekeeper" in the digital world, ensuring that only those with the correct "key" (authentication) can enter permitted "rooms" (systems and information).

This gatekeeper doesn't just manage individual login information. It's responsible for properly granting and revoking access rights throughout an employee's entire lifecycle—from hiring, to transfers, to termination—managing those change histories, and centrally controlling access to various systems, applications, and network resources within the organization.

Traditional IAM primarily focused on such static, rule-based management.
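
To make the contrast concrete, here's a minimal sketch of that static, rule-based model. The roles, users, and resources are hypothetical; real IAM suites layer lifecycle management and audit trails on top of a mapping like this.

```python
# Minimal sketch of a static, rule-based IAM check (all names hypothetical).
# Traditional IAM answers one question: does this identity hold a role
# that is explicitly mapped to the requested resource and action?

ROLE_PERMISSIONS = {
    "sales-rep":  {("crm", "read"), ("crm", "write")},
    "accountant": {("ledger", "read"), ("ledger", "write"), ("crm", "read")},
}

USER_ROLES = {
    "alice": {"sales-rep"},
    "bob":   {"accountant"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Static check: no context, no history, just role-to-permission rules."""
    return any(
        (resource, action) in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "crm", "write"))    # True
print(is_allowed("alice", "ledger", "read"))  # False
```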

The Benefits AI Brings to IAM: The Emergence of the "Smart Gatekeeper"

However, with the advent of AI, this "gatekeeper" is becoming exponentially smarter. AI analyzes vast amounts of data, uncovering patterns and contexts that humans often overlook, thereby bringing new value to the IAM process.

For instance, AI can assist in determining whether an access request or update is truly appropriate. It supports more confident decision-making by considering context, such as whether the requested permissions are already granted to other employees in the same department or with similar duties, or if there's a history of the request being denied.

I see immense potential in AI's ability to provide this "contextual understanding."

Furthermore, when new hires or transferred employees wonder "which systems they should access," AI can infer the necessary permissions from their job descriptions and create personalized recommendation lists. This allows employees to access the tools they need without confusion, significantly reducing the burden on IT departments.
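
As a rough illustration of that peer-comparison idea, here's a minimal sketch that recommends the permissions held by most colleagues in the same department. The data model is hypothetical, and a real system would also weigh job descriptions and past approval or denial history.

```python
# Sketch of peer-based permission recommendation (hypothetical data model).
# Suggest to a new hire the permissions most common among department peers.
from collections import Counter

EMPLOYEES = [
    {"name": "alice", "dept": "sales", "perms": {"crm-read", "crm-write", "quote-tool"}},
    {"name": "bob",   "dept": "sales", "perms": {"crm-read", "crm-write"}},
    {"name": "carol", "dept": "sales", "perms": {"crm-read", "quote-tool"}},
]

def recommend_permissions(dept: str, threshold: float = 0.5) -> list[str]:
    """Suggest permissions held by at least `threshold` of the department's peers."""
    peers = [e for e in EMPLOYEES if e["dept"] == dept]
    counts = Counter(p for e in peers for p in e["perms"])
    return sorted(p for p, c in counts.items() if c / len(peers) >= threshold)

print(recommend_permissions("sales"))  # ['crm-read', 'crm-write', 'quote-tool']
```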

Of particular note is the use of Generative AI. The process of requesting permissions is often complex and opaque: specialized jargon and acronyms abound, and business roles may not be adequately described.

This is where Generative AI excels. As a chatbot, it can ask employees in natural language what permissions they need, suggest appropriate options, or narrow down choices by asking further questions if there are too many.

And, once the correct permission is found, the chatbot can even submit the request directly on behalf of the employee. When I first encountered PKI and truly felt the importance of security, I couldn't have imagined such a future.
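
Here's a toy sketch of that narrowing-down flow. The permission catalog is hypothetical, and a crude keyword match stands in for the language model, so only the conversational logic is visible.

```python
# Toy sketch of the permission-request chatbot flow (hypothetical catalog).
# A production assistant would use an LLM for intent parsing; keyword
# matching stands in here so the narrowing logic is easy to follow.

CATALOG = {
    "crm-read":    "Read customer records in the CRM",
    "crm-write":   "Edit customer records in the CRM",
    "ledger-read": "View the finance ledger",
}

def keywords(text: str) -> set[str]:
    # Crude stop-word filter (drop short words); an LLM would do real parsing.
    return {w for w in text.lower().split() if len(w) > 3}

def find_candidates(request: str) -> list[str]:
    return [perm for perm, desc in CATALOG.items()
            if keywords(request) & keywords(desc)]

def chatbot_turn(request: str) -> str:
    candidates = find_candidates(request)
    if not candidates:
        return "I couldn't find a matching permission. Can you describe the task?"
    if len(candidates) > 1:
        return f"Did you mean one of these: {candidates}? Do you need to edit data, or only view it?"
    return f"Submitting a request for '{candidates[0]}' on your behalf."

print(chatbot_turn("I need to edit customer records"))  # two matches -> asks a follow-up
print(chatbot_turn("view the finance ledger"))          # one match -> submits the request
```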

Generative AI can also rephrase unclear permission descriptions, such as business role explanations, into more understandable and consistent language for the target audience. This is expected to reduce misunderstandings between IT and business departments, leading to smoother operations.

AI also possesses the ability to detect unusual access patterns and behaviors by analyzing existing data. This is crucial for early detection and response to unauthorized system access or insider threats. It's like a seasoned gatekeeper noticing suspicious movements from the daily flow of visitors and their behavior.
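
As a minimal illustration, the sketch below flags logins whose hour of day deviates sharply from a user's baseline. The log format is hypothetical, and production systems combine far more signals (location, device, resource, request volume) with much richer models.

```python
# Minimal sketch of access-pattern anomaly detection (hypothetical logs).
# Flag logins whose hour-of-day is far outside a user's usual pattern.
from statistics import mean, stdev

login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # the user's usual login hours

def is_anomalous(hour: int, history: list[int], z_threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

print(is_anomalous(9, login_hours))  # False: within the normal pattern
print(is_anomalous(3, login_hours))  # True: a 3 a.m. login is far off baseline
```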

These AI applications streamline IAM processes, transform risk management into a more proactive endeavor, and enhance document management efficiency, among many other benefits.

Digital Delegation of Trust and Responsibility: Coexistence with AI Agents

As AI becomes deeply integrated into our lives and work, "AI agents" are gaining attention. These are AI systems capable of autonomously pursuing complex goals and acting on behalf of users under limited instructions. They are already active in diverse fields, including code generation, system troubleshooting, and data analysis.

What are AI Agents: The Birth of Your "Digital Twin"

AI agents act like our "digital twins," performing specific tasks on our behalf. However, the fact that these "digital twins" act for us fundamentally redefines the nature of "trust" and "responsibility" in the digital world.

For example, if an AI agent interacts with a web service or conducts a transaction on our behalf, it becomes crucial whether its actions truly align with our intentions, and who is accountable if something goes wrong.

"Delegated Authentication" – A New Challenge

This is where the concept of "delegated authentication" emerges. This refers to a mechanism that allows a human to authorize an AI agent to perform specific actions on their behalf.

A research paper on arXiv proposes extending the existing OpenID Connect (OIDC) framework with AI agent-specific credentials to enable secure delegation.

This framework primarily utilizes three types of "tokens":

  • User ID Token: This is the same type of token we typically use when logging in, indicating the identity of the human user.
  • Agent ID Token: This token identifies the AI agent itself. It may include the agent's unique identifier, capabilities, and limitations as metadata.
  • Delegation Token: This is the most crucial new element. With this token, the delegating human explicitly authorizes an AI agent to "act on my behalf for the following actions." It references both the User ID Token and the Agent ID Token, specifies the agent's purpose, scope of action, and validity period, and is digitally signed by the delegator to prevent forgery. This allows third-party services accessed by the AI agent to verify that the agent holds legitimate authority (a simplified sketch follows below).
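
To make the structure tangible, here's a simplified sketch of how a delegation token might reference the other two tokens and carry a signature. The field names are illustrative, not the paper's exact schema, and a real implementation would use standard JWTs with asymmetric keys rather than the shared-key HMAC used here.

```python
# Simplified sketch of a delegation token (illustrative fields only).
# A real system would use standard JWTs and asymmetric signatures;
# standard-library HMAC stands in here to keep the sketch self-contained.
import hashlib
import hmac
import json
import time

DELEGATOR_KEY = b"delegator-secret-key"  # stand-in for the user's signing key

def sign(claims: dict, key: bytes) -> str:
    body = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

delegation_token = {
    "user_token_ref":  "user-id-token-abc123",   # links to the User ID Token
    "agent_token_ref": "agent-id-token-xyz789",  # links to the Agent ID Token
    "purpose": "book travel for approved trips",
    "scope": ["travel-booking:create", "calendar:read"],
    "expires_at": int(time.time()) + 3600,       # valid for one hour
}
delegation_token["signature"] = sign(delegation_token, DELEGATOR_KEY)

def verify(token: dict, key: bytes) -> bool:
    """A relying service checks the signature and expiry before trusting the agent."""
    claims = {k: v for k, v in token.items() if k != "signature"}
    return (hmac.compare_digest(sign(claims, key), token["signature"])
            and time.time() < token["expires_at"])

print(verify(delegation_token, DELEGATOR_KEY))  # True
```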

Such a token-based framework can also be linked with the W3C's "Verifiable Credentials (VC)" standard. VCs offer a flexible mechanism for securely exchanging identity and delegation data in a decentralized environment, unbound by specific transport protocols.

Locus of Responsibility and "Contextual Integrity"

As AI agents gain the ability to act autonomously, the question arises of who should bear responsibility if their actions lead to unintended consequences.

In a recent Air Canada case, the airline argued it wasn't responsible for information provided by its website chatbot, but the tribunal ruled that the chatbot was part of the airline's own website and held the company responsible for the information it provided.

This is a critically important precedent, suggesting that companies may be liable for the actions of their AI agents. Through such cases, I've keenly felt how closely legal frameworks and technical mechanisms need to be intertwined.

Here, the concept of "contextual integrity" becomes vital. It asks whether the context of an information flow (who is involved, what information is shared, under what conditions, and within what social norms) is respected when an AI agent acts.

It requires clearly distinguishing the scope within which AI agents can make autonomous decisions from situations requiring human oversight or intervention, and ensuring transparency. The discussions in this field truly feel like a collaborative effort between law and technology.

New Challenges in Permissioning: Merging Natural Language and Structured Rules

When instructing an AI agent, "Please perform this task on my behalf," we often instinctively use natural language. However, this seemingly convenient approach harbors significant challenges.

The Difficulty of Granting Permissions to "Imperfect" AI

The scope of an AI agent's actions is incredibly flexible, and precisely defining the "scope limitations" (how much to permit) for them is an extremely difficult challenge. While natural language instructions are easy for humans to understand, they often retain ambiguity for AI, potentially leading to misunderstandings.

For example, given an instruction like "Allow read and write access to the directory for Project Alpha, but don't allow access to the finance folder," it isn't easy for an AI to determine exactly which "Project Alpha" directory or "finance folder" is meant, and then to translate that into a system-executable form.

More seriously, there's the risk of AI actions being exploited through "prompt injection attacks." Malicious third parties could send cleverly crafted natural language commands to an AI, causing it to perform unintended actions. This can also open the door to threats like tampering with AI identities that lack strong digital signatures, or "instance impersonation," in which legitimate identities are misused.

These issues demonstrate that natural language alone cannot be a reliable security tool. The actions of AI agents must be controlled in a clear and auditable manner.

The Necessity of a Hybrid Approach: Balancing Security and Usability

As a solution to this challenge, research proposes a "hybrid approach." This method combines familiar natural language instructions with robust, machine-readable structured authorization languages (like XACML).

The basic idea is as follows:

  1. Natural Language Instructions: Users convey the purpose and scope of the task to the AI agent in natural language, which is easy for humans to understand.
  2. AI Conversion: The AI system, or the AI agent itself, converts these natural language instructions into unambiguous, structured authorization rules, such as XACML policies. I feel this conversion process requires meticulous precision, like solving a puzzle.
  3. User Review and Approval: The converted structured rules are reviewed, potentially modified, and finally approved by the user. This prevents misinterpretations or misuse by AI and ensures that human intent is accurately reflected.

The greatest advantage of this approach is its focus on "resource scope." By granting AI agents permissions for specific resources, such as "access to this file" or "read this database," their possible task range is implicitly limited. This reduces the risk of AI agents accessing unauthorized resources and enhances defense against malicious prompt injection.
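
To make the pipeline concrete, here's a toy sketch built around the earlier "Project Alpha" instruction: a structured rule the AI might propose (plain data as a simplified stand-in for XACML), a human approval gate, and deny-by-default enforcement scoped to named resources. All paths and names are hypothetical.

```python
# Toy sketch of the hybrid approach (hypothetical names; a dict as a
# simplified stand-in for XACML). The AI converts natural language into
# a structured rule; a human approves it; enforcement then follows the
# rule, not the original ambiguous text.

instruction = ("Allow read and write access to the directory for Project Alpha, "
               "but don't allow access to the finance folder")

# Step 2: the AI's proposed structured rule (shown here as a literal;
# a real system would generate this from the instruction above).
proposed_rule = {
    "effect_by_resource": {
        "/projects/alpha/": {"read", "write"},
        "/finance/": set(),  # explicitly denied
    }
}

# Step 3: the user reviews (and may edit) the rule before it takes effect.
def approve(rule: dict) -> dict:
    print("Please review the generated rule:", rule)
    return rule

active_rule = approve(proposed_rule)

def is_permitted(path: str, action: str, rule: dict) -> bool:
    """Deny by default; only explicitly scoped resources are reachable."""
    for prefix, actions in rule["effect_by_resource"].items():
        if path.startswith(prefix):
            return action in actions
    return False

print(is_permitted("/projects/alpha/spec.md", "write", active_rule))  # True
print(is_permitted("/finance/q3.xlsx", "read", active_rule))          # False
print(is_permitted("/hr/salaries.csv", "read", active_rule))          # False: out of scope
```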

This hybrid approach is critically important for balancing security and usability. By leveraging the flexibility of natural language while guaranteeing final access control with strict rules, the path to safely and effectively utilizing AI agents will open up.

Secrets to a Successful IAM Program in the AI Era

An IAM program isn't just an IT project. It's a complex undertaking that forms the core of an organization's business processes, managing access rights according to changes in relationships from an employee's onboarding to their departure, role changes, and transfers. Due to this complex nature, many IAM programs falter midway or fail to deliver expected results.

Treating IAM as a "Program," Not a "Project"

One of the biggest reasons IAM programs fail is treating them as one-off "projects." Projects have clear beginnings and ends, completed upon delivery of a finished product. However, IAM should be driven as a "program" that continuously adapts to evolving business needs and technological changes.

This means a long-term perspective and a multi-phase roadmap are essential. Without a clear roadmap, stakeholders in various departments might become frustrated, unsure when their needs will be met. This often leads to the introduction of individual "point solutions" to solve immediate problems. This creates a vicious cycle of redundant IT infrastructure investment, increased operational costs, and future large-scale system replacements.

We should view IAM not merely as a technology implementation, but as a "foundation supporting organizational growth." It's crucial to establish a clear roadmap and prioritize based on the organization's pressing challenges, technical complexity, and the business benefits delivered. This prevents wasteful investment and generates value in the most effective way.

Establishing Strong "Sponsorship"

Strong "sponsorship" within the organization is indispensable for an IAM program's success. Since IAM is a cross-functional initiative spanning many departments—not just IT, but also HR, accounting, compliance, etc.—budgets are typically drawn from multiple departments. Therefore, prioritizing and coordinating among various departmental requirements is necessary.

In such situations, the presence of a strong "executive sponsor" acts as the compass for the IAM program. The sponsor must be a respected and influential leader within the organization, deeply committed to the program's success. They mediate inter-departmental discussions, build consensus, communicate program progress across the organization, and secure necessary resources.

I've seen countless times how the sponsor's enthusiasm determines the success or failure of a project. If sponsors are disengaged or don't prioritize IAM, the program will be marginalized, stakeholders won't cooperate when needed, and as a result, progress will stagnate. Those of us who drive IAM programs must engage sponsors not just as funders, but as catalysts for organizational transformation.

Pursuing Convenience and Security: The Importance of User Experience (UX)

No matter how robust a security system is, it will become ineffective if it's difficult to use. In IAM programs, end-user experience (UX) is extremely important. Complex password requirements, cumbersome login processes, and confusing permission request flows not only reduce user productivity but can also induce behaviors that bypass security policies.

Successful IAM programs actively involve end-users from the planning stage. By understanding their daily work patterns and challenges and designing with usability in mind, security measures can be adopted without resistance. It's crucial to conduct thorough User Acceptance Testing (UAT) and to recognize that any inconvenient points can become "blockers" that hinder progress.

Furthermore, modern IAM tools often incorporate best practices based on years of accumulated knowledge. Therefore, instead of heavily customizing tools to match existing business processes, companies should consider aligning with the tools' standard functionality as much as possible. Unnecessary customization increases system complexity, raises operation and maintenance costs, and makes future upgrades difficult. It's also necessary to carefully evaluate whether automating a given task is truly cost-effective. I always believe that "simplicity is best."

Multi-Layered Security: IAM and Related Technologies

IAM doesn't function in isolation; it builds a stronger defense posture by collaborating with other security technologies.

For example, the integration of IAM and MDM (Mobile Device Management) is particularly important. While IAM verifies "who" can access, MDM ensures "which device" is secure. When both are linked, even if a device is lost or compromised, access from that device can be immediately blocked, preventing information leakage. Failing to link them leaves devices with access rights unmanaged, a direct path to serious damage.
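
Here's a minimal sketch of that joint decision, with hypothetical user and device stores: access requires both a valid identity (IAM) and a compliant device (MDM).

```python
# Minimal sketch of an IAM + MDM joint access decision (hypothetical stores).
# IAM answers "is this the right person?"; MDM answers "is this device
# healthy?"; access requires both.

AUTHENTICATED_USERS = {"alice"}  # users with a valid session (IAM side)
DEVICE_STATUS = {
    "laptop-42": "compliant",
    "phone-07":  "lost",         # reported lost -> blocked immediately (MDM side)
}

def grant_access(user: str, device_id: str) -> bool:
    return user in AUTHENTICATED_USERS and DEVICE_STATUS.get(device_id) == "compliant"

print(grant_access("alice", "laptop-42"))  # True
print(grant_access("alice", "phone-07"))   # False: device is compromised
```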

Furthermore, AI/ML is expected to be utilized in API (Application Programming Interface) security. APIs act as "bridges" connecting data and functions between different systems, and their security is extremely crucial. AI/ML can analyze API access patterns and token metadata (e.g., expiration date, issuance time, user roles) to detect abnormal behavior in real-time, helping defend against unauthorized API usage.
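
As a rough illustration, the sketch below applies hand-written rules to hypothetical token metadata; an ML layer would learn such thresholds from real traffic instead of hard-coding them.

```python
# Sketch of rule-based checks on API token metadata (hypothetical fields);
# an ML model would learn these thresholds from traffic rather than
# hard-coding them as done here.
import time

def token_flags(token: dict) -> list[str]:
    flags = []
    now = time.time()
    if token["expires_at"] < now:
        flags.append("expired token used")
    if now - token["issued_at"] < 5 and token.get("requests_made", 0) > 100:
        flags.append("burst of calls seconds after issuance")
    if "admin" in token.get("roles", []) and token.get("client") == "unknown":
        flags.append("admin role presented by unrecognized client")
    return flags

suspicious = {
    "issued_at": time.time() - 2,
    "expires_at": time.time() + 3600,
    "requests_made": 500,
    "roles": ["admin"],
    "client": "unknown",
}
print(token_flags(suspicious))
# ['burst of calls seconds after issuance', 'admin role presented by unrecognized client']
```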

However, authentication protocols like OAuth can also be misused by threat actors. According to a Microsoft report, attackers are exploiting OAuth applications to gain unauthorized access to organizational email systems and automate financially driven attacks like Business Email Compromise (BEC). To counter this, mechanisms for detailed monitoring of application creation or modification, sign-in activities, and email sending history are essential to identify anomalies early. IAM programs must function as part of a comprehensive defense strategy, collaborating with these diverse security layers.

Data Privacy and Ethics: Building Trust in the AI Era

As AI learns from vast amounts of data and becomes deeply involved in our lives, data privacy and the ethical challenges posed by AI have become unavoidable and crucial topics.

Recognizing Ethical Challenges

AI systems become more effective the more data they rely on, but there's always a trade-off: individual privacy may be sacrificed. How to strike this balance is a major challenge today.

In particular, AI algorithms can inadvertently inherit and amplify existing biases and discrimination present in training data. To avoid unfair outcomes based on attributes like gender, race, or age, it's essential to embed "fairness" and "non-discrimination" from the AI system's design phase.

Moreover, many AI systems have "black box" decision-making processes, making it difficult to understand why they reached certain conclusions. To build trust in AI and promote responsible AI development, "transparency" and "accountability" are crucial.

Above all, individuals should have the right to "consent" to and "control" their own data. This is a fundamental ethical principle that respects individual autonomy and choice.

Practicing Best Practices

To address these ethical challenges and ensure data privacy in AI systems, several best practices are required:

  1. Data Minimization: Collect and process only the data necessary for the AI system. Minimizing the collection and retention of unnecessary data reduces privacy risks.
  2. Consent and Transparency: Obtain explicit and informed consent from individuals for the collection and use of their personal data. Provide clear and understandable information about how data will be processed, its purpose, and potential risks.
  3. Access and Control: Guarantee individuals the right to access, correct, and delete their personal data. Also, provide options to opt-out of data use by AI systems.
  4. Privacy by Design: Incorporate privacy principles and safeguards from the early stages of AI system design and development. Security and privacy should be considered essential elements from the outset, not as an afterthought. Standards like ISO 42001 provide a framework to ensure safety, consistency, and accountability in AI.
  5. Anonymization and Pseudonymization: Apply techniques to remove or obscure personally identifiable information while maintaining the data's utility for the AI system (a minimal sketch follows this list).
  6. Ethical AI Development: Adhere to ethical principles such as fairness, accountability, transparency, and respect for human rights throughout the entire AI development and deployment process.
  7. Continuous Monitoring and Auditing: Regularly monitor and audit AI systems to ensure compliance with data privacy regulations and best practices. Promptly address any identified issues or vulnerabilities. Obtaining third-party certifications like ISO 27001 is also effective in demonstrating a commitment to data privacy and security.
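
Expanding on item 5, here's a minimal pseudonymization sketch that replaces a direct identifier with a salted hash, so records stay linkable for the AI system without naming the individual. The field names and salt handling are illustrative only.

```python
# Minimal sketch of pseudonymization: replace a direct identifier with a
# salted hash so records remain linkable but no longer name the person.
# Field names and salt handling are illustrative.
import hashlib

SALT = b"rotate-and-store-this-secret-separately"

def pseudonymize(value: str) -> str:
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

record = {"email": "taro@example.com", "dept": "sales", "logins_last_30d": 42}
safe_record = {**record, "email": pseudonymize(record["email"])}

print(safe_record)
# {'email': '<16-hex-char pseudonym>', 'dept': 'sales', 'logins_last_30d': 42}
```

Note that this is pseudonymization, not anonymization: anyone holding the salt can re-link records, so the salt itself must be protected and rotated.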

Through these efforts, companies can not only comply with legal requirements but also demonstrate their commitment to ethical practices, build trust with stakeholders, and contribute to the responsible development of AI technology. I deeply feel that handling data requires not just technical skill, but also a profound sense of ethics.

Recommendations and Conclusion for the Future

AI holds immeasurable potential to transform the field of authentication and access management (IAM). It's not merely a tool for efficiency; it has the power to redefine security itself and fundamentally change corporate business.

As we've seen, AI is evolving into a "smart gatekeeper" that assists IAM decision-making, automates permission granting, presents complex information clearly, and even detects suspicious behavior.

Meanwhile, the new concept of "delegated authentication" for AI agents indicates the indispensable need for a robust technical and legal framework to build a chain of digital trust and responsibility. The "hybrid approach," which circumvents security risks arising from natural language ambiguity and balances usability with security, can be seen as one concrete solution.

However, for these technologies to truly deliver value, organizations must view IAM not merely as an IT project but as an ongoing "program" deeply linked to business strategy, driven by strong leadership. Pursuing end-user convenience and fostering multi-layered collaboration with related security technologies are also essential. And above all, data handled by AI must always be accompanied by ethical considerations, such as individual privacy and social fairness.

Authentication and access management in the AI era will not only secure systems but also enhance overall organizational trustworthiness, create new business opportunities, and enrich our own ways of working.

Now, based on today's insights, how do you want to contribute to the future of authentication and access management that AI is opening up? And how will you prepare for the "invisible threats" around you?

Follow me!
