
NSTIC Pilot Common Considerations 5: An Identity Ecosystem Functional Model for the Modern Market

In this fifth blog in our series highlighting some of the common questions and observations that have emerged as the NSTIC pilots continue to move forward, we build upon the previous blogs on terminology and trust frameworks to discuss an identity ecosystem functional model. As the NSTIC pilots have developed, we have noted that what we initially thought was a “terminology disconnect” was, in most cases, caused by the fact that in today’s market the same functions can be implemented by different participants in the identity ecosystem, which was confusing the terminology regarding actors and their functional roles. We believe that an understanding of the implementation characteristics of identity systems’ functions leads to a clear understanding of their adherence to the NSTIC Guiding Principles. We are not alone in this belief; at the recent IDESG Plenary meeting at MIT, several Committees engaged in a similar dialog. Building on those discussions, this blog post follows up with a discussion of an identity ecosystem functional model that recognizes the different roles ecosystem participants can and do play in the current market. This blog focuses on the functions in an identity system, and on how the implementation of such functions by various participants—such as users, identity providers, attribute providers, attribute verifiers, intermediaries, and relying parties—affects the overall system characteristics.

Identity Ecosystem Functional Model

Two related Common Considerations discussed in this blog are:
  1. Within existing identity systems and trust frameworks, there has historically been a lack of recognition of the separated functions of identity proofing and authentication. More recently, intermediary technology components between identity services and relying parties have evolved as additional identity system participants, and have similarly not yet been formally recognized by identity trust frameworks.
  2. The above observation on identity proofing and authentication functions, and on intermediary technology components, is symptomatic of a general lack of a clear functional model for identity ecosystems, one that distinguishes between the various participants and functions.
Functional Model: Recent Trends

The NSTIC pilots encompass a wide range of technologies and capabilities, and support a number of different use cases. Across the pilots it is becoming clear that a few functions can be used to support a wide range of identity system use cases, but the nature of such functions continues to evolve. As an example, identity proofing and authentication are two key functions that in some cases are being separated into “atomic” functions (compared to a “traditional” Credential Service Provider model) that support identity systems. This functional atomization has been driven largely by vendor specialization and commercial forces, but also provides additional architectural flexibility (with some attendant challenges) to support security, privacy, interoperability, and ease of use. Such an architectural capability has been recognized by Trust Framework Providers such as the Kantara Initiative, although it has not yet been incorporated into their certification scheme.

As a further example of functional atomization in the pilots, we have observed that attribute provision and verification are used to support the identity proofing and authentication functions, as discussed in a previous blog in this series. In analyzing this trend of identity function atomization, we gravitate towards an analysis of the binding mechanisms between the functions, as has been discussed by the Kantara Initiative and Anil John. In addition to this binding mechanism, another consequence of functional atomization is the growing use of an intermediary technology layer to orchestrate transactions between the various identity services and relying parties. These topics are discussed further in the following sections.

Identity System Functions and Participants

We can reduce the operations in an identity ecosystem to a few basic functions:
  • Identity Proofing: Determination of the underlying confidence that a set of attributes ties a user to their identity.
  • Authentication: Determination of the level of confidence that the user is the rightful owner of a credential.
  • Binding: The linking between the identity proofing and authentication functions.
Figure 1. Basic overview of identity systems.

With reference to Figure 1 above and to the previous blog on terminology, authentication and identity proofing together support the degree of confidence in an individual’s identity at the time of entitlement provision. The binding between these functions can be seen in the broader context of identity and credential lifecycle management. For example, NIST SP 800-63-1 discusses separated functions, and reiterates that the level of assurance in any overall sequence of identity functions is equal to the lowest level of assurance – the so-called “low watermark” – of any one function or binding. In this blog we focus on how the implementation of these functions, and of the binding function in particular, by a given participant (e.g., CSP, RP, or intermediary) has several consequences (such as transaction linkability, consent flows, credential branding, and interoperability) that influence how a system adheres to the NSTIC Guiding Principles.
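To make the “low watermark” rule mentioned above concrete, here is a minimal sketch in Python (our own illustration, not taken from SP 800-63-1; the function names and assurance values are hypothetical). The overall level of assurance of a chain of identity functions and bindings is simply the minimum assurance level of any element in that chain.

# Minimal sketch of the "low watermark" rule: the overall level of assurance
# (LOA) of a sequence of identity functions and bindings is the lowest LOA of
# any single function or binding in that sequence.
# Function names and LOA values below are hypothetical, for illustration only.

def overall_loa(chain):
    """Return the overall LOA for a chain of (name, loa) pairs."""
    return min(loa for _name, loa in chain)

chain = [
    ("identity_proofing", 2),  # moderately strong proofing
    ("binding", 1),            # weak link between proofing and the credential
    ("authentication", 3),     # strong multi-factor credential
]

print(overall_loa(chain))  # prints 1: the weak binding sets the low watermark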
Figure 2 – Various identity architectures: a) “classic” Credential Service Provider (CSP) model; b) the Relying Party performs identity proofing and binding; c) an intermediary is used to produce “blind” operations between the CSP and the RP; d) an intermediary is used along with relying party identity proofing; and e) the intermediary performs identity proofing and binding. In all cases, the dashed line in the figure depicts a boundary that incorporates the participant(s) (indicated in red font) that conduct(s) identity proofing and binding.

Figure 2 above depicts five different scenarios that are currently being deployed. Figure 2a depicts the “traditional” credential service provider model, where identity proofing and authentication are delivered by the same provider and so the binding is inherent in the service. Figure 2b depicts the case where each relying party performs its own identity proofing and also maintains the binding to the authentication service being used. Figure 2c depicts the model in which an intermediary is used between the CSP and RP services. This architecture allows RPs to interface with a number of CSPs without the effort and cost of integrating each of them, and is the basis for the U.S. government’s upcoming Federal Cloud Credential Exchange (FCCX). Architectures using such intermediary layers can also be used to render the operations between participants blind; in such a case, the CSPs and the RPs do not know who is performing an authentication or transaction, respectively. Figure 2d depicts the scenario where an intermediary is used to provide an abstraction over a number of different authentication means, but each relying party still performs its own identity proofing. This architecture forms the basis for the Canadian Cyber Authentication Renewal Project, in conformance with the Canadian federal Privacy Act. The ability for RPs to perform identity proofing allows them either to “know their customer” in accordance with legislative requirements, or to use compensating factors to enhance the authentication process prior to the provision of a service. Lastly, Figure 2e depicts a system in which an intermediary provides both authentication and identity proofing; this scenario is applicable when an intermediary wishes to offer a range of identity services in an “a la carte” manner and/or use compensating controls to create “enhanced” credentials.

Note that these five scenarios are intended to be illustrative only, and they do not represent an exhaustive set. For example, it was noted previously that the identity proofing and authentication functions can be implemented at a more atomic level using attribute provision and attribute verification. It is interesting to note how the different scenarios dictate how an identity system adheres to the NSTIC Guiding Principles; this is particularly influenced by the operation of the binding function.
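To keep the scenarios straight, the short sketch below (our own hypothetical Python illustration; the scenario labels and role assignments are inferred from the Figure 2 caption and do not represent any pilot implementation) records which participant performs identity proofing, authentication, and binding in each case, and what role the intermediary plays, so the architectures can be compared side by side.

# Hypothetical encoding of the five Figure 2 scenarios (illustration only):
# which participant performs each function, and what the intermediary does.
SCENARIOS = {
    "2a_classic_csp": {
        "identity_proofing": "CSP", "authentication": "CSP", "binding": "CSP",
        "intermediary": None,
    },
    "2b_rp_proofing": {
        "identity_proofing": "RP", "authentication": "CSP", "binding": "RP",
        "intermediary": None,
    },
    "2c_blind_intermediary": {
        "identity_proofing": "CSP", "authentication": "CSP", "binding": "CSP",
        "intermediary": "blinds operations between CSPs and RPs",
    },
    "2d_intermediary_rp_proofing": {
        "identity_proofing": "RP", "authentication": "CSP", "binding": "RP",
        "intermediary": "abstracts multiple authentication means",
    },
    "2e_intermediary_full": {
        "identity_proofing": "intermediary", "authentication": "intermediary",
        "binding": "intermediary",
        "intermediary": "offers identity proofing and authentication 'a la carte'",
    },
}

# Compare who holds the proofing-to-authentication binding in each scenario.
for name, roles in SCENARIOS.items():
    print(f"{name}: binding held by {roles['binding']}")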
Identity Ecosystem Evaluation and the NSTIC Guiding Principles

The functional model of any given identity solution can greatly influence its adherence to the NSTIC’s four Guiding Principles. A successful transition to the identity ecosystem envisioned in the NSTIC will depend on models that enable an appropriate balance of, and an increased adherence to, all four NSTIC Guiding Principles over time.

NSTIC Guiding Principle: Privacy-Enhancing and Voluntary

From a privacy perspective, a functional model allows an evaluator to consider how the FIPPs may best be implemented by illuminating how personal data flows or resides among the different components. For example, as intermediaries take on the role of orchestrating various authentication and identity proofing functions, it is critical to understand considerations such as user interface flow, consent flow, redress, and the principles of anonymity, unobservability, and unlinkability.

NSTIC Guiding Principle: Secure and Resilient

From a security perspective, the functional model clearly articulates the “target of evaluation” and delineates, for each component interface, where vulnerabilities could be present. The security evaluation methodology that is part of the IDESG Security Committee work charter will likely be based on such a functional model, and a working group in the IDESG is currently investigating notional functional models. In addition, recent efforts to determine the underlying confidence of atomized functions, such as attribute verification, will help to quantify how such functions can be appropriately combined.

NSTIC Guiding Principle: Interoperable

The various scenarios depicted in Figure 2 above illustrate how the binding operation between identity proofing and authentication can be implemented by different participants. This has a profound effect on credential interoperability. For example, a fully integrated credential as depicted in Figure 2a offers full credential interoperability (assuming technical compatibility), whereas the resultant credential created in Figure 2b is interoperable in the sense of reducing the number of credentials a user holds, but the increased strength of the identity proofing conducted by a Relying Party may not be portable to other Relying Parties. We are seeing the scenario depicted in Figure 2b being used in some of the pilots due to a reluctance by relying parties to trust the identity proofing performed by other participants (or due to a mandate to do their own identity proofing). Lastly, the scenario of Figure 2e facilitates the “enhancement” of credentials by an intermediary, but such credentials are “owned” by the intermediary that created them.

NSTIC Guiding Principle: Cost Effective and Easy to Use

As identity functions continue to be atomized, it is clear that users and consumers will be offered more choice in light of a more competitive environment. This should help to drive costs down. The usability of an identity system is impacted by how a user is redirected across the various components, in terms of user interface continuity, continuity of branding across the ecosystem, and ease of use. Increased modularity of any system should also increase the ability of innovators to adhere to standards, increasing the prevalence of solutions that are responsive to consumers’ tastes.

Functional Model and Certification

Understanding the consequences of various functional models on the Guiding Principles is a key step towards an accreditation scheme that aligns with the NSTIC. This is especially true when identity proofing and authentication are separated, or when intermediaries are used. The boundaries between functions and participants are critical and, as discussed above, the binding between them is an important consideration that relates to all four of the NSTIC Guiding Principles.
A clear definition of a functional model will allow an accreditation scheme to be developed that is based on ownership and data flow relating to credentials, and that is inclusive of all aspects of security and privacy, with a clear understanding of interoperability and ease of use. As noted in previous blogs, such accreditation schemes should consider all actors in the identity ecosystem, including relying parties. Operators of identity trust frameworks should: 1) consider all aspects of an identity solution’s participants and functional model; 2) base their system on mutually recognized implementations of all functional components; and 3) determine the optimal scenario to be used based on their requirements for adherence to the NSTIC Guiding Principles. Note that the inclusion of mutually recognized implementations of functional components is intended to anticipate scenarios where functional components that are accredited under different Trust Frameworks can be interoperable.

As mentioned in the introduction, the importance of functional models has been highlighted and actively discussed by the IDESG at the fourth and fifth Plenary meetings. Furthermore, we anticipate that the ongoing analysis of Use Cases by the IDESG Standards Committee will reduce functionality down to a basis set of functional components, as contemplated here. This functional model can then be evaluated “through the lens” of security, privacy, and usability, as well as analyzed for interoperability. We hope that this blog post will catalyze further discussion around these topics within the IDESG, and that the IDESG will look to define recommendations to resolve these challenges. Initial questions that we’d offer to stakeholders include:
  • Should accreditation schemes include all components within an identity ecosystem, such as Relying Parties and intermediaries?
  • Should accreditation schemes be based more on functions than on the more traditional focus on actors/roles?
  • Should the IDESG TFTM Committee play an active role in defining functional models, along with Committees such as Standards, Privacy and Security?
  • How can the architectures proposed by the IDESG Use Cases be reduced to functional models and evaluated relative to the NSTIC guiding principles of Security, Privacy, Interoperability and Usability?
  • Will better standardization of identity proofing lead to greater credential interoperability?
  • Will improved credential interoperability lead to broader RP adoption and therefore a clearer business case?
  • Should the IDESG contemplate creating reference implementations of functional models that clearly address the trade-offs between security, privacy, interoperability, and ease of use so that identity framework operators can make appropriate decisions?
We have established a forum for further discussion regarding these and other related topics at the following location: https://www.idecosystem.org/content/functional-model-1

Comments

The Kantara analysis of Binding Order mechanisms has been moved to: http://kantarainitiative.org/confluence/display/idassurance/Trusted+Ide… The original work from Fall 2012, which considered how different roles in a Federated Identity pattern could be modelled, is forming the basis of new analysis on how a Trust Framework in the mode of FICAM/800-63 can be made ‘modular’. Modularity is effectively a decomposition of certifiable functions, which are assigned to specific organizations with the attendant accountabilities. The Kantara Initiative Identity Assurance Working Group aims to describe ‘modularity’ of function, and then to profile our Identity Assurance Framework to enable certification of the modules rather than monolithic certification of IDP+CM+RA functions.
Enormously important and interesting work. Coming from a financial institution background (banking), I can see that there are so many parallels and capabilities to be built upon. For example, the check clearing mechanisms for the payments industry are a classic example of federated identity management within a common set of Operating Rules that all parties are bound by. These ecosystems worldwide tend to be bound to national boundaries and are therefore an example of roles and liability management that appertain to that chapter of human endeavour; the challenge now is to translate such a model into the largely instantaneous and borderless electronic environment of the Internet.
The following was meant to be a table of functions, what they are, and how they are to be trusted, but this comment format could not render it. Reconstructed, each row reads Function Name: Function performed :: Assurance consideration.
  • Identity Provider: Provides identifier for entity :: Identity proofing at enrollment
  • Attribute Provider: Links attribute to identifier :: Credential proofing at enrollment
  • Privacy Enablement: Determines user intent :: Verification of user intent
  • Device Integrity: Identifies entity delegate :: Is device trustworthy?
  • Authentication: Binding of user device to user :: How many factors are present
  • Attribute Verification: Device sends claim to be verified :: Proofing at enrollment
  • Relying on Claims: Check chain of trust :: As appropriate for risk involved
Before it is decided how functions are split between entities, I think we need a taxonomy of functions. Here is a start. Several of the functions can be put into different entities. For example, the Privacy Enablement function could be a part of FCCX as described in part 5, or it could be in the user’s device, where the user can expect more control over the release of claims to the relying party. I will try to post more details on each function over time and talk about the natural entities to host that function.
