In this fourth blog in our series
highlighting some of the common questions and observations that have emerged as the NSTIC pilots have moved forward, we focus on the use and management of underlying attributes to support identity services.
We have encountered the following three related Common Considerations around attributes:
- Existing trust frameworks do not adequately consider or incorporate attribute providers/verifiers.
- There are some instances where a party requesting attribute provision/verification does not require all of the information provided by an attribute provider/verifier – but the requesting party has no way to request or receive less. This has presented challenges as pilot participants work to adhere to the NSTIC’s guiding principle on privacy, which emphasizes the importance of data minimization. Pilots instead are encountering an ecosystem which seems to encourage what we’ve dubbed “data promiscuity” – where attribute providers/verifiers overshare as a default.
- In attribute based systems, there is often not a clear “flow-down” of consent requirements to end-users, or a standardized mechanism to accommodate redress of user data.
The identity functions of proofing and authentication have historically been carried out sequentially within a single enterprise, as part of a series of steps (see, for example, NIST SP 800-63). However, the relative timing of identity proofing (binding a credential to an individual) and authentication (establishing confidence that a user is making a valid claim) is evolving in response to relying party needs. For example, in some cases a relying party may seek only to re-affirm the validity of a user based on a sub-set of the full set of attributes required to uniquely identify them. The sub-set chosen will depend on the transaction type and risk, and may provide sufficient authentication that the user is who they claim to be without incurring the cost of querying the complete attribute set (cost here can mean both the financial cost and the “privacy cost” of superfluous distribution of personal information). Supplementary attributes may also be used in cases where credentials have been corrupted and a bypass operation is required, or when a critical emergency situation has arisen. In other cases, attributes may augment the degree of confidence in an authentication transaction by temporarily “strengthening” an existing credential, to facilitate the authorization of a “higher-risk” transaction.
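This kind of risk-based subset selection can be sketched in a few lines of Python. Everything here is hypothetical: the risk tiers, attribute names, and mock records are invented for illustration and do not reflect any particular pilot's implementation.

```python
# Hypothetical sketch of risk-based attribute re-affirmation: a relying
# party checks only the subset of attributes warranted by transaction risk,
# escalating to a larger set for higher-risk transactions.
# All names and data below are illustrative only.

# Mock authoritative records, keyed by subject identifier.
RECORDS = {
    "user-123": {"name": "Pat Doe", "birth_date": "1980-04-02", "zip": "20899"},
}

# Illustrative risk tiers mapped to the attribute subsets they require.
SUBSETS_BY_RISK = {
    "low":    ["zip"],                        # cheap re-affirmation
    "medium": ["zip", "birth_date"],
    "high":   ["zip", "birth_date", "name"],  # step-up for risky actions
}

def reaffirm(subject_id, claimed, risk_level):
    """Check only the attribute subset warranted by the transaction risk."""
    required = SUBSETS_BY_RISK[risk_level]
    record = RECORDS[subject_id]
    return all(record.get(attr) == claimed.get(attr) for attr in required)

# A low-risk transaction needs only the zip to match:
reaffirm("user-123", {"zip": "20899"}, "low")   # True
# A high-risk one requires the full set, so the same claim fails:
reaffirm("user-123", {"zip": "20899"}, "high")  # False
```

The point of the sketch is the cost trade-off described above: the low-risk path queries (and exposes) one attribute instead of three.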
Thus, relying parties’ demand for different combinations of attributes is increasing; some may be supplied by the traditional identity provider, but others may need to be sourced from specialized and authoritative sources. In either case, attributes are typically managed in one of two ways: attribute provision or attribute verification. Attribute provision is the case where a certain set of attributes is requested and received by a party, and the requesting/receiving party applies logic to determine the validity of a user’s claim. Attribute verification, on the other hand, is simply a yes/no confirmation that a submitted set of attributes is valid. Because attribute verification is typically based on a user-driven assertion of attributes, there may be more transparency built into this approach, addressing some of the privacy concerns; however, as discussed later, actual implementations usually require closer examination to ensure that user oversight flows with the data. Note that the verification of an attribute set establishes that it is a valid set, but does not necessarily link it to an individual. To do so, some form of knowledge-based or in-person confirmation is required at some point in the identity process.
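The distinction between provision and verification can be made concrete with a minimal sketch. The function names and mock records below are our own invention, not an actual attribute-exchange API; the contrast to notice is what the requesting party learns in each case.

```python
# Hypothetical sketch contrasting attribute provision and attribute
# verification. Names and data are illustrative only.

# A mock attribute authority's records, keyed by subject identifier.
RECORDS = {
    "user-123": {"name": "Pat Doe", "birth_date": "1980-04-02", "zip": "20899"},
}

def provide_attributes(subject_id, requested):
    """Attribute provision: return the requested attribute values.

    The requesting party receives the raw data and applies its own
    logic, so it learns every value it asks for (a data-minimization
    concern when the supported bundle exceeds what is needed).
    """
    record = RECORDS[subject_id]
    return {name: record[name] for name in requested if name in record}

def verify_attributes(subject_id, claimed):
    """Attribute verification: a yes/no check of user-asserted values.

    The requesting party learns only whether the claimed set is valid,
    not the authoritative values themselves.
    """
    record = RECORDS[subject_id]
    return all(record.get(name) == value for name, value in claimed.items())

provided = provide_attributes("user-123", ["name", "zip"])  # raw values returned
verified = verify_attributes("user-123", {"zip": "20899"})  # boolean only
```

Note that a "yes" from `verify_attributes` confirms the set is valid but, as stated above, does not by itself bind the set to the individual presenting it.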
As stated above, attribute management has traditionally been encapsulated within the functions of identity proofing or authentication and was not exposed at an “API level”. As the operations of attribute provider and verifier move to the level of API exposure, we believe it is critical that guidance be given regarding the incorporation of attribute services in identity architectures, to ensure that appropriate security and privacy safeguards are in place. To date, existing Trust Framework Providers, such as Kantara and OIX, have not yet formally incorporated attribute management in their architectures – although both have active working groups in this domain, such as Kantara’s Attributes in Motion Working Group
and OIX’s Online Attribute Exchange Trust Framework Working Group
. Some of the questions that these groups are asking include:
- How can various attributes be combined in a quantitative way?
- What pre-requisites are required for an attribute source to be authoritative? How can this be quantified?
- How much more confidence should there be in receiving information from an organization that included an in-person step in its credential issuance versus a completely online process?
- If there are three independent sources for an attribute, does that necessarily make it more authoritative? How much more?
- How should the “freshness” of an attribute be characterized?
We believe that the IDESG should also be contemplating such questions, to ensure that the NSTIC Guiding Principles are upheld as the use of attributes within identity ecosystems continues to develop.
As a couple of indicators of interest in the IDESG on these topics, we note that the Security Committee has a work item on Attribute Assurance and Confidence Levels, which would “provide guidance on the types and quality of attributes used for authentication/re-authentication, and to securely and efficiently access and share information.” In addition, there has recently been significant discussion at the joint Security and Standards Committee meetings on the definition of attribute and related terms such as identity and authentication.
Another fundamental consideration regarding the use of attributes in identity systems is that there is sometimes a disparity between a relying party’s attribute requirements and the sets of attributes that a service provides. A blog at IDManagement.gov discusses the concept of determining the “minimal set of attributes (attribute bundles) needed to uniquely identify a person” in order to map them to a user account in a particular context. Of course, different contexts may require different attribute bundles, and this may lead to a relying party (or intermediary) receiving more personal information than it requested or needs, if there is a disparity between the requested attribute bundle and the attribute sets supported by an attribute provider. In this case, the challenge becomes how to operationally handle the varying bundles of personal data without losing alignment with the Fair Information Practice Principles
(FIPPs). In particular, the NSTIC
calls for organizations to “only collect PII that is directly relevant and necessary to accomplish the specified purpose(s) and only retain PII for as long as is necessary to fulfill the specified purpose(s),” as well as to use claims for authorization that avoid the collection of specific attributes when possible (e.g., that an individual is over 18, rather than the actual birth date). Attention to this principle will likely require iterative changes in the commercial practices of attribute providers and other identity ecosystem participants.
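The over-18 example above can be sketched as a derived claim: the attribute provider evaluates a predicate internally and releases only the answer, never the underlying birth date. This is a minimal illustration of the idea, not any particular provider's interface.

```python
# Illustrative sketch of a derived claim supporting data minimization:
# the relying party asks "is this person over 18?" and receives a
# boolean, never the birth date itself. Function name is hypothetical.
from datetime import date

def is_over_18(birth_date, today=None):
    """Return True if the subject is at least 18 years old.

    Evaluated on the attribute provider's side; only the yes/no
    answer crosses the wire, in line with the FIPPs' minimization call.
    """
    today = today or date.today()
    # Subtract a year if this year's birthday has not yet occurred.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= 18

is_over_18(date(1990, 6, 1), today=date(2014, 1, 15))  # True
```

A relying party that only needs an authorization decision (may this person buy an age-restricted product?) gets exactly one bit of personal information instead of a full date of birth.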
Along with the flexibility of finer-grained control of attributes, there is an attendant requirement to understand the implications of continued alignment with the FIPPs, particularly with regard to consent and redress. One of the challenges of attribute management is establishing a flow-down of consent throughout the attribute handling process, as well as maintaining traceability to facilitate redress, in the event that a user needs to challenge the attribute provider/verifier’s response.
One idea for establishing traceability of user consent and redress for attributes was discussed at the FTC roundtable privacy events in 2009/2010
: the potential “lifecycle tagging” of personally identifiable information or attributes so that their history can be traced. This would allow characteristics such as “strength”, source, and freshness to be identified, as well as offer the user the ability for redress with the attribute source.
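One way to picture such lifecycle tagging is as metadata carried alongside each attribute value – attributes of attributes, as question 4 below puts it. The structure and field names here are purely our own illustration of the concept, not a proposed standard.

```python
# Hypothetical sketch of "lifecycle tagging": each attribute value
# carries metadata describing its provenance, strength, and freshness,
# plus a history trail to support user redress. Field names are
# illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class TaggedAttribute:
    name: str
    value: str
    source: str                  # where the value originated
    verified_in_person: bool     # a simple "strength" indicator
    issued_at: datetime          # supports "freshness" checks
    history: list = field(default_factory=list)  # traceability for redress

    def is_fresh(self, max_age):
        """True if the value was issued within max_age of now."""
        return datetime.utcnow() - self.issued_at <= max_age

# An attribute issued 30 days ago by an (invented) in-person source:
addr = TaggedAttribute(
    name="postal_code",
    value="20899",
    source="state-dmv",          # illustrative source name
    verified_in_person=True,
    issued_at=datetime.utcnow() - timedelta(days=30),
)
addr.history.append("released to relying-party-A")  # audit trail entry

addr.is_fresh(timedelta(days=90))  # True
addr.is_fresh(timedelta(days=7))   # False
```

With tags like these travelling with the data, a user disputing a value could trace it back through `history` to the original `source` – the redress path the roundtable discussion envisioned.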
We hope that this blog post will catalyze further discussion around these topics within the IDESG, and that the IDESG will look to define recommendations to attempt to overcome these challenges. Initial questions that we’d offer to stakeholders include:
1) Should the certification of attribute providers/verifiers be included in any future IDESG accreditation program?
2) Can confidence levels be assigned to individual attributes? How are authoritative sources determined?
3) How and where is individual consent obtained and maintained in a system that relies upon a series of attribute steps? How is it ensured that an individual is notified if an error occurs? Is there a security risk (i.e. possible gaming) to the individual being notified that a claim was rejected? How should an individual seek redress if he/she suspects that data are corrupted?
4) Are there methods of “tagging” attributes (a.k.a. attributes of attributes) that can be considered?
5) Should the IDESG Privacy Committee develop best practices for attribute management, to encourage continuous alignment with FIPPs – and reduce “data promiscuity”?
The IDESG has established a forum for further discussion of these and other related topics here: https://www.idecosystem.org/content/attributes