FreeIPA Identity Management planet - technical blogs

August 14, 2017

Fraser Tweedale

Installing FreeIPA with an Active Directory subordinate CA

FreeIPA is often installed in enterprise environments for managing Unix and Linux hosts and services. Most commonly, enterprises use Microsoft Active Directory for managing users, Windows workstations and Windows servers. Often, Active Directory is deployed with Active Directory Certificate Services (AD CS) which provides a CA and certificate management capabilities. Likewise, FreeIPA includes the Dogtag CA, and when deploying FreeIPA in an enterprise using AD CS, it is often desired to make the FreeIPA CA a subordinate CA of the AD CS CA.

In this blog post I’ll explain what is required to issue an AD sub-CA, and how to do it with FreeIPA, including a step-by-step guide to configuring AD CS.

AD CS certificate template overview

AD CS has a concept of certificate templates, which define the characteristics an issued certificate shall have. The same concept exists in Dogtag and FreeIPA except that in those projects we call them certificate profiles, and the mechanism to select which template/profile to use when issuing a certificate is different.

In AD CS, the template to use is indicated by an X.509 extension in the certificate signing request (CSR). The template specifier can be one of two extensions. The first, older extension has OID 1.3.6.1.4.1.311.20.2 and allows you to specify a template by name:

CertificateTemplateName ::= SEQUENCE {
   Name            BMPString
}

(Note that some documents specify UTF8String instead of BMPString. BMPString works and is used in practice. I am not actually sure if UTF8String even works.)

The second, Version 2 template specifier extension has OID 1.3.6.1.4.1.311.21.7 and allows you to specify a template by OID and version:

CertificateTemplate ::= SEQUENCE {
    templateID              EncodedObjectID,
    templateMajorVersion    TemplateVersion,
    templateMinorVersion    TemplateVersion OPTIONAL
}

TemplateVersion ::= INTEGER (0..4294967295)

Note that some documents also show templateMajorVersion as optional, but it is actually required.

When submitting a CSR for signing, AD CS looks for these extensions in the request, and uses the extension data to select the template to use.
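For illustration, here is one way to produce a CSR bearing the name-based template specifier, using OpenSSL's arbitrary-extension syntax. This is only a sketch: the file names are made up, and the ASN.1 layout follows the CertificateTemplateName definition shown above (real-world encodings of this extension vary between documents, as noted).

```shell
cat > ms-template.cnf <<'EOF'
[ req ]
prompt = no
distinguished_name = dn
req_extensions = exts

[ dn ]
O = IPA.EXAMPLE
CN = Certificate Authority

[ exts ]
basicConstraints = critical,CA:TRUE
# 1.3.6.1.4.1.311.20.2 (certificate template name), encoded per the
# CertificateTemplateName definition above: a SEQUENCE of one BMPString
1.3.6.1.4.1.311.20.2 = ASN1:SEQUENCE:template_name

# The V2 specifier (1.3.6.1.4.1.311.21.7) could be built the same way,
# with an OID and INTEGER versions (values here are placeholders):
#   1.3.6.1.4.1.311.21.7 = ASN1:SEQUENCE:template_v2
#   [ template_v2 ]
#   templateID           = OID:1.3.6.1.4.1.311.21.8.1234567
#   templateMajorVersion = INTEGER:100
#   templateMinorVersion = INTEGER:5

[ template_name ]
name = BMP:SubCA
EOF

openssl req -new -newkey rsa:2048 -nodes \
    -keyout ipa.key -out ipa.csr -config ms-template.cnf

# The extension appears under its OID in the CSR dump
openssl req -in ipa.csr -noout -text | grep -A1 '1.3.6.1.4.1.311.20.2'
```

Dumping the request with `openssl req -text` is a quick way to confirm that the template specifier made it into the CSR before submitting it to AD CS.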

External CA installation in FreeIPA

FreeIPA supports installation with an externally signed CA certificate, via ipa-server-install --external-ca or (for existing CA-less installations) ipa-ca-install --external-ca. The installation takes several steps. First, a key is generated and a CSR produced:

$ ipa-ca-install --external-ca

Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-ca-install as:
/sbin/ipa-ca-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate

The installation program exits while the administrator submits the CSR to the external CA. After receiving the signed CA certificate, the administrator resumes the installation, giving the installation program the CA certificate and a chain of one or more certificates up to the root CA:

$ ipa-ca-install --external-cert-file ca.crt --external-cert-file ipa.crt
Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/29]: configuring certificate server instance
  [29/29]: configuring certmonger renewal for lightweight CAs
Done configuring certificate server (pki-tomcatd).

Recall, however, that if the external CA is AD CS, a CSR must bear one of the certificate template specifier extensions. There is an additional installation program option to add the template specifier:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs

This adds a name-based template specifier to the CSR, with the name SubCA (this is the name of the default sub-CA template in AD CS).

Specifying an alternative AD CS template

Everything discussed so far is already part of FreeIPA. But until now, there has been no way to specify a different template to use with AD CS.

I have been working on a feature that allows an alternative AD CS template to be specified. Both kinds of template specifier extension are supported, via the new --external-ca-profile installation program option:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs \
    --external-ca-profile=1.3.6.1.4.1.311.21.8.12345678.87654321.1234567.7654321.123456.12.1234567.1234567:1

(Note: huge OIDs like the above are commonly used by Active Directory for installation-specific objects.)

To specify a template by name, the --external-ca-profile value is simply the name of the template:

--external-ca-profile=TemplateName

To specify a template by OID, the OID and major version must be given, and optionally the minor version too:

--external-ca-profile=OID:MAJOR_VERSION[:MINOR_VERSION]

Like --external-ca and --external-ca-type, the new --external-ca-profile option is available with both ipa-server-install and ipa-ca-install.

With this feature, it is now possible to specify an alternative or custom certificate template when using AD CS to sign the FreeIPA CA certificate. The feature has not yet been merged but there is an open pull request. I have also made a COPR build for anyone interested in testing the feature.

The remainder of this post is a short guide to configuring Active Directory Certificate Services, defining a custom CA profile, and submitting a CSR to issue a certificate.

Appendix A: installing and configuring AD CS

Assuming an existing installation of Active Directory, AD CS installation and configuration will take 10 to 15 minutes. Open Server Manager, invoke the Add Roles and Features Wizard and select the AD CS Certification Authority role:


Proceed, and wait for the installation to complete…


After installation has finished, you will see AD CS in the Server Manager sidebar, and upon selecting it you will see a notification that Configuration required for Active Directory Certificate Services.


Click More…, and up will come the All Servers Task Details dialog showing that the Post-deployment Configuration action is pending. Click the action to continue:


Now comes the AD CS Configuration assistant, which contains several steps. Proceed past the Specify credentials to configure role services step.

In the Select Role Services to configure step, select Certification Authority then continue:


In the Specify the setup type of the CA step, choose Enterprise CA then continue:


The Specify the type of the CA step lets you choose whether the AD CS CA will be a root CA or chained to an external CA (just like FreeIPA lets you create a root or subordinate CA!). Installing AD CS as a subordinate CA is outside the scope of this guide. Choose Root CA and continue:


The next step lets you Specify the type of the private key. You can use an existing private key or Create a new private key, then continue.

The Specify the cryptographic options step lets you specify the Key length and hash algorithm for the signature. Choose a key length of at least 2048 bits, and the SHA-256 digest:


Next, Specify the name of the CA. This sets the Subject Distinguished Name of the CA. Accept defaults and continue.

The next step is to Specify the validity period. CA certificates (especially root CAs) typically need a long validity period. Choose a value like 5 Years, then continue:


Accept defaults for the Specify the database locations step.

Finally, you will reach the Confirmation step, which summarises the chosen configuration options. Review the settings then Configure:


The configuration will take a few moments, then the Results will be displayed:


AD CS is now configured and you can begin issuing certificates.

Appendix B: creating a custom sub-CA certificate template

In this section we look at how to create a new certificate template for sub-CAs by duplicating an existing template, then modifying it.

To manage certificate templates, from Server Manager right-click the server and open the Certification Authority program:


In the sidebar tree view, right-click Certificate Templates then select Manage.


The Certificate Templates Console will open. The default template for sub-CAs has the Template Display Name Subordinate Certification Authority. Right-click this template and choose Duplicate Template.


The new template is created and the Properties of New Template dialog appears, allowing the administrator to customise the template. You can set a new Template display name, Template name and so on:


You can also change various aspects of certificate issuance including which extensions will appear on the issued certificate, and the values of those extensions. In the following screenshot, we see a new Certificate Policies OID being defined for addition to certificates issued via this template:


Also under Extensions, you can discover the OID for this template by looking at the Certificate Template Information extension description.

Finally, having defined the new certificate template, we have to activate it for use with the AD CA. Back in the Certification Authority management window, right-click Certificate Templates and select Certificate Template to Issue:


This will pop up the Enable Certificate Templates dialog, containing a list of templates available for use with the CA. Select the new template and click OK. The new certificate template is now ready for use.

Appendix C: issuing a certificate

In this section we look at how to use AD CS to issue a certificate. It is assumed that the CSR to be signed exists and Active Directory can access it.

In the Certification Authority window, in the sidebar right-click the CA and select All Tasks >> Submit new request…:


This will bring up a file chooser dialog. Find the CSR and Open it:


Assuming all went well (including the CSR indicating a known certificate template), the certificate is immediately issued and the Save Certificate dialog appears, asking where to save the issued certificate.

by ftweedal at August 14, 2017 06:04 AM

July 11, 2017

Fraser Tweedale

Implications of Common Name deprecation for Dogtag and FreeIPA

Or, ERR_CERT_COMMON_NAME_INVALID, and what we are doing about it.

Google Chrome version 58, released in April 2017, removed support for the X.509 certificate Subject Common Name (CN) as a source of naming information when validating certificates. As a result, certificates that do not carry all relevant domain names in the Subject Alternative Name (SAN) extension result in validation failures.

At the time of writing this post Chrome is just the first mover, but Mozilla Firefox and other programs and libraries will follow suit. The public PKI used to secure the web and other internet communications is largely unaffected (browsers and CAs moved a long time ago to ensure that certificates issued by publicly trusted CAs carried all DNS naming information in the SAN extension), but some enterprises running internal PKIs are feeling the pain.

In this post I will provide some historical and technical context to the situation, and explain what we are doing in Dogtag and FreeIPA to ensure that we issue valid certificates.


X.509 certificates carry subject naming information in two places: the Subject Distinguished Name (DN) field, and the Subject Alternative Name extension. There are many types of attributes available in the DN, including organisation, country, and common name. The definitions of these attribute types came from X.500 (the precursor to LDAP) and all have an ASN.1 representation.

Within the X.509 standard, the CN has no special interpretation, but when certificates first entered widespread use in the SSL protocol, it was used to carry the domain name of the subject site or service. When connecting to a web server using TLS/SSL, the client would check that the CN matches the domain name they used to reach the server. If the certificate is chained to a trusted CA, the signature checks out, and the domain name matches, then the client has confidence that all is well and continues the handshake.

But there were a few problems with using the Common Name. First, what if you want a certificate to support multiple domain names? This was especially a problem for virtual hosts in the pre-SNI days, when one IP address could only have one certificate associated with it. You can have multiple CNs in a Distinguished Name, but the semantics of X.500 DNs are strictly hierarchical; it is not an appropriate use of the DN to cram multiple, possibly non-hierarchical, domain names into it.

Second, the CN in X.509 has a length limit of 64 characters. DNS names can be longer. The length limit is too restrictive, especially in the world of IaaS and PaaS where hosts and services are spawned and destroyed en masse by orchestration frameworks.

Third, some types of subject names do not have a corresponding X.500 attribute, including domain names. The solution to all three of these problems was the introduction of the Subject Alternative Name X.509 extension, which allows more types of names to be used in a certificate. (The SAN extension is itself extensible; apart from DNS names, other important name types include IP addresses, email addresses, URIs and Kerberos principal names.) TLS clients added support for validating SAN DNSName values in addition to the CN.
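To make this concrete, with OpenSSL 1.1.1 or later a CSR carrying several SAN name types at once can be generated in a single command (all names and file paths here are placeholders):

```shell
# One CSR, four SAN values of three different name types
openssl req -new -newkey rsa:2048 -nodes -keyout san.key -out san.csr \
    -subj "/O=EXAMPLE/CN=www.example.com" \
    -addext "subjectAltName=DNS:www.example.com,DNS:example.com,IP:192.0.2.1,email:hostmaster@example.com"

# Show the resulting extension in the request
openssl req -in san.csr -noout -text | grep -A1 'Subject Alternative Name'
```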

The use of the CN field to carry DNS names was never a standard. The Common Name field does not have these semantics; but using the CN in this way was an approach that worked. This interpretation was later formalised by the CA/B Forum in their Baseline Requirements for CAs, but only as a reflection of a current practice in SSL/TLS server and client implementations. Even in the Baseline Requirements the CN was a second-class citizen; they mandated that if the CN was present at all, it must reflect one of the DNSName or IP address values from the SAN extension. All public CAs had to comply with this requirement, which is why Chrome’s removal of CN support is only affecting private PKIs, not public web sites.

Why remove CN validation?

So, Common Name was not ideal for carrying DNS naming information, but given that we now have SAN, was it really necessary to deprecate it, and is it really necessary to follow through and actually stop using it, causing non-compliant certificates that were previously accepted to now be rejected?

The most important reason for deprecating CN validation is the X.509 Name Constraints extension. Name Constraints, if they appear in a CA certificate or intermediate CA certificate, constrain the valid subject names on leaf certificates. Various name types are supported, including DNS names; a DNS name constraint restricts the domain of validity to the domain(s) listed and subdomains thereof. For example, if the DNS name example.com appears in a CA certificate’s Name Constraints extension, leaf certificates with a DNS name of example.com or www.example.com could be valid, but a DNS name of example.net could not be valid. Conforming X.509 implementations must enforce these constraints.

But these constraints only apply to SAN DNSName values, not to the CN. This is why accepting DNS naming information in the CN had to be deprecated – the name constraints cannot be properly enforced!
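The SAN side of this enforcement is easy to demonstrate with OpenSSL (a sketch only; every name and file path below is made up for illustration). We build a CA whose Name Constraints permit only example.com, then verify one leaf inside the permitted subtree and one outside it:

```shell
cat > democa.cnf <<'EOF'
[ req ]
prompt = no
distinguished_name = dn
x509_extensions = v3_ca
[ dn ]
CN = Constrained Demo CA
[ v3_ca ]
basicConstraints = critical,CA:TRUE
nameConstraints = critical,permitted;DNS:example.com
EOF

# Self-signed CA that permits only example.com and its subdomains
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ca.key -out ca.crt -config democa.cnf

# A leaf whose SAN DNSName is inside the permitted subtree verifies fine...
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=www.example.com" \
    -keyout ok.key -out ok.csr
echo "subjectAltName=DNS:www.example.com" > ok.ext
openssl x509 -req -in ok.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 1 -extfile ok.ext -out ok.crt
openssl verify -CAfile ca.crt ok.crt

# ...but a SAN DNSName outside the subtree is rejected
# (a "permitted subtree violation")
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=www.example.net" \
    -keyout bad.key -out bad.csr
echo "subjectAltName=DNS:www.example.net" > bad.ext
openssl x509 -req -in bad.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 1 -extfile bad.ext -out bad.crt
! openssl verify -CAfile ca.crt bad.crt
```

Note that the rejection is triggered by the SAN value; a DNS name hidden in the CN alone is exactly the case the constraint machinery was historically unable to police.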

So back in May 2000 the use of Common Name for carrying a DNS name was deprecated by RFC 2818. Although it deprecated the practice this RFC required clients to fall back to the Common Name if there were no SAN DNSName values on the certificate. Then in 2011 RFC 6125 removed the requirement for clients to fall back to the common name, making this optional behaviour. Over recent years, some TLS clients began emitting warnings when they encountered certificates without SAN DNSNames, or where a DNS name in the CN did not also appear in the SAN extension. Finally, Chrome has become the first widely used client to remove support.

Despite more than 15 years’ notice of the deprecation of this use of Common Name, a lot of CA software and client tooling still does not have first-class support for the SAN extension. Most tools used to generate CSRs do not even ask about SAN, and require complex configuration to generate a request bearing the SAN extension. Similarly, some CA programs do not do a good job of issuing RFC-compliant certificates. Right now, this includes Dogtag and FreeIPA.

Subject Alternative Name and FreeIPA

For some years, FreeIPA (in particular the default profile for host and service certificates, called caIPAserviceCert) has supported the SAN extension, but the client is required to submit a CSR containing the desired SAN extension data. The names in the CSR (the CN and all alternative names) get validated against the subject principal, and the CA issues the certificate with exactly those names. There has been no way to ensure that the domain name in the CN is also present in the SAN extension.

We could add this requirement to FreeIPA’s CSR validation routine, but this imposes an unreasonable burden on the user to "get it right". Tools like OpenSSL have poor usability and complex configuration. Certmonger supports generating a CSR with the SAN extension but it must be explicitly requested. For FreeIPA’s own certificates, we have (in recent major releases) ensured that they have contained the SAN extension, but this is not the default behaviour and that is a problem.

FreeIPA 4.5 brought with it a CSR autogeneration feature that, for a given certificate profile, lets the administrator specify how to construct a CSR appropriate for that profile. This reduces the burden on the end user, but they must still opt in to this process.

Subject Alternative Name and Dogtag

Until Dogtag 10.4, there were two ways to produce a certificate with the SAN extension. One was the SubjectAltNameExtDefault profile component, which, for a given profile, supports a fixed number of names, either hard coded or based on particular request attributes (e.g. the CN, the email address of the authenticated user, etc). The other was the UserExtensionDefault which copies a given extension from the CSR to the final certificate verbatim (no validation of the data occurs). We use UserExtensionDefault in FreeIPA’s certificate profile (all names are validated by the FreeIPA framework before the request is submitted to Dogtag).

Unfortunately, SubjectAltNameExtDefault and UserExtensionDefault are not compatible with each other. If a profile uses both and the CSR contains the SAN extension, issuance will fail with an error because Dogtag tried to add two SAN extensions to the certificate.

In Dogtag 10.4 we introduced a new profile component that improves the situation, especially for dealing with the removal of client CN validation. The CommonNameToSANDefault will cause any profile that uses it to examine the Common Name, and if it looks like a DNS name, it will add it to the SAN extension (creating the extension if necessary).

Ultimately, what is needed is a way to define a certificate profile that just makes the right certificate, without placing an undue burden on the client (be it a human user or a software agent). The complexity and burden should rest with Dogtag, for the sake of all users. We are gradually making steps toward this, but it is still a long way off. I have discussed this utopian vision in a previous post.

Configuring CommonNameToSANDefault

If you have Dogtag 10.4, here is how to configure a profile to use the CommonNameToSANDefault. Add the following policy directives (the policy set name serverCertSet and the index 12 are indicative only, but the index must not collide with other profile components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=commonNameToSANDefaultImpl
policyset.serverCertSet.12.default.name=Common Name to Subject Alternative Name Default

Add the index to the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12


Then import the modified profile configuration, and you are good to go. There are a few minor caveats to be aware of:

  • Names containing wildcards are not recognised as DNS names. The rationale is twofold: wildcard DNS names, although currently recognised by most programs, are technically a violation of the X.509 specification (RFC 5280), and they are discouraged by RFC 6125. Therefore if the CN contains a wildcard DNS name, CommonNameToSANDefault will not copy it to the SAN extension.
  • Single-label DNS names are not copied. It is unlikely that people will use Dogtag to issue certificates for top-level domains. If CommonNameToSANDefault encounters a single-label DNS name, it will assume it is actually not a DNS name at all, and will not copy it to the SAN extension.
  • The CommonNameToSANDefault policy index must come after UserExtensionDefault, SubjectAltNameExtDefault, or any other component that adds the SAN extension, otherwise an error may occur because the older components do not gracefully handle the situation where the SAN extension is already present.

What we are doing in FreeIPA

Updating FreeIPA profiles to use CommonNameToSANDefault is trickier – FreeIPA configures Dogtag to use LDAP-based profile storage, and mixed-version topologies are possible, so updating a profile to use the new component could break certificate requests on other CA replicas if they are not all at the new versions. We do not want this situation to occur.

The long-term fix is to develop a general, version-aware profile update mechanism that will import the best version of a profile supported by all CA replicas in the topology. I will be starting this effort soon. When it is in place we will be able to safely update the FreeIPA-defined profiles in existing deployments.

In the meantime, we will bump the Dogtag dependency and update the default profile for new installations only in the 4.5.3 point release. This will be safe to do because you can only install replicas at the same or newer versions of FreeIPA, and it will avoid the CN validation problems for all new installations.


In this post we looked at the technical reasons for deprecating and removing support for CN domain validation in X.509 certificates, and discussed the implications of this finally happening, namely: none for the public CA world, but big problems for some private PKIs and programs including FreeIPA and Dogtag. We looked at the new CommonNameToSANDefault component in Dogtag that makes it easier to produce compliant certs even when the tools to generate the CSR don’t help you much, and discussed upcoming and proposed changes in FreeIPA to improve the situation there.

One big takeaway from this is to be more proactive in dealing with deprecated features in standards, APIs or programs. It is easy to punt on the work, saying "well yes it is deprecated but all the programs still support it…" The thing is, tomorrow they may not support it anymore, and when it was deprecated for good reasons you really cannot lay the blame at Google (or whoever). On the FreeIPA team we (and especially me as PKI wonk in residence) were aware of these issues but kept putting off the work. Then one day users and customers start having problems accessing their internal services in Chrome! 15 years should have been enough time to deal with it… but we (I) did not.

Lesson learned.

by ftweedal at July 11, 2017 03:25 AM

June 26, 2017

Fraser Tweedale

Wildcard SAN certificates in FreeIPA

In an earlier post I discussed how to make a certificate profile for wildcard certificates in FreeIPA, where the wildcard name appeared in the Subject Common Name (CN) (but not the Subject Alternative Name (SAN) extension). Apart from the technical details that post also explained that wildcard certificates are deprecated, why they are deprecated, and therefore why I was not particularly interested in pursuing a way to get wildcard DNS names into the SAN extension.

But, as was portended long ago (more than 15 years, when RFC 2818 was published) DNS name assertions via the CN field are deprecated, and finally some client software removed CN name processing support. The Chrome browser is first off the rank, but it won’t be the last!

Unfortunately, programs that have typically used wildcard certificates (hosting services/platforms, PaaS, and sites with many subdomains) are mostly still using wildcard certificates, and FreeIPA still needs to support these programs. As much as I would like to say "just use Let’s Encrypt / ACME!", it is not realistic for all of these programs to update in so short a time. Some may never be updated. So for now, wildcard DNS names in SAN is more than a "nice to have" – it is a requirement for a handful of valid use cases.


Here is how to do it in FreeIPA. Most of the steps are the same as in the earlier post so I will not repeat them here. The only substantive difference is in the Dogtag profile configuration.

In the profile configuration, set the following directives (note that the key serverCertSet and the index 12 are indicative only; the index does not matter as long as it is different from the other profile policy components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=subjectAltNameExtDefaultImpl
policyset.serverCertSet.12.default.name=Subject Alternative Name Extension Default
policyset.serverCertSet.12.default.params.subjAltNameNumGNs=2
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_0=true
policyset.serverCertSet.12.default.params.subjAltExtType_0=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_0=$request.req_subject_name.cn$
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_1=true
policyset.serverCertSet.12.default.params.subjAltExtType_1=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_1=*.$request.req_subject_name.cn$

Also be sure to add the index to the directive containing the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12


This configuration will cause two SAN DNSName values to be added to the certificate – one using the CN from the CSR, and the other using the CN from the CSR preceded by a wildcard label.

Finally, be aware that because the subjectAltNameExtDefaultImpl component adds the SAN extension to a certificate, it conflicts with the userExtensionDefault component when configured to copy the SAN extension from a CSR to the new certificate. This profile component will have a configuration like the following:

policyset.serverCertSet.11.constraint.class_id=noConstraintImpl
policyset.serverCertSet.11.constraint.name=No Constraint
policyset.serverCertSet.11.default.class_id=userExtensionDefaultImpl
policyset.serverCertSet.11.default.name=User Supplied Extension Default
policyset.serverCertSet.11.default.params.userExtOID=2.5.29.17

Again the numerical index is indicative only, but the OID is not: 2.5.29.17 is the OID for the SAN extension. If your starting profile configuration contains the same directives, remove them from the configuration, and remove the index from the policy list too:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,12



The profile containing the configuration outlined above will issue certificates with a wildcard DNS name in the SAN extension, alongside the DNS name from the CN. Mission accomplished; but note the following caveats.
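As a quick sanity check of the intended end state, a throwaway self-signed certificate with the same SAN shape can be produced with OpenSSL 1.1.1+ (names are placeholders; a real deployment would of course use the Dogtag profile above, not a self-signed cert):

```shell
# Certificate whose SAN carries the CN's DNS name plus its wildcard sibling
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout wc.key -out wc.crt \
    -subj "/O=EXAMPLE/CN=app.example.com" \
    -addext "subjectAltName=DNS:app.example.com,DNS:*.app.example.com"

# Both DNS names appear in the SAN extension
openssl x509 -in wc.crt -noout -ext subjectAltName
```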

This configuration cannot contain the userExtensionDefaultImpl component, which copies the SAN extension from the CSR to the final certificate if present in the CSR, because any CSR that contains a SAN extension would cause Dogtag to attempt to add a second SAN extension to the certificate (this is an error). It would be better if the conflicting profile components somehow "merged" the SAN values, but this is not their current behaviour.

Because we are not copying the SAN extension from the CSR, any SAN extension in the CSR gets ignored by Dogtag – but not by FreeIPA; the FreeIPA CSR validation machinery always fully validates the subject alternative names it sees in a CSR, regardless of the Dogtag profile configuration.

If you work on software or services that currently use wildcard certificates please start planning to move away from this. CN validation was deprecated for a long time and is finally being phased out; wildcard certificates are also deprecated (RFC 6125) and they too may eventually be phased out. Look at services and technologies like Let’s Encrypt (a free, automated, publicly trusted CA) and ACME (the protocol that powers it) for acquiring all the certificates you need without administrator or operator intervention.

by ftweedal at June 26, 2017 12:48 PM

June 21, 2017

Rob Crittenden

IPA configuration files and context

There are times when you may want more information out of the IPA server logs. I’ve seen people suggest adding debug = True to /etc/ipa/default.conf. This is fine (and it works) but it enables debugging in both the client and the server which can be annoying for command-line users.

What I do instead is create /etc/ipa/server.conf containing:

debug = True

The context that is set during initialization determines which configuration files are loaded. Only the server loads this file, so the client remains quiet by default.

When the context is set during api.initialize it is stored in api.env.context. The original idea was that this could drive different code paths depending on the context, but in reality it hasn’t been used all that often. Being able to load context-specific configuration files is pretty neat though.

by rcritten at June 21, 2017 07:06 PM

June 12, 2017

Red Hat Blog

Migrating from third party Active Directory integration solutions

As predicted in one of my earlier posts, more and more customers are starting to seriously evaluate and move off of third party Active Directory integration solutions. They want to use or at least consider leveraging identity management technologies available in Red Hat Enterprise Linux.

In calls and face-to-face meetings, as well as during customer presentations at Red Hat Customer Convergence events, Red Hat Summit, Defence in Depth and other conferences, I get a lot of questions about such migrations. As it is becoming a common theme, I decided to consolidate some thoughts, ideas, and best practices on the matter in a single blog post.

Why do organizations consider migration?

There are several crucial factors that lead people to the path of migration. They fall into the following categories:


Technical

One of the main technical reasons customers start to consider a different solution is that the solution they use stops meeting their technical needs. This usually happens when customers outgrow the solution; inadvertently, they start to push the technical limits of their deployment. The stress reveals architectural limitations and puts extra pressure on other aspects of the deployment like performance, scalability and manageability.

In general this is a natural progression: more modern solutions take into account the limitations and pitfalls of earlier attempts and make things temporarily better, until they are in turn replaced by the next generation of technologies.


Organizational

Every customer’s identity management landscape is unique. However, we see some patterns again and again.

In some cases the whole identity management space is significantly controlled and influenced by the Active Directory side of the house. In other cases there is a clear organizational divide between Active Directory folks and the Linux team. One of the main attractions of the identity management technologies in Red Hat Enterprise Linux is the ability to provide a clear separation of responsibilities between the two parts of the organization.

This works really well in the cases where there are two teams, but it does not 100% resonate with Active Directory centric deployments where one team mostly from Windows side drives the whole deployment. I talked about this in some of my earlier blogs. Here I would just mention that migration from a third party solution is a good time to consider how you structure the organization to enable growth of your business for the next decade.


Cost

Identity Management technologies are included with Red Hat Enterprise Linux and are covered by the platform subscription. There is no extra cost.

Third party solutions are priced per client. If you have ten thousand clients and each costs at least $50 per year, you have a half-million-dollar annual cost right there. This money can be better spent elsewhere, for example on better supporting line-of-business requirements and moving to a more agile (develop-deploy) IT model.


Business

In many cases, business drives the need for a change. The current infrastructure just does not scale to meet the needs of the modern enterprise. If every payload you create in the cloud requires a separate entitlement from a third party, it is quite cumbersome.

How long would you be able to deal with such an arrangement before your IT processes reach their limits and start to fall behind the needs of ever-evolving business models? Keeping your environment lean and removing obstacles allows it to keep up with the challenging requirements of today and paves the way into tomorrow.

So the move is imminent but slow. Identity management is part of the core fabric that can't be disrupted without a major impact on the whole organization. The move must be made with a lot of precautions, factoring in all sorts of considerations. It needs to be well planned. It might be that you are not yet ready, but it might very well be that we are not yet ready for you with the solutions we offer. Well, let us work together to make sure what we offer better suits your needs. Comments and suggestions are always welcome!

What options do I have?

Red Hat Enterprise Linux offers two main types of Active Directory integration: direct, where Linux systems are connected straight to Active Directory, and indirect, where we recommend deploying an Identity Management server as a gateway between your Active Directory and Linux/UNIX environments. These solutions are well covered in other blogs in this series. Here it is important to mention that the indirect solution is the one we recommend for migration from third party solutions, because it much better addresses the technical, organizational, economic, and business drivers described above. But as I mentioned, it does not fit all cases, and direct integration is still a good option in some situations.

How would I get there?

Let’s take a look at strategies to migrate from third party to Red Hat Enterprise Linux’s Identity Management (IdM).

Getting from a third party solution to a direct integration solution

The main component of Linux that provides direct integration is the System Security Services Daemon (SSSD). The strategy would be to start deploying SSSD for your new payloads and cycle away the old payloads that leverage the third party solution over time.

SSSD has a lot of capabilities but there are several challenges that you might face in this migration. If you only manage authentication, access control, and identity information — and your POSIX data is in Active Directory — SSSD will be a sufficient option.

However, if you also manage sudo or SSH, or need sophisticated mapping of POSIX identities for different subsets of clients, the direct integration solution is not sufficient. Note also that neither Red Hat nor Microsoft provides a means to manage POSIX data in Active Directory using the Microsoft Management Console (MMC).

There is a web interface in Active Directory where, after manual configuration, POSIX attributes can be exposed, but it is unclear whether this would meet your management workflows and requirements. The bottom line is that direct integration is not a one-to-one replacement of what you have with the third party solution. This is why indirect integration using the Identity Management server in Red Hat Enterprise Linux is better and is the recommended option.
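For reference, here is a minimal sketch of what the direct case can look like in /etc/sssd/sssd.conf. The domain name ad.example.com is a hypothetical placeholder, and the assumption is that POSIX attributes are maintained in Active Directory:

```ini
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
# Read uidNumber/gidNumber from Active Directory instead of generating
# IDs algorithmically; requires POSIX attributes populated in AD.
ldap_id_mapping = False
```

With id_provider = ad, SSSD handles identity, authentication, and access control directly against Active Directory, which matches the scope described above.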

Getting from a third party solution to an indirect integration solution

In this case we are talking about deploying IdM in Red Hat Enterprise Linux as a replacement for your third party solution. It is an architectural change, and several important differences come with it.

  • The Identity Management server is a domain controller for Linux/UNIX environments. It is a combination of Kerberos, LDAP, PKI and DNS components, some of which are optional (PKI, DNS). The solution assumes that you transition your Linux infrastructure into this domain and connect this Linux domain to Active Directory via a cross-forest trust. With the trust in place, users from Active Directory are able to access resources and systems managed in the Linux domain, but all the policies for such users are managed in IdM. POSIX data can come from Active Directory, but it can also be overridden in IdM via a feature called ID Views.
  • With the introduction of the domains, challenges related to DNS names come into play. This has been discussed in detail in one of my earlier blogs.
  • Another important difference is that with the introduction of domains and trusts you would need to start using fully qualified names for users, meaning that instead of user “foo” you will have to type the fully qualified name. In some cases this is quite a challenge. It is a known hassle and there are some ideas about how this can be addressed, so hopefully you will see some improvements in this area down the road.

With the variety of vendors you might migrate from, the variety of features you might want to implement, and the different constraints of your current environment, it has so far been very hard to determine specific patterns and provide effective tools to help with migration.

If you want to perform the migration yourself, the following outline should help with your effort:

  • Understand your current environment, including its architecture, paying attention to the number of users and systems. Note operating systems and versions. See how they are distributed across different datacenters. Record the workflows your users follow when they interact with your Linux systems, as well as the resources and applications running on them.
  • Formulate your goals, and make sure you clearly understand the reasons why you want to migrate. I hope that some information in this article will be helpful. Spell out the criteria that the final solution must satisfy. Make sure to factor in your growth. Projects take time and requirements change. You want to avoid the situation where, by the time you finish the migration, your architecture does not scale to the needs of the business.
  • Become familiar with the solution you are considering migrating to. If you are interested in a Red Hat solution, please contact your Technical Account Manager or sales representative. If you do not have either, open a support case with Red Hat support requesting a consultation about the Red Hat IdM solution. If you are not a Red Hat customer yet, you might start by dropping a comment here or sending an email to a community mailing list. IdM is based on the FreeIPA community project, so resources available on the FreeIPA web site might be of value to you. Finally, there are a lot of shows and conferences where this technology is presented and promoted. You might want to consider attending one of those and getting your questions answered first hand.
  • Once you know where you are and where you want to be, and you are familiar with the offering, you can decide whether the solution Red Hat offers meets your short and long term needs. If it does, the next step is to build an architecture using the solution.
  • Defining the architecture includes, but is not limited to, understanding your DNS setup, firewalls, the layout of datacenters and the mapping of servers to those datacenters, as well as defining the replication topology and which components should run on which servers. This requires a lot of knowledge and understanding of the details of the solution. The best way to gain this knowledge is to run a proof-of-concept deployment in your lab, mimicking the real world environment as much as possible. Once you are comfortable with it, you will be able to draft an implementation plan.
  • The implementation plan would be a set of step-by-step actions that move you from where you are to where you want to be. The important part of such a plan is that each action should be concise and atomic. That reduces the disruption to your current environment, which needs to continue functioning while you prepare for the switch.

As you can see, migration is not a walk in the park. It is complex and hard. It requires good coordination of internal teams and deep knowledge of the tools. In some cases it is hard to do on your own. In those situations Red Hat can help. Again, talk to your TAM or sales representative about how Red Hat can assist in your effort.

Such migration projects produce tools and scripts that could benefit others. If you are interested in sharing your work so that others can benefit from your tools and ideas, you can contribute them to the common repository.

The best way to start contributing is to get engaged on the mailing list, where people will help you with the next steps. By sharing your solutions you not only help others, but also help the FreeIPA project to better understand your challenges, and help Red Hat work with the FreeIPA developer community to provide a better set of tools that improve manageability and thus make your life easier.

Let us work together and make this migration path as smooth as possible.


by Dmitri Pal at June 12, 2017 03:49 PM

June 02, 2017

Florence Blanc-Renaud

Troubleshooting: mapping between a SmartCard certificate and an IdM user

Authentication with a SmartCard may fail when the SmartCard certificate is not linked to any IdM user, or is linked to a user different from the one specified on the console.

In order to find which user is associated to a given SmartCard certificate, you can run the following command:

ipaclient$ ipa certmap-match cert.pem
1 user matched
 User logins: demosc1
Number of entries returned 1

If the result is not what you were expecting, you first need to check how certificates are mapped to users.

By default, a certificate is associated with a user when the user entry contains the full certificate in its usercertificate attribute. This behavior can be modified by defining certificate mapping rules:

ipaclient$ ipa certmaprule-find
1 Certificate Identity Mapping Rule matched
 Rule name: rulesmartcard
 Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn}<S>{subject_dn})
 Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG
 Enabled: FALSE
Number of entries returned 1


Mapping with full certificate content

When the mapping is based on the full certificate content, you can check if the user entry contains the certificate:

root@ipaclient$ ipa user-show demosc1
 User login: demosc1
Certificate: MIIC...

If this is not the case, associate the certificate with the user entry using:

ipaclient$ CERT=`cat cert.pem | tail -n +2 | head -n -1 | tr -d '\r\n'`
ipaclient$ ipa user-add-cert demosc1 --certificate $CERT
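As an alternative, assuming cert.pem holds a single PEM-encoded certificate, the same single-line base64 value can be produced with openssl. The sketch below generates a throwaway self-signed certificate only to demonstrate that both pipelines agree; in practice you would use your real certificate file:

```shell
# For demonstration only: create a throwaway self-signed certificate.
# In practice /tmp/cert.pem would already hold the SmartCard certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/demo-key.pem \
  -out /tmp/cert.pem -subj "/O=EXAMPLE.ORG/CN=demosc1" 2>/dev/null

# Re-encode as DER and base64 it without line wrapping; this is
# equivalent to stripping the PEM header/footer and joining the lines.
CERT=$(openssl x509 -in /tmp/cert.pem -outform der | base64 -w 0)
CERT2=$(tail -n +2 /tmp/cert.pem | head -n -1 | tr -d '\r\n')
test "$CERT" = "$CERT2" && echo "values match"
```

The resulting $CERT value is what ipa user-add-cert expects in its --certificate option.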

Once this is done, you may need to clear the SSSD cache to force SSSD to reload the entries before retrying ipa certmap-match:

ipaclient$ sudo sss_cache -E


Flexible mapping with certificate identity mapping rule

When the mapping is based on certificate mapping rules, the same ipa certmap-match tool can be used to check which user entry is associated with a certificate. When the result is not what you expect, you can enable SSSD domain logs by adding the following to /etc/sssd/sssd.conf on the IdM master:

debug_level = 9

then restart sssd with

root@ipaserver$ systemctl restart sssd

The logs will be located in /var/log/sssd/sssd_ipadomain.log.


Check that the certificate identity mapping rules are properly loaded

When sssd is restarted, it reads the mapping rules and should print the following in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[]]] [sss_certmap_init] (0x0040): sss_certmap initialized.
[sssd[be[]]] [ipa_certmap_parse_results] (0x4000): Trying to add rule [rulesmartcard][-1][<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG][(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})].

If the rule has an invalid syntax, you will see instead:

[sssd[be[]]] [sss_certmap_init] (0x0040): sss_certmap initialized.
[sssd[be[]]] [ipa_certmap_parse_results] (0x4000): Trying to add rule [rulesmartcard][-1][<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG][(ipacertmapdata=X509:<I>{issuer_dn!x509}<S>{subject_dn})].
[sssd[be[]]] [parse_template] (0x0040): Parse template invalid.
[sssd[be[]]] [parse_ldap_mapping_rule] (0x0040): Failed to add template.
[sssd[be[]]] [parse_mapping_rule] (0x0040): Failed to parse LDAP mapping rule.
[sssd[be[]]] [ipa_certmap_parse_results] (0x0020): sss_certmap_add_rule failed for rule [rulesmartcard], skipping. Please check for typos and if rule syntax is supported.
[sssd[be[]]] [ipa_subdomains_certmap_done] (0x0040): Unable to parse certmap results [22]: Invalid argument
[sssd[be[]]] [ipa_subdomains_refresh_certmap_done] (0x0020): Failed to read certificate mapping rules [22]: Invalid argument

The log shows that the rule named rulesmartcard is invalid. Check the rule (see man page for sss-certmap for the supported syntax) and fix if needed:

ipaclient$ ipa certmaprule-show rulesmartcard
 Rule name: rulesmartcard
 Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!x509}<S>{subject_dn})
 Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG
 Enabled: TRUE
ipaclient$ ipa certmaprule-mod rulesmartcard --maprule '(ipacertmapdata=X509:<I>{issuer_dn}<S>{subject_dn})'


Check that the matching rule corresponds to the certificate

When SSSD tries to associate the certificate with a user, it starts by finding which rule applies based on the matching rule (for instance, rulesmartcard applies to all certificates issued by CN=Smart Card CA,O=EXAMPLE.ORG because its matching rule is <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG).

If no matching rule applies to the certificate, SSSD will not be able to associate the certificate with a user, and will display the following in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[]]] [sss_cert_derb64_to_ldap_filter] (0x0040): Certificate does not match matching-rules.

In this case, you need to create or modify an identity mapping rule so that the matching rule applies to your certificate. See the sss-certmap man page for the supported syntax of the --matchrule option of the ipa certmaprule-add command.
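To see which issuer DN your certificate actually carries, and therefore whether a given matching rule can apply to it, you can inspect the certificate with openssl. This is only a sketch: it generates a throwaway self-signed certificate as a stand-in; point the second command at your real SmartCard certificate instead:

```shell
# Throwaway self-signed certificate standing in for the SmartCard cert
# (the subject and organization are made-up examples).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/sc-key.pem \
  -out /tmp/sc-cert.pem -subj "/O=EXAMPLE.ORG/CN=test" 2>/dev/null

# Print the issuer DN; a matching rule such as
# <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG only applies when it matches this.
openssl x509 -in /tmp/sc-cert.pem -noout -issuer
```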

Check that the expected certificate identity mapping rule is used

When SSSD tries to find the user associated with the certificate, you will see the following logs in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[]]] [dp_get_account_info_handler] (0x0200): Got request for [0x14][BE_REQ_BY_CERT][cert=MII..]
[sssd[be[]]] [sdap_search_user_next_base] (0x0400): Searching for users with base [cn=accounts,dc=ipadomain,dc=com]
[sssd[be[]]] [sdap_print_server] (0x2000): Searching
[sssd[be[]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(ipacertmapdata=X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG)(objectclass=posixAccount)(uid=*)(&(uidNumber=*)(!(uidNumber=0))))][cn=accounts,dc=ipadomain,dc=com].
[sssd[be[]]] [sdap_search_user_process] (0x0400): Search for users, returned 0 results.

The logs show the LDAP search performed by SSSD: the IP address of the LDAP server, the search base, and the search filter. Carefully review this information and compare it with what you would expect.
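You can rebuild the expected ipacertmapdata value by hand from the certificate and compare it with the filter that appears in the log. The sketch below uses a throwaway certificate; substitute your real SmartCard certificate. Note that the RDN order openssl prints may differ from the order stored in your mapping data, which is itself a common cause of non-matching filters:

```shell
# Throwaway certificate standing in for the SmartCard certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/map-key.pem \
  -out /tmp/map-cert.pem -subj "/O=EXAMPLE.ORG/CN=test" 2>/dev/null

# Extract issuer and subject DNs in RFC 2253 form and assemble the
# X509:<I>issuer<S>subject string used in the ipacertmapdata attribute.
ISSUER=$(openssl x509 -in /tmp/map-cert.pem -noout -issuer -nameopt RFC2253 \
  | sed 's/^issuer=//')
SUBJECT=$(openssl x509 -in /tmp/map-cert.pem -noout -subject -nameopt RFC2253 \
  | sed 's/^subject=//')
echo "X509:<I>${ISSUER}<S>${SUBJECT}"
```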

Check that the mapping rule defines a valid search filter

If the rule cannot be transformed to a valid search filter, you will see in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(ipacertmapdata=X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG(objectclass=posixAccount)(uid=*)(&(uidNumber=*)(!(uidNumber=0))))][cn=accounts,dc=ipadomain,dc=com].
[sssd[be[]]] [sdap_get_generic_ext_step] (0x0080): ldap_search_ext failed: Bad search filter

If this is the case, you need to fix the certmap rule using

ipaclient$ ipa certmaprule-mod rulesmartcard --maprule …



by floblanc at June 02, 2017 02:59 PM

Troubleshooting: authentication to the system console or Gnome Desktop Manager of an IdM host with a SmartCard

IdM allows you to authenticate to an IdM-enrolled host by providing a SmartCard certificate instead of a username/password. The steps below are based on system console authentication, but the process is similar when using Gnome desktop login authentication.

When the authentication fails, the issue usually comes from a wrong configuration of the IdM system for SmartCard authentication, or of PKINIT.


Configuration of the IdM host for SmartCard authentication

If the console does not even prompt for the SmartCard PIN, chances are high that the system was not properly configured for SmartCard authentication.

SSSD configuration for smart card

Check that /etc/sssd/sssd.conf contains, in the [pam] section:

pam_cert_auth = True

If you need to update the file, do not forget to restart sssd with

root@ipaclient$ systemctl restart sssd


SmartCard CA must be trusted

Check that the SmartCard CA is trusted in the /etc/pki/nssdb database:

root@ipaclient$ certutil -L -d /etc/pki/nssdb/

Certificate Nickname Trust Attributes     SSL,S/MIME,JAR/XPI

SmartCardCA                               CT,C,C

If the CA is not present, add it using:

root@ipaclient$ certutil -A -d /etc/pki/nssdb -n SmartCardCA -t CT,C,C -i ca.pem


IdM host PKCS#11 module

Check that the IdM host is properly configured for SmartCards. The opensc package must be installed, the SmartCard daemon must be running, and the PKCS#11 module must be loaded:

root@ipaclient$ dnf install opensc
root@ipaclient$ systemctl start pcscd.service pcscd.socket
root@ipaclient$ modutil -dbdir /etc/pki/nssdb -add "OpenSC" -libfile /usr/lib64/


Configuration for PKINIT

If the console prompts for the SmartCard PIN but displays

ipaclient login: demosc1
Pin for PIV Card:
Login incorrect

it is possible that the authentication fails while trying to acquire a Kerberos ticket with PKINIT. In this case, log in to the IdM host with username/password and try to manually perform kinit in order to get more information:

root@ipaclient$ kinit -X X509_user_identity='' demosc1


If the command outputs the following:

kinit: Pre-authentication failed: Failed to verify own certificate (depth 1): self signed certificate in certificate chain while getting initial credentials

then check the content of /etc/krb5.conf on the IdM host. The [realms] section must contain a configuration for the IPA realm with pkinit_anchors:

 pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem
 pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem
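Put together, the relevant /etc/krb5.conf fragment might look like the following sketch, where IPADOMAIN.COM and the KDC host name are hypothetical placeholders:

```
[realms]
 IPADOMAIN.COM = {
  kdc = ipaserver.ipadomain.com:88
  pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem
  pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem
 }
```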


The file defined in pkinit_anchors must exist, be readable, and contain the certificate of the CA that signed the SmartCard certificate. If this is not the case, run the following commands on any IPA server:

root@ipaserver$ ipa-cacert-manage install -n SmartCardCA -t CT,C,C -p $DM_PWD ca.pem
root@ipaserver$ ipa-certupdate

And run the ipa-certupdate command on all IdM hosts in order to download the certificate.
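To check whether a bundle already contains the SmartCard CA, you can list the subject of every certificate in it with openssl. The sketch below builds a demo bundle out of two throwaway self-signed CAs; in practice you would point the last command at /var/lib/ipa-client/pki/kdc-ca-bundle.pem instead:

```shell
# Build a demo bundle from two throwaway self-signed "CAs".
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/ca1-key.pem \
  -out /tmp/ca1.pem -subj "/O=EXAMPLE.ORG/CN=IPA CA" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -keyout /tmp/ca2-key.pem \
  -out /tmp/ca2.pem -subj "/O=EXAMPLE.ORG/CN=Smart Card CA" 2>/dev/null
cat /tmp/ca1.pem /tmp/ca2.pem > /tmp/bundle.pem

# Wrap the bundle in a PKCS#7 structure and print each certificate's
# subject; the SmartCard CA should appear in the output.
openssl crl2pkcs7 -nocrl -certfile /tmp/bundle.pem \
  | openssl pkcs7 -print_certs -noout | grep subject
```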

If the kinit command outputs the following:

kinit: Certificate mismatch while getting initial credentials

check that the SmartCard certificate is associated with the username provided at the console (see mapping between a SmartCard certificate and an IdM user).

by floblanc at June 02, 2017 01:55 PM

Troubleshooting: ssh to an IdM host with a SmartCard

IdM allows you to ssh from a non-enrolled host into an IdM-enrolled host, using SmartCard authentication instead of ssh authorized keys. To log in as demosc1 into the remote host, the ssh command would be the following:

localuser@localhost$ ssh -I /usr/lib64/ -l demosc1
Enter PIN for 'PIV_II (PIV Card Holder pin)':

The -I option specifies a PKCS#11 shared library, and -l the username on the remote host.


Configuration of the local host

First check that the local host is properly configured for SmartCards. The opensc package must be installed, and the SmartCard daemon must be running.

localuser@localhost$ sudo dnf install opensc
localuser@localhost$ sudo systemctl start pcscd.service pcscd.socket


Configuration of the remote (IdM) host

When IdM is properly configured, ssh will prompt for the SmartCard PIN and authenticate the user. If there is an issue with the certificate, ssh will fall back to another authentication method (private keys or username/password).

In this case, enable debug logs for ssh authentication on the IdM host. Edit /etc/sssd/sssd.conf and add the following line in the [ssh] section:

debug_level = 9

then restart sssd using

root@ipaclient$ systemctl restart sssd

The logs will be located on the IdM host in /var/log/sssd/sssd_ssh.log.


The Smart Card CA is not trusted by SSSD

If you see the following in /var/log/sssd/sssd_ssh.log:

[sssd[ssh]] [cert_to_ssh_key] (0x0020): CERT_VerifyCertificateNow failed [-8179].
[sssd[ssh]] [get_valid_certs_keys] (0x0040): cert_to_ssh_key failed, ignoring.

then it means that the CA that signed the SmartCard certificate is not trusted. The trusted certificates are stored in /etc/pki/nssdb and can be listed using:

root@ipaclient$ certutil -L -d /etc/pki/nssdb

Certificate Nickname Trust Attributes     SSL,S/MIME,JAR/XPI

SmartCardCA                               CT,C,C


If the CA certificate is missing, add it using:

root@ipaclient$ certutil -A -d /etc/pki/nssdb -n SmartCardCA -t CT,C,C -i ca.pem


The user is not an IdM user

If the ssh operation does not log any lines in /var/log/sssd/sssd_ssh.log, it probably means that the supplied user name is not defined in IdM. You can check with:

root@ipaclient$ ipa user-find demosc1
0 users matched
Number of entries returned 0

Check that you provided the right user name, or define an IdM user and associate the SmartCard certificate with that user.


The certificate is not mapped to the IdM user

If you see the following in /var/log/sssd/sssd_ssh.log:

Found 1 entries in domain

but the authentication fails, check that the SmartCard certificate is associated with the provided username (refer to mapping between a SmartCard certificate and an IdM user).

by floblanc at June 02, 2017 12:59 PM

FreeIPA: troubleshooting SmartCard authentication

RHEL 7.4 beta is now available, delivering a new version of IPA which contains support for SmartCard authentication. This feature allows you to use a certificate stored on a SmartCard to log in to the IdM WebUI, to ssh to an IdM-enrolled host, or to log in to the console or Gnome Desktop Manager of an IdM-enrolled host.

This feature is really powerful but may also seem difficult to troubleshoot. I will explain where to look for additional information when authentication fails, and how to fix the most common issues.

The information is split into posts specific to each authentication method:



by floblanc at June 02, 2017 12:12 PM

May 30, 2017

Striker Leggette

Deploy Windows 2016 AD and Fedora 25 IPA with a One-way Trust


For the purpose of this post, the two machines I used for these instructions are VMs running atop a Fedora 25 hypervisor which was configured as outlined here:

Configuring Fedora 25 as a Hypervisor using Virtual Machine Manager

Note: Make sure that when deploying IPA and AD, you do so on separate domains. Otherwise, IPA clients will query the AD server directly when they dig the domain for LDAP records.

INCORRECT: IPA Server: | AD Server:
CORRECT: IPA Server: | AD Server:
CORRECT: IPA Server: | AD Server:
CORRECT: IPA Server: | AD Server:
CORRECT: IPA Server: | AD Server:

Deploying Windows 2016 AD

  1. On my first VM, I booted using a Trial ISO of Windows Server 2016:
  2. Begin the installation with Windows Server 2016 Standard Evaluation (Desktop Experience).
  3. After the machine boots from installation, configure the Hostname:
    1. Server Manager – Local Server.
    2. Click on the machine’s current hostname.
    3. Click Change and change the hostname to your preference.
      • Example: win16ad01
  4. Configure Active Directory and DNS:
    1. Server Manager – Dashboard.
    2. Add Roles and Features.
    3. For Installation Type, choose Role-based or feature-based installation.
    4. For Server Roles, click Active Directory Domain Services and DNS Server.
    5. Within Server Manager, go to AD DS and click on More.
    6. Click on Promote this server to a domain….
    7. In the next window, choose Add a new Forest.
      • Here, set the full DN of your Forest.
        • Example:

Deploying Fedora 25 IPA

  1. For the second VM, I booted using the HTTP link to Fedora 25 Server:
  2. During pre-installation:
    1. Choose Minimal at Software Selection.
    2. In Network & Host Name, set the full hostname of the machine.
      • Example:
    3. Make sure to give /var a large amount of space, as this is where the IPA Database and Logs will be stored.
  3. After installation and reaching a root prompt:
    1. Install the IPA packages and the RNG package:
      • dnf install ipa-server ipa-server-dns ipa-server-trust-ad rng-tools -y
      • The RNG daemon will generate free entropy to be used during the certificate database creation, otherwise that process can take a very long time to complete.
    2. Open the correct ports that IPA will use:
      • firewall-cmd --add-service=freeipa-ldap --add-service=freeipa-ldaps --add-service=freeipa-trust --permanent
      • firewall-cmd --reload
    3. Start the RNG Daemon:
      • systemctl start rngd
    4. Configure the IPA instance:
      • ipa-server-install --setup-dns
        1. For Server host name, press Enter (Hostname was set during pre-install).
        2. For Domain name and Realm name, press Enter.
        3. Press Enter when prompted for DNS forwarders.
        4. For Enter an IP address for a DNS forwarder, enter the IP Address of your Windows 2016 AD.
        5. Type yes and press Enter to finalize the pre-configuration and begin installation.

Configure the One-way Trust

  1. From the Fedora root prompt, prepare IPA for the trust:
    • ipa-adtrust-install
    • All options should be default.
  2. Configure and Verify the trust:
    1. ipa trust-add --type=ad --admin Administrator --password
      • Example: ipa trust-add --type=ad --admin Administrator --password
    2. id

Get involved and ask questions

You can get in touch with the IPA community by joining the #freeipa and #sssd channels on Freenode and the freeipa-users and sssd-users mailing lists.

by Striker at May 30, 2017 05:38 AM
