FreeIPA Identity Management planet - technical blogs

September 20, 2017

Rob Crittenden

Setting up AD for winsync testing

It had literally been years since I had to set up an AD test environment to do basic winsync testing. I found some scraggly notes and decided to transcribe them here for posterity. They were written for AD 2003 and things are a bit different in 2008, but I still found it fairly easy to figure out (in 2008 there is less need to go to the Start menu).

I don’t in fact remember what a lot of these notes do so don’t kill the messenger.

Start with an AD 2008 instance by following

Once that is booted:

  1. Change the hostname
  2. My Computer -> right click -> Properties -> Computer Name -> Change = win2003
  3. Manage Your Server
    1. Add or remove a role -> Next [Preliminary Steps]
    2. Custom -> Domain Controller
    3. Domain controller for a new domain
    4. Domain in a new forest
    5. Fill in the DNS name for the new domain
  4. If there is a conflict, select Install and Configure DNS on this server
  5. Start -> Control Panel -> Add or Remove Programs
    1. Add/Remove Windows Components
    2. Certificate Services, yes to the question
    3. Next
    4. Enterprise root CA
    5. AD CA for the common name
    6. Accept other defaults
    7. Ok about IIS
  6. REBOOT (or wait a little while for certs to issue)
  7. Start -> Admin Tools -> Certificate Authority
    1. Certificate Authority -> AD CA -> Issued Certificates
    2. Select the cert, double click
    3. Certificate Path
    4. Select AD CA, view certificate
    5. Details
    6. Copy to File
    7. Base-64 encoded X.509 (.cer)
  8. Install WinSCP
  9. Copy the cert to the IPA server

Now on the IPA master the agreement can be created:

# ipa-replica-manage connect --winsync --cacert=/home/rcrit/adca.cer -v --no-lookup --binddn 'cn=administrator,cn=users,dc=example,dc=com' --bindpw <AD pw> --passsync <something>

As I recall I tended to put the AD hostname into /etc/hosts (hence the --no-lookup).
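One quick sanity check before creating the agreement: the connect step needs the exported AD CA cert in Base-64 (PEM) form, not binary DER. A tiny hedged sketch of that check (the sample string is made up):

```python
def looks_like_pem_cert(data):
    """Rough check that an exported CA cert is Base-64 (PEM) encoded,
    rather than binary DER."""
    data = data.strip()
    return (data.startswith("-----BEGIN CERTIFICATE-----")
            and data.endswith("-----END CERTIFICATE-----"))

# A PEM export begins and ends with the familiar armor lines:
pem_sample = "-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----"
print(looks_like_pem_cert(pem_sample))  # True
```

To check a real export, read the copied .cer file and pass its contents to the function.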

by rcritten at September 20, 2017 12:46 PM

September 18, 2017

Red Hat Blog

Evaluating Total Cost of Ownership of the Identity Management Solution

Increasing Interest in Identity Management

Over the last several months I’ve seen a rapid growth of interest in Red Hat’s Identity Management (IdM) solution. This might be due to several reasons.

  • First of all, IdM has become much more mature and well known. In the past you would come to a conference, talk about FreeIPA (the community version of IdM) and IdM, and find that much of the audience had never heard of them. That is no longer the case. IdM, as a solution, is well known now. There are thousands of deployments all over the world, using both Red Hat supported and community bits. Many projects and open source communities have implemented integration with it as an identity back end. It is no surprise that customers looking for a good, cost-effective identity management solution are now aware of it and start considering it. This leads to questions, calls, face-to-face meetings and presentations.
  • Another reason is that the IdM/FreeIPA project has been keeping an ear to the ground and was quick to adjust its plans and implement features in response to tightening regulations in different verticals. Consider, for example, the government space. Over the last couple of years, policies have become more strict, requiring a robust solution for two-factor authentication using CAC and PIV smart cards. IdM responded by adding support for smart card based authentication, making it easy to achieve compliance with these regulations.
  • Yet another reason is that more and more customers realize that moving to a modern Identity Management system will enable them to more quickly and easily transition into the age of hybrid cloud, taking advantage of both public and on-premises clouds like OpenStack, as well as of the world of containers and container management platforms like OpenShift.

Software Costs

One of the main questions people ask when they hear about the IdM solution is: is Identity Management in Red Hat Enterprise Linux free? It is. Identity Management in Red Hat Enterprise Linux is a component of the platform, not a separately licensable product. What does this mean? It means that you can install IdM on any Red Hat Enterprise Linux server system with a valid subscription and get support from Red Hat.

There are many solutions on the market that build a business around identity management services and integration with Active Directory and that are not free. They require extra cost and dip into your IT budget. Red Hat’s IdM solution is different: it is available without extra upfront cost for the software itself.

Total Cost of Ownership

People who have done identity management projects would support me in the claim that Identity Management should not be viewed as a project. It should be viewed as a program. There can be different phases, but the mindset and budgeting should assume that Identity Management is an ongoing endeavor. And this is actually quite reasonable if you think about it. Identity Management software connects to actual people and workforce dynamics. As the workforce evolves, the Identity Management software reflects the changes: growth, re-orgs, acquisitions and spin-offs. No two identity management implementations are the same. The solution has to adapt to a long list of use cases and be capable of meeting the unique requirements of every deployment. On one hand, the solution has to work all the time, and on the other hand, its limits are constantly stretched.

During my visits, I also help to architect a solution if customers are interested in quick “on the fly” white-boarding suggestions. Such designs need to be taken with a grain of salt, as drive-by architecture usually considers the main technical requirements outlined during the discussion but does not consider the hidden challenges and roadblocks that each organization has. So the suggested architecture should be viewed as a very rough draft and something to start thinking about, rather than a precise blueprint that can be followed to the letter. After the first conversation it is recommended to read the various publicly available materials. Red Hat documentation and man pages are good sources of information, as are the community project wikis for FreeIPA and SSSD. Identity Management documentation is very well maintained and regularly updated to reflect new changes or address reported issues.

In addition to reading documentation one can engage Red Hat professional services to help with a proof-of-concept or production deployment. Those services are priced per engagement. There are different pre-packaged offerings with the predefined results that you can purchase from Red Hat – just get in touch with your sales representative or technical account manager.

No matter what software you choose for your identity management solution, it makes sense to have someone on the vendor side who will be there to help with any issues and challenges you face, to connect you to experts, and to reduce your downtime. Red Hat offers multiple tiers of support. One level includes a Technical Account Manager (TAM). More about the TAM program can be read here. Since Identity Management should be viewed as an ongoing process and effort, it makes sense to consider a TAM or the equivalent service from your vendor. Is it an extra cost? Yes, but it is one that is independent of the solution you choose. It is just a good risk mitigation strategy that makes your money work for you with the best possible return.

As always your comments and feedback are very welcome.

by Dmitri Pal at September 18, 2017 02:19 PM

September 11, 2017

Florence Blanc-Renaud

Troubleshooting FreeIPA: pki-tomcatd fails to start

When performing an upgrade of FreeIPA, you may encounter an issue with pki-tomcatd failing to start. At first this issue looks related to the upgrade, but it often reveals a latent problem that gets detected only because the upgrade triggers a restart of pki-tomcatd.

So how to troubleshoot this type of issue?


Upgrade logs

The upgrade logs to /var/log/ipaupgrade.log, which may contain a lot of useful information. In this specific case, I could see:

[...] DEBUG The ipa-server-upgrade command failed,
exception: ScriptError: CA did not start in 300.0s
[...] ERROR CA did not start in 300.0s
[...] ERROR The ipa-server-upgrade command failed. See
/var/log/ipaupgrade.log for more information


CA debug logs

The first step is to understand why pki-tomcatd refuses to start. This process runs inside Tomcat and corresponds to the CA component of FreeIPA. It logs to /var/log/pki/pki-tomcat/ca/debug:

[...][localhost-startStop-2]: ============================================
[...][localhost-startStop-2]: ===== DEBUG SUBSYSTEM INITIALIZED =======
[...][localhost-startStop-2]: ============================================
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=debug
[...][localhost-startStop-2]: CMSEngine: initialized debug
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=log
[...][localhost-startStop-2]: CMSEngine: ready to init id=log
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/signedAudit/ca_audit)
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/system)
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/transactions)
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=log
[...][localhost-startStop-2]: CMSEngine: initialized log
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=jss
[...][localhost-startStop-2]: CMSEngine: ready to init id=jss
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=jss
[...][localhost-startStop-2]: CMSEngine: initialized jss
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=dbs
[...][localhost-startStop-2]: CMSEngine: ready to init id=dbs
[...][localhost-startStop-2]: DBSubsystem: init() mEnableSerialMgmt=true
[...][localhost-startStop-2]: Creating LdapBoundConnFactor(DBSubsystem)
[...][localhost-startStop-2]: LdapBoundConnFactory: init
[...][localhost-startStop-2]: LdapBoundConnFactory:doCloning true
[...][localhost-startStop-2]: LdapAuthInfo: init()
[...][localhost-startStop-2]: LdapAuthInfo: init begins
[...][localhost-startStop-2]: LdapAuthInfo: init ends
[...][localhost-startStop-2]: init: before makeConnection errorIfDown is true
[...][localhost-startStop-2]: makeConnection: errorIfDown true
[...][localhost-startStop-2]: TCP Keep-Alive: true
[...][localhost-startStop-2]: SSLClientCertificateSelectionCB: Setting desired cert nickname to: subsystemCert cert-pki-ca
[...][localhost-startStop-2]: LdapJssSSLSocket: set client auth cert nickname subsystemCert cert-pki-ca
[...][localhost-startStop-2]: SSL handshake happened
Could not connect to LDAP server host port 636 Error netscape.ldap.LDAPException: Authentication failed (49)
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.makeConnection(
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.init(
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.init(
 at com.netscape.cmscore.dbs.DBSubsystem.init(
 at com.netscape.cmscore.apps.CMSEngine.initSubsystem(
 at com.netscape.cmscore.apps.CMSEngine.initSubsystems(
 at com.netscape.cmscore.apps.CMSEngine.init(
 at com.netscape.certsrv.apps.CMS.init(
 at com.netscape.certsrv.apps.CMS.start(
 at com.netscape.cms.servlet.base.CMSStartServlet.init(
 at javax.servlet.GenericServlet.init(
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
 at java.lang.reflect.Method.invoke(
 at Method)
 at org.apache.catalina.core.StandardWrapper.initServlet(
 at org.apache.catalina.core.StandardWrapper.loadServlet(
 at org.apache.catalina.core.StandardWrapper.load(
 at org.apache.catalina.core.StandardContext.loadOnStartup(
 at org.apache.catalina.core.StandardContext.startInternal(
 at org.apache.catalina.util.LifecycleBase.start(
 at org.apache.catalina.core.ContainerBase.addChildInternal(
 at org.apache.catalina.core.ContainerBase.access$000(
 at org.apache.catalina.core.ContainerBase$

The exception shows that LDAP authentication failed with return code 49: invalid credentials.


Communication between pki-tomcatd and the LDAP server

We can see that pki-tomcatd is trying to open an LDAP connection over SSL but fails to authenticate. Within FreeIPA, pki-tomcat stores its data in the 389-ds LDAP server and needs to communicate with this server over LDAP.

The configuration of this communication channel can be read in /etc/pki/pki-tomcat/ca/CS.cfg:

internaldb.ldapconn.port=636
internaldb.ldapauth.authtype=SslClientAuth
internaldb.ldapauth.bindDN=cn=Directory Manager
internaldb.ldapauth.clientCertNickname=subsystemCert cert-pki-ca

The connection uses port 636 (the SSL port) with SSL client authentication (authtype=SslClientAuth). This means that pki-tomcatd presents a user certificate to the LDAP server, and the LDAP server maps this certificate to a user in order to authenticate the connection.

Note: Authtype can be either SslClientAuth or BasicAuth (authentication with a username and password).
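For comparison, a BasicAuth configuration would use roughly the following CS.cfg keys (a hedged sketch, not taken from this deployment; the bind password itself lives in password.conf, not in CS.cfg):

```
internaldb.ldapauth.authtype=BasicAuth
internaldb.ldapauth.bindDN=cn=Directory Manager
```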

In this case, the SSL client authentication is done with the certificate named 'subsystemCert cert-pki-ca' that is stored in /etc/pki/pki-tomcat/alias. So what could be causing the authentication to fail? We need to check that the certificate is available in /etc/pki/pki-tomcat/alias, that pki-tomcat is able to use the associated private key, and that the LDAP server is able to map this certificate to a user.


Check the subsystemCert cert-pki-ca

The first step consists of making sure that this certificate is present in /etc/pki/pki-tomcat/alias:

$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'
 Version: 3 (0x2)


Then make sure that the private key can be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (the value of the internal=… tag):

$ sudo grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
$ sudo certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
certutil: Checking token "NSS Certificate DB" in slot "NSS User Private Key and Certificate Services"
< 0> rsa 86a7fe00cc2a01ad085f35d4ed3e84e7b82ab4f5 subsystemCert cert-pki-ca

At this point we know that pki-tomcat is able to access the certificate and the private key. So the issue is likely to be on the LDAP server side.


LDAP server configuration

The LDAP configuration describes how a certificate can be mapped to a user in /etc/dirsrv/slapd-IPADOMAIN-COM/certmap.conf:

$ sudo cat /etc/dirsrv/slapd-IPADOMAIN-COM/certmap.conf 
certmap default default
default:FilterComps uid
certmap ipaca CN=Certificate Authority,O=IPADOMAIN.COM
ipaca:CmapLdapAttr seeAlso
ipaca:verifycert on

This means that when the LDAP server receives an authentication request with a certificate issued by the CA CN=Certificate Authority,O=IPADOMAIN.COM, it will look for users that contain a seeAlso attribute equal to the subject of the certificate, and the user entry must contain the certificate in the usercertificate attribute (verifycert: on).
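In other words, with CmapLdapAttr set to seeAlso the server searches for an entry whose seeAlso attribute equals the certificate subject DN. A minimal sketch of that mapping step (not the actual 389-ds code; the DN is the one from the default setup described below):

```python
def certmap_filter(cmap_ldap_attr, cert_subject_dn):
    """Build the LDAP search filter implied by the certmap configuration.

    A real server also escapes special filter characters; that is
    omitted here for brevity.
    """
    return "({}={})".format(cmap_ldap_attr, cert_subject_dn)

# The subject DN of 'subsystemCert cert-pki-ca' in a default deployment:
print(certmap_filter("seeAlso", "CN=CA Subsystem,O=IPADOMAIN.COM"))
# (seeAlso=CN=CA Subsystem,O=IPADOMAIN.COM)
```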

With a default config, the 'subsystemCert cert-pki-ca' certificate is mapped to the user uid=pkidbuser,ou=people,o=ipaca. So let’s compare the user entry and the certificate:

$ ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso
Enter LDAP Password: 
dn: uid=pkidbuser,ou=people,o=ipaca
userCertificate:: MIID...uwab3
description: 2;4;CN=Certificate Authority,O=IPADOMAIN.COM;CN=CA Subsystem,O=IPADOMAIN.COM
seeAlso: CN=CA Subsystem,O=IPADOMAIN.COM

$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca' -a

The certificate in the userCertificate attribute is different from the one in the NSS database! This can also be seen by comparing the serial number with the value from the LDAP entry (in description: 2;<serial>;<issuer>;<subject>):

$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca' | grep Serial
 Serial Number: 1341718541 (0x4ff9000d)

This explains why pki-tomcat could not authenticate to the LDAP server. The fix consists of updating the LDAP entry with the right certificate (and do not forget to update the description attribute with the right serial number!)
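The serial-number comparison can be scripted. Here is a small sketch that parses the description attribute layout shown above (2;&lt;serial&gt;;&lt;issuer&gt;;&lt;subject&gt;) and compares it with the serial printed by certutil; the values are the ones from the example output:

```python
def parse_description(desc):
    """Split the pkidbuser description attribute '2;<serial>;<issuer>;<subject>'."""
    _version, serial, issuer, subject = desc.split(";", 3)
    return int(serial), issuer, subject

serial, issuer, subject = parse_description(
    "2;4;CN=Certificate Authority,O=IPADOMAIN.COM;CN=CA Subsystem,O=IPADOMAIN.COM")
nss_serial = 1341718541  # Serial Number reported by certutil -L

# A mismatch here means the LDAP entry holds a stale certificate:
print(serial == nss_serial)  # False in this broken deployment
```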

But we still do not know the root cause of the inconsistency between the NSS database /etc/pki/pki-tomcat/alias and the user entry for uid=pkidbuser. This could be described in another blog post (for the impatient: the automatic renewal of the certificate failed to update the LDAP server entry…)

by floblanc at September 11, 2017 03:05 PM

September 07, 2017

Red Hat Blog

Discovery and Affinity

Questions related to DNS and service discovery regularly come up during deployments of Identity Management (IdM) in Red Hat Enterprise Linux in a trust configuration with Active Directory. This blog article will shed some light on this aspect of the integration.

We will start with a description of the environment. Let us say that the Active Directory environment consists of four servers, split in pairs between two datacenters in two different geographic regions, with a low-bandwidth, high-latency network between them. For the sake of this article it does not really matter whether there are several domains or not. Of course, factoring in the hierarchy of domains and the replication agreements these servers have matters, but for the discovery and affinity conversation we will assume that all four mentioned servers represent one AD domain and that replication is working properly, so the content of the servers is the same. We will also introduce four IdM servers, two in each datacenter. There is a one-way trust relationship established between the IdM and Active Directory forests (IdM trusts AD). Finally, there is a Linux client in each datacenter. This Linux client represents all the Linux clients in the datacenter, assuming they are configured in the same way. The following diagram represents this setup.

The horizontal lines between AD and IdM servers denote replication between those servers and yellow arrows show one-way trust.


Communication flows

The questions about affinity and discovery arise mostly in setups like this, which are in fact quite popular. So how exactly can I configure the client to prefer local servers? The question is actually more complex than that. Let us look under the hood and see what communication is actually going on.

On the next diagram we will remove the replication and trust lines and arrows to make the diagram cleaner and more readable, but please keep in mind that they are still there.


The first diagram shows the Kerberos protocol communication that is used for authentication of the users as well as for authentication of the client itself. As you can see, clients can talk to either of the AD servers and to either of the IdM servers. It is also important to understand that there is a client component that runs on the IdM servers themselves. Those “embedded” clients are used to pull identity information from Active Directory, so they also need to be able to connect to the right AD servers.


In addition to the communication over the Kerberos protocol there is communication over the LDAP protocol. The following diagram shows the details.

The difference is that the client only talks LDAP to the IdM servers; however, the “embedded” client on the IdM server can talk to either of the AD servers.

So the task of preventing the communication from going over the slow, high-latency network to the other datacenter boils down to the following list of items:

  • Make sure that the client talks to local IdM servers over LDAP
  • Make sure that the client talks to local IdM servers over Kerberos
  • Make sure that the client talks to local AD servers over Kerberos
  • Make sure that the embedded client on the IdM server talks to local AD servers over Kerberos
  • Make sure that the embedded client on the IdM server talks to local AD servers over LDAP


So to accomplish the goal, LDAP and Kerberos configuration needs to be considered on the clients and on the IdM servers.

There is one more factor that we have not yet brought up. Active Directory has a notion of sites. An Active Directory client usually detects the presence of a site definition by looking up information in LDAP and then resolving the actual servers using DNS. If a site is configured, the DNS lookup will return the list of local AD servers for the client to connect to. Let us assume that the AD servers in each of the two datacenters form a site.
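For reference, site-aware discovery works through DNS SRV records: a client first queries the generic record for the domain and, once it knows its site, queries the _sites form that returns only that site's domain controllers. A small sketch of the record names involved (the domain and site names here are made up):

```python
def dc_srv_name(domain, site=None):
    """SRV name an AD client queries to find domain controllers.

    With a site name, the '_sites' form returns only the DCs
    registered for that site.
    """
    if site:
        return "_ldap._tcp.{}._sites.dc._msdcs.{}".format(site, domain)
    return "_ldap._tcp.dc._msdcs.{}".format(domain)

print(dc_srv_name("ad.example.com"))
# _ldap._tcp.dc._msdcs.ad.example.com
print(dc_srv_name("ad.example.com", "Datacenter1"))
# _ldap._tcp.Datacenter1._sites.dc._msdcs.ad.example.com
```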

There is a similar capability in IdM but only if the IdM DNS is used. Now we have all the permutations of the setup established and can jump to the configuration options.


Configuring LDAP on the client

IdM DNS is used

Lookup of the corresponding services can be done automatically based on DNS. Clients will use this method by default unless it is explicitly overridden during ipa-client installation by providing failover parameters on the command line, or the SSSD configuration is modified after the installation. See the next section for more details about altering the configuration and not using DNS discovery.

If the default configuration is chosen and not overridden, and IdM DNS is used, then the client can take advantage of a feature called DNS locations. It is similar in concept to AD sites. The client is expected to have the IdM DNS configured in resolv.conf. If locations are configured in IdM, the client will always receive the local servers in the DNS lookup for the IdM service when it asks for SRV records. For more information on how to configure this feature see the Linux Domain Identity, Authentication, and Policy Guide.
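Once the SRV records come back, the client orders them roughly as RFC 2782 describes: lower priority values first, with a weighted-random choice within each priority. A simplified sketch (the hostnames are invented; with DNS locations the local servers would carry the lower priority):

```python
import random

def order_srv(records, rng=None):
    """Order SRV records: ascending priority, then weighted-random
    selection within each priority group (simplified RFC 2782)."""
    rng = rng or random.Random()
    ordered = []
    for prio in sorted({r["priority"] for r in records}):
        group = [r for r in records if r["priority"] == prio]
        while group:
            total = sum(r["weight"] for r in group) or 1
            pick = rng.randrange(total)
            acc = 0
            for rec in group:
                acc += rec["weight"]
                if pick < acc:
                    break
            ordered.append(rec)
            group.remove(rec)
    return ordered

records = [
    {"host": "idm1.dc2.example.com", "priority": 50, "weight": 100},
    {"host": "idm1.dc1.example.com", "priority": 0, "weight": 100},
    {"host": "idm2.dc1.example.com", "priority": 0, "weight": 100},
]
# Local (priority 0) servers always come before the remote one:
print(order_srv(records)[-1]["host"])  # idm1.dc2.example.com
```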

IdM DNS is not used

If IdM DNS is not used, the chosen DNS server will not have the appropriate information about the IdM server topology, and thus the service discovery method can’t be used. Clients have to be explicitly configured with the failover information. This is done by explicitly configuring SSSD on the client with a primary and secondary server configuration. One can read more about the details of this configuration in the sssd-ldap man page (look for the Failover section). Besides changing the SSSD configuration, one can use additional arguments during ipa-client installation to provide an explicit list of primary and secondary servers the client should connect to.
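As a sketch, such an explicit failover configuration in sssd.conf could look like the following (all hostnames are invented; ipa_server lists the primary servers and ipa_backup_server the servers to fall back to, as described in the sssd-ipa and sssd-ldap man pages):

```
[domain/ipa.example.com]
id_provider = ipa
ipa_server = idm1.dc1.ipa.example.com, idm2.dc1.ipa.example.com
ipa_backup_server = idm1.dc2.ipa.example.com, idm2.dc2.ipa.example.com
```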


Configuring Kerberos on the client

There are two parts of the Kerberos configuration. One is related to the communication with IdM servers and another to communication with AD servers.

For communication with IdM servers the client will use the same information that is described in the previous section, so discovery or explicit configuration applies to the Kerberos communication too. However, communication with AD requires special handling. There is currently no way for the client to automatically discover the right AD servers to talk to. So the only way for clients to limit themselves to specific AD servers is to alter the krb5.conf configuration to add the AD realm information and explicitly list the AD servers to talk to. For more information on how to do this see the krb5.conf man page.
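A hedged sketch of such a krb5.conf addition (the realm and server names are invented):

```
[realms]
 AD.EXAMPLE.COM = {
  kdc = addc1.dc1.ad.example.com
  kdc = addc2.dc1.ad.example.com
 }
```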

There is an outstanding feature request that the SSSD project is aware of. It is on the roadmap, but as of the time of writing, automatic discovery of the AD servers in the same location is not yet implemented.


Configuring embedded client on the server for Kerberos and LDAP

The embedded client on the IdM server is configured as a client of the Active Directory server. For several releases it has been capable of discovering AD sites and using them. However, to do the discovery, the first lookup might hit an AD server in the remote location that is not accessible or is reachable only over the slow network. In such situations, for the embedded client not to give up too early, the dns_resolver_timeout should be increased. For clarity, the man pages were recently updated to better explain the meaning and use of this timeout. These changes did not make Red Hat Enterprise Linux 7.4 but will be included in the next minor release. There is also a plan to increase the default value of the timeout in the next community release, which might be included in the next Red Hat Enterprise Linux minor release.

Before 7.4, the client on the IdM server used an automated discovery algorithm to determine the right AD site and AD servers to use. In some cases autodiscovery did not return the right information. There are different reasons why wrong information might be returned. In real deployments servers are added and removed, and in some cases discovery would return an AD server that had been decommissioned or completely firewalled.

To overcome this issue a new feature was added in a recent community release and made its way into Red Hat Enterprise Linux 7.4. It allows explicit overrides of some of the otherwise automatically discovered information. The following page on the SSSD site describes the related changes in a lot of detail. The corresponding changes have been made to the SSSD man pages. Unfortunately, a separate chapter of the Red Hat documentation that helps with configuration of failover and domain discovery in complex setups like this was not yet available from the Red Hat web site at the moment of writing. The corresponding team is actively working on it, so by the time you read this you will most likely be able to find a corresponding chapter in the Windows Integration Guide.

If you are running an IdM version that has not yet been updated to 7.4, you can use the same failover-related overrides with AD servers as the ones described in the sssd-ldap man pages for IdM servers. Those settings will explicitly pin the client on the IdM server to the specific set of AD servers you chose, without respect to sites. This, however, might not work well in cases where you have a complex hierarchy of domains, since there is no way to express which preferred servers belong to which domain. The general recommendation would be to upgrade your IdM server to 7.4 and use the ability to pin the client running on the server to a specific AD site.



As you can see, getting the failover configuration right is not simple. But now you have all the moving parts covered and can make sure that all the services in one datacenter only talk to the services in that same datacenter.

As always I will be looking forward to your questions and comments as well as suggestions regarding next blog topics.

by Dmitri Pal at September 07, 2017 01:05 PM

September 04, 2017

Fraser Tweedale

Running Keycloak in OpenShift

At PyCon Australia in August I gave a presentation about federated and social identity. I demonstrated the concepts using Keycloak, an open source, feature-rich identity broker. Keycloak is deployed in JBoss, so I wasn’t excited about the prospect of setting up Keycloak from scratch. Fortunately there is an official Docker image for Keycloak, so with that as the starting point I took the opportunity to finally learn about OpenShift v3, too.

This post is simply a recounting of how I ran Keycloak on OpenShift. Along the way we will look at how to get the containerised Keycloak to trust a private certificate authority (CA).

One thing that is not discussed is how to get Keycloak to persist configuration and user records to a database. This was not required for my demo, but it will be important in a production deployment. Nevertheless I hope this article is a useful starting point for someone wishing to deploy Keycloak on OpenShift.

Bringing up a local OpenShift cluster

To deploy Keycloak on OpenShift, one must first have an OpenShift. OpenShift Online is Red Hat’s public PaaS offering. Although running the demo on a public PaaS was my first choice, OpenShift Online was experiencing issues at the time I was setting up my demo, so I sought a local solution. This approach would have the additional benefit of not being subject to the whims of conference networks (or so it was supposed to; but that is a story for another day!)

oc cluster up

Next I tried oc cluster up. oc is the official OpenShift client program. On Fedora, it is provided by the origin-clients package. The oc cluster up command pulls the required images and brings up an OpenShift cluster on the system’s Docker infrastructure. The command takes no further arguments; it really is that simple! Or is it…?

% oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.0 image ...
   Pulling image openshift/origin:v1.5.0
   Pulled 0/3 layers, 3% complete
   Pulled 3/3 layers, 100% complete
   Image pull complete
-- Checking Docker daemon configuration ... FAIL
   Error: did not detect an --insecure-registry argument on the Docker daemon

     Ensure that the Docker daemon is running with the following argument:

OK, so it is not that simple. But it got a fair way along, and (kudos to the OpenShift developers) they have provided actionable feedback about how to resolve the issue. I added --insecure-registry to the OPTIONS variable in /etc/sysconfig/docker, then restarted Docker and tried again:

% oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.0 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

Success! Unfortunately, on my machine with several virtual networks, oc cluster up messed a bit too much with the routing tables, and when I deployed Keycloak on this cluster it was unable to communicate with my VMs. No doubt these issues could have been solved, but being short on time and with other approaches to try, I abandoned this approach.


Minishift

Minishift is a tool that launches a single-node OpenShift cluster in a VM. It supports a variety of operating systems and hypervisors. On GNU+Linux it supports KVM and VirtualBox.

First install docker-machine and docker-machine-driver-kvm (follow the instructions at the preceding links). Unfortunately these are not yet packaged for Fedora.

Download and extract the Minishift release for your OS from

Run minishift start:

% ./minishift start
-- Installing default add-ons ... OK
Starting local OpenShift cluster using 'kvm' hypervisor...
Downloading ISO ''

... wait a while ...

It downloads a boot2docker VM image containing the openshift cluster, boots the VM, and the console output then resembles the output of oc cluster up. I deduce that oc cluster up is being executed on the VM.
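To interact with the cluster from the host, recent Minishift releases bundle an oc client and a helper to put it on your PATH. The commands below are an assumption based on the Minishift documentation, not something from this walkthrough:

```shell
# Put the bundled oc client on PATH, then log in to the VM-hosted cluster.
eval $(./minishift oc-env)
oc login -u developer -p developer
oc status
```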

At this point, we’re ready to go. Before I continue, it is important to note that once you have access to an OpenShift cluster, the user experience of creating and managing applications is essentially the same. The commands in the following sections apply regardless of whether you are running your app on OpenShift Online, on a cluster on your workstation, or anything in between.

Preparing the Keycloak image

The JBoss project provides official Docker images, including one for Keycloak. This image runs fine in plain Docker, but the directory permissions are not correct for running in OpenShift.

The Dockerfile for this image is found in the jboss-dockerfiles/keycloak repository on GitHub. Although they do not publish an official image for it, this repository also contains a Dockerfile for Keycloak on OpenShift! I was able to build that image myself and upload it to my Docker Hub account. The steps were as follows.

First clone the jboss-dockerfiles repo:

% git clone docker-keycloak
Cloning into 'docker-keycloak'...
remote: Counting objects: 1132, done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 1132 (delta 14), reused 17 (delta 8), pack-reused 1102
Receiving objects: 100% (1132/1132), 823.50 KiB | 158.00 KiB/s, done.
Resolving deltas: 100% (551/551), done.
Checking connectivity... done.

Next build the Docker image for OpenShift:

% docker build docker-keycloak/server-openshift
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM jboss/keycloak:latest
 ---> fb3fc6a18e16
Step 2 : USER root
 ---> Running in 21b672e19722
 ---> eea91ef53702
Removing intermediate container 21b672e19722
Step 3 : RUN chown -R jboss:0 $JBOSS_HOME/standalone &&     chmod -R g+rw $JBOSS_HOME/standalone
 ---> Running in 93b7d11f89af
 ---> 910dc6c4a961
Removing intermediate container 93b7d11f89af
Step 4 : USER jboss
 ---> Running in 8b8ccba42f2a
 ---> c21eed109d12
Removing intermediate container 8b8ccba42f2a
Successfully built c21eed109d12

Finally, tag the image into the repo and push it:

% docker tag c21eed109d12

% docker login -u frasertweedale
Login Succeeded

% docker push
... wait for upload ...
latest: digest: sha256:c82c3cc8e3edc05cfd1dae044c5687dc7ebd9a51aefb86a4bb1a3ebee16f341c size: 2623

Adding CA trust

For my demo, I used a local FreeIPA installation to issue TLS certificates for the Keycloak app. I was also going to carry out a scenario where I configure Keycloak to use that FreeIPA installation’s LDAP server to authenticate users. Because I wanted to use TLS everywhere (eat your own dog food!), I needed the Keycloak application to trust the CA of my local FreeIPA installation. This made it necessary to build another Docker image based on the keycloak-openshift image, with the appropriate CA trust built in.

The content of the Dockerfile is:

FROM frasertweedale/keycloak-openshift:latest
USER root
COPY ca.pem /etc/pki/ca-trust/source/anchors/ca.pem
RUN update-ca-trust
USER jboss

The file ca.pem contains the CA certificate to add. It must be in the same directory as the Dockerfile. The build copies the CA certificate to the appropriate location and executes update-ca-trust to ensure that applications – including Java programs – will trust the CA.

Following the docker build I tagged the new image into my repository (tag: f25-ca) and pushed it. And with that, we are ready to deploy Keycloak on OpenShift.
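Concretely, the steps look something like the following. The Docker Hub repository name and the f25-ca tag match those used in this post; the keytool check at the end is an optional sanity test of my own and assumes the Fedora ca-trust layout inside the image:

```shell
# Build the CA-trusting image from the directory containing the Dockerfile
# and ca.pem, then push it to Docker Hub.
docker build -t frasertweedale/keycloak-openshift:f25-ca .
docker push frasertweedale/keycloak-openshift:f25-ca

# Optional: confirm the CA reached the extracted Java trust store
# (replace <your-ca-alias> with something from your CA's subject).
docker run --rm frasertweedale/keycloak-openshift:f25-ca \
    keytool -list -keystore /etc/pki/ca-trust/extracted/java/cacerts \
    -storepass changeit | grep -i <your-ca-alias>
```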

Creating the Keycloak application in OpenShift

At this point we have a local OpenShift cluster (via Minishift) and a Keycloak image (frasertweedale/keycloak-openshift:f25-ca) to deploy. When deploying the app we need to set some environment variables:

KEYCLOAK_USER
    A username for the Keycloak admin account to be created

KEYCLOAK_PASSWORD
    Passphrase for the admin user

Finally, because the application will be running behind OpenShift’s HTTP proxy, we need to tell Keycloak to use the "external" hostname when creating hyperlinks, rather than Keycloak’s own view.

Use the oc new-app command to create and deploy the application:

% oc new-app --docker-image frasertweedale/keycloak-openshift:f25-ca \
    --env KEYCLOAK_USER=admin \
    --env KEYCLOAK_PASSWORD=secret123 \
--> Found Docker image 45e296f (4 weeks old) from Docker Hub for "frasertweedale/keycloak-openshift:f25-ca"

    * An image stream will be created as "keycloak-openshift:f25-ca" that will track this image
    * This image will be deployed in deployment config "keycloak-openshift"
    * Port 8080/tcp will be load balanced by service "keycloak-openshift"
      * Other containers can access this service through the hostname "keycloak-openshift"

--> Creating resources ...
    imagestream "keycloak-openshift" created
    deploymentconfig "keycloak-openshift" created
    service "keycloak-openshift" created
--> Success
    Run 'oc status' to view your app.

The app gets created immediately but it is not ready yet. The download of the image and deployment of the container (or pod in OpenShift / Kubernetes terminology) will proceed in the background.

After a little while (depending on how long it takes to download the ~300MB Docker image) oc status will show that the deployment is up and running:

% oc status
In project My Project (myproject) on server

svc/keycloak-openshift -
  dc/keycloak-openshift deploys istag/keycloak-openshift:f25-ca 
    deployment #2 deployed 3 minutes ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

(In my case, the first deployment failed because the 10-minute timeout elapsed before the image download completed; hence deployment #2 in the output above.)
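While waiting, the rollout can be followed with standard oc commands (these are generic oc subcommands, not taken from the original transcript; the resource name matches the deployment config created above):

```shell
# Block until the latest deployment of the config finishes (or fails).
oc rollout status dc/keycloak-openshift

# Watch pod phases while the image downloads and the container starts.
oc get pods --watch
```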

Creating a secure route

Now the Keycloak application is running, but we cannot reach it from outside the Keycloak project itself. In order to be able to reach it there must be a route. The oc create route command lets us create a route that uses TLS (so clients can authenticate the service). We will use the domain name keycloak.ipa.local. The public/private keypair and certificate have already been generated (how to do that is outside the scope of this article). The certificate was signed by the CA we added to the image earlier. The service name – visible in the oc status output above – is svc/keycloak-openshift.

% oc create route edge \
  --service svc/keycloak-openshift \
  --hostname keycloak.ipa.local \
  --key /home/ftweedal/scratch/keycloak.ipa.local.key \
  --cert /home/ftweedal/scratch/keycloak.ipa.local.pem
route "keycloak-openshift" created

Assuming there is a DNS entry pointing keycloak.ipa.local to the OpenShift cluster, and that the system trusts the CA that issued the certificate, we can now visit our Keycloak application:
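If you don’t have a real DNS entry handy, a quick local workaround (my own habit, not something from the post) is a hosts-file entry pointing the route hostname at the Minishift VM:

```shell
# Map the route hostname to the cluster; 'minishift ip' prints the VM address.
echo "$(./minishift ip) keycloak.ipa.local" | sudo tee -a /etc/hosts
```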

% curl https://keycloak.ipa.local/
<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
    <meta http-equiv="refresh" content="0; url=/auth/" />
    <meta name="robots" content="noindex, nofollow">
    <script type="text/javascript">
        window.location.href = "/auth/"
    </script>
</head>
<body>
    If you are not redirected automatically, follow this <a href='/auth'>link</a>.
</body>
</html>
If you visit in a browser, you will be able to log in using the admin account credentials given in the KEYCLOAK_USER and KEYCLOAK_PASSWORD environment variables when the app was created. From there you can create and manage authentication realms, but that is beyond the scope of this article.


Conclusion

In this post I discussed how to run Keycloak in OpenShift, from bringing up an OpenShift cluster to building the Docker image and creating the application and route in OpenShift. I recounted that I found OpenShift Online unstable at the time I tried it, and that although oc cluster up did successfully bring up a cluster, I had trouble getting the Docker and VM networks to talk to each other. Eventually I tried Minishift, which worked well.

We saw that although there is no official Docker image for Keycloak in OpenShift, there is a Dockerfile that builds a working image. It is easy to further extend the image to add trust for private CAs.

Creating the Keycloak app in OpenShift, and adding the routes, is straightforward. There are a few important environment variables that must be set. The oc create route command was used to create a secure route to access the application from the outside.

We did not discuss how to set up Keycloak with a database for persisting configuration and user records. The deployment we created is ephemeral. This satisfied my needs for demonstration purposes but production deployments will require persistence. There are official JBoss Docker images that extend the base Keycloak image and add support for PostgreSQL, MySQL and MongoDB. I have not tried these but I’d suggest starting with one of these images if you are looking to do a production deployment. Keep in mind that these images may not include the changes that are required for deploying in OpenShift.

by ftweedal at September 04, 2017 06:26 AM

August 30, 2017

Alexander Bokovoy

Flock 2017 day one

I’m attending Flock 2017, the annual conference of the Fedora Project. This year it takes place on the Cape Cod peninsula in Massachusetts, U.S. The conference started on August 29th at a resort and conference center in Hyannis, a town with a history, most commonly known for its JFK legacy.

This year Flock is more action oriented. Many talks are in fact collaborations where people discuss and hack together rather than being lectured. However, there are plenty of talks that allow others to digest what’s happening within fast-moving projects in the Fedora universe.

While containers and modularity were topics discussed almost universally, I’d like to highlight two talks from day one.

The most entertaining talk of the day was “Fedora Legal - This is why I drink” by Tom Callaway. Tom explained the past, present, and future of the life he leads as Fedora’s (para)legal person. He made clear he is not a lawyer, and disclaimed almost everything, including taking anything he said as any kind of legal advice. Still, his review of what has been done and what is expected to happen in the near and far-fetched future was valuable.

Fedora was one of the early distributions to go through a review and standardization of its licensing needs. It took some time to review the 350 different free software and open source licenses found in Fedora’s packages, but Tom built a list of them, both the good ones and the bad ones. The latter are those which aren’t acceptable in Fedora anymore.

This work tends to go unnoticed by users. Over the years it also took quite some time to work with various open source projects and get them to realise where their licensing was preventing better collaboration for their users. For example, many font designers have fun producing free fonts and giving them into the hands of users, but sometimes they have terrible ideas about how copyright law works. Part of Tom’s effort was to make it possible to package more fonts in Fedora while helping those font designers improve their licensing. Other examples given during the talk were the gradual clean-up of Artistic License 1.0-licensed projects from CPAN, cleaning up the TeXLive distribution, and the replacement of Fedora’s individual contributor licensing agreement with the current Fedora Project Contributor Agreement, which doesn’t require any copyright assignment.

An interesting topic Tom reflected on was the consideration of patents when deciding whether certain technology can be packaged in Fedora. Fedora finally has full support for MP3 now that all known related patents have expired. However, it took quite a lot of time to analyze the expiration terms for many of them. It also took a lot of time to find out which elliptic curves could be implemented and packaged in Fedora – the wait ranged from six to ten years depending on the curve.

Tom noted that he has calendar reminders set up for some known patents whose expiration he tracks. These define an interesting schedule for him, and occasionally a surprising alarm goes off about the expiration of a patent he had forgotten about.

All these achievements weren’t easy. When two lawyers cannot agree on how to interpret a text written by other lawyers, engineers have a hard time navigating the minefield. The work behind the scenes in defining a set of rules that can be understood by ordinary people, written in easy-to-digest English, is impressive. The talk title was mostly a joke – in the end, we still don’t know why Tom thinks the job could drive him to drink, nor what exactly the beverages would be. As with most legal texts, it was left open to interpretation – as was a bottle of local wine, gifted to Tom by another Flock attendee.

State of Fedora Server

Stephen Gallagher ran his annual review of what’s happening in the Fedora Server world. Fedora Server work started after we successfully ran a focused experiment: making Fedora Workstation a focused product, with system-wide attention to simplifying typical configuration tasks and improving the usability of the Linux desktop. Fedora Server was an attempt to similarly provide an improved experience for administrators who come from a non-UNIX background, based on feedback Red Hat tracks as part of its support operations.

Fedora’s server roles concept is now deprecated. The rolekit package will be removed in Fedora 28 or soon after. The focus has shifted to providing a better experience with Cockpit server apps. These apps are full-fledged Cockpit plugins. Rolekit-based deployment covered only simple scenarios; Cockpit apps can and should get a better grasp of what is happening in the system and across a server fleet, as Cockpit allows browsing and managing multiple machines. We are currently working on a FreeIPA Cockpit app that would make it easier to decide which options to use for actual enrollment of FreeIPA domain controllers.

Another example is FleetCommander. The FleetCommander Cockpit app is the real deal: it combines an interface to configure desktop profiles and the rules for applying them in FreeIPA. At the same time it configures an actual virtual machine where a desktop profile can be set up and tuned. The FleetCommander Cockpit app gives a rich experience that would not be possible with rolekit alone.

During the question-and-answer session of Stephen’s talk we also tried to find a concept that helps explain why modularity is needed for spins like Fedora Server. A helpful analogy is a factory conveyor, where multiple components are assembled to produce one of many pre-defined models that the factory churns out. When you own the equipment, tools, and processes that define your factory, it is relatively easy to control all the options. However, if your factory is a franchise, the ability to redefine certain steps or tools, to deviate in colors, and to respond to customer demand is key.

The Fedora project is a community with a long history of creating spins and “deviations” from the “core”. The modularity project, while it started as something different, is now bringing a lot of this flexibility to all community partners. Many problems were identified over the years in the existing tooling around RPM management and distribution image composition; the modularity project attempts to solve them with a different view on how to attack them. It is an important initiative that gives us a way to improve collaboration on the distribution core that we call the Fedora Project. At the same time it provides much-needed tools to make it possible to fork and (hopefully, happily) merge at exactly those stages of a distribution’s development and maintenance processes which were previously very rigid. In the past, any attempt to modify a spin too much typically led to a hard fork and a new distribution project; as a result, community resources were spread unnecessarily thin. I really hope we’ll spend less effort on repeated chunks of work across all the interesting spins, and concentrate instead on the more important aspect of why those spins are created: the value to their users.

August 30, 2017 03:26 PM

August 14, 2017

Fraser Tweedale

Installing FreeIPA with an Active Directory subordinate CA

FreeIPA is often installed in enterprise environments for managing Unix and Linux hosts and services. Most commonly, enterprises use Microsoft Active Directory for managing users, Windows workstations and Windows servers. Often, Active Directory is deployed with Active Directory Certificate Services (AD CS) which provides a CA and certificate management capabilities. Likewise, FreeIPA includes the Dogtag CA, and when deploying FreeIPA in an enterprise using AD CS, it is often desired to make the FreeIPA CA a subordinate CA of the AD CS CA.

In this blog post I’ll explain what is required to issue an AD sub-CA, and how to do it with FreeIPA, including a step-by-step guide to configuring AD CS.

AD CS certificate template overview

AD CS has a concept of certificate templates, which define the characteristics an issued certificate shall have. The same concept exists in Dogtag and FreeIPA except that in those projects we call them certificate profiles, and the mechanism to select which template/profile to use when issuing a certificate is different.

In AD CS, the template to use is indicated by an X.509 extension in the certificate signing request (CSR). The template specifier can be one of two extensions. The first, older extension has OID and allows you to specify a template by name:

CertificateTemplateName ::= SEQUENCE {
   Name            BMPString
}

(Note that some documents specify UTF8String instead of BMPString. BMPString works and is used in practice. I am not actually sure if UTF8String even works.)

The second, Version 2 template specifier extension has OID and allows you to specify a template by OID and version:

CertificateTemplate ::= SEQUENCE {
    templateID              EncodedObjectID,
    templateMajorVersion    TemplateVersion,
    templateMinorVersion    TemplateVersion OPTIONAL
}

TemplateVersion ::= INTEGER (0..4294967295)

Note that some documents also show templateMajorVersion as optional, but it is actually required.

When submitting a CSR for signing, AD CS looks for these extensions in the request, and uses the extension data to select the template to use.
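To see what such a request looks like, you can hand-roll a CSR carrying the name-based specifier with OpenSSL (1.1.1 or later, for -addext). Caveats: the OID used below, 1.3.6.1.4.1.311.20.2 (szOID_ENROLL_CERTTYPE_EXTENSION), comes from Microsoft’s documentation rather than from this article, so double-check it; and the value is encoded as a bare BMPString, which is the encoding seen in practice:

```shell
# Generate a key pair and a CSR that names the AD CS template "SubCA" via
# the certificate-template-name extension (OID per Microsoft documentation).
openssl req -new -newkey rsa:2048 -nodes -keyout sub.key -out sub.csr \
    -subj "/O=IPA.LOCAL/CN=Certificate Authority" \
    -addext "1.3.6.1.4.1.311.20.2=ASN1:BMPSTRING:SubCA"

# Inspect the request; the unrecognised extension shows up under its OID.
openssl req -in sub.csr -noout -text
```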

External CA installation in FreeIPA

FreeIPA supports installation with an externally signed CA certificate, via ipa-server-install --external-ca or (for existing CA-less installations) ipa-ca-install --external-ca. The installation takes several steps. First, a key is generated and a CSR produced:

$ ipa-ca-install --external-ca

Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-ca-install as:
/sbin/ipa-ca-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate

The installation program exits while the administrator submits the CSR to the external CA. After they receive the signed CA certificate, the administrator resumes the installation, giving the installation program the CA certificate and a chain of one or more certificates up to the root CA:

$ ipa-ca-install --external-cert-file ca.crt --external-cert-file ipa.crt
Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/29]: configuring certificate server instance
  [29/29]: configuring certmonger renewal for lightweight CAs
Done configuring certificate server (pki-tomcatd).

Recall, however, that if the external CA is AD CS, a CSR must bear one of the certificate template specifier extensions. There is an additional installation program option to add the template specifier:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs

This adds a name-based template specifier to the CSR, with the name SubCA (this is the name of the default sub-CA template in AD CS).

Specifying an alternative AD CS template

Everything discussed so far is already part of FreeIPA. Until now, however, there has been no way to specify a different template to use with AD CS.

I have been working on a feature that allows an alternative AD CS template to be specified. Both kinds of template specifier extension are supported, via the new --external-ca-profile installation program option:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs \

(Note: huge OIDs like the above are commonly used by Active Directory for installation-specific objects.)

To specify a template by name, the --external-ca-profile value should be:


To specify a template by OID, the OID and major version must be given, and optionally the minor version too:


Like --external-ca and --external-ca-type, the new --external-ca-profile option is available with both ipa-server-install and ipa-ca-install.

With this feature, it is now possible to specify an alternative or custom certificate template when using AD CS to sign the FreeIPA CA certificate. The feature has not yet been merged, but there is an open pull request. I have also made a COPR build for anyone interested in testing the feature.

The remainder of this post is a short guide to configuring Active Directory Certificate Services, defining a custom CA profile, and submitting a CSR to issue a certificate.

Renewing the certificate

FreeIPA provides the ipa-cacert-manage renew command for renewing an externally-signed CA certificate. Like installation with an externally-signed CA, this is a two-step procedure. In the first step, the command prompts Certmonger to generate a new CSR for the CA certificate, and saves the CSR so that the administrator can submit it to the external CA.

For renewing a certificate signed by AD CS, as in the installation case a template specifier extension is needed. Therefore the ipa-cacert-manage renew command has also learned the --external-ca-profile option:

# ipa-cacert-manage renew --external-ca-type ms-cs \
  --external-ca-profile MySubCA
Exporting CA certificate signing request, please wait
The next step is to get /var/lib/ipa/ca.csr signed by your CA and re-run ipa-cacert-manage as:
ipa-cacert-manage renew --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate
The ipa-cacert-manage command was successful

In the above example the CSR that was generated will contain a version 1 template extension, using the name MySubCA. As with the installation commands, the version 2 extension is also supported.

This part of the feature requires some changes to Certmonger as well as FreeIPA. At the time of writing these changes haven’t been merged. There is a Certmonger pull request and a Certmonger COPR build if you’d like to test the feature.

Appendix A: installing and configuring AD CS

Assuming an existing installation of Active Directory, AD CS installation and configuration will take 10 to 15 minutes. Open Server Manager, invoke the Add Roles and Features Wizard and select the AD CS Certification Authority role:


Proceed, and wait for the installation to complete…


After installation has finished, you will see AD CS in the Server Manager sidebar, and upon selecting it you will see a notification that Configuration required for Active Directory Certificate Services.


Click More…, and up will come the All Servers Task Details dialog showing that the Post-deployment Configuration action is pending. Click the action to continue:


Now comes the AD CS Configuration assistant, which contains several steps. Proceed past the Specify credentials to configure role services step.

In the Select Role Services to configure step, select Certification Authority then continue:


In the Specify the setup type of the CA step, choose Enterprise CA then continue:


The Specify the type of the CA step lets you choose whether the AD CS CA will be a root CA or chained to an external CA (just like FreeIPA lets you create a root or subordinate CA!). Installing AD CS as a subordinate CA is outside the scope of this guide. Choose Root CA and continue:


The next step lets you Specify the type of the private key. You can use an existing private key or Create a new private key, then continue.

The Specify the cryptographic options step lets you specify the Key length and hash algorithm for the signature. Choose a key length of at least 2048 bits, and the SHA-256 digest:


Next, Specify the name of the CA. This sets the Subject Distinguished Name of the CA. Accept defaults and continue.

The next step is to Specify the validity period. CA certificates (especially root CAs) typically need a long validity period. Choose a value like 5 Years, then continue:


Accept the defaults for the Specify the database locations step.

Finally, you will reach the Confirmation step, which summarises the chosen configuration options. Review the settings then Configure:


The configuration will take a few moments, then the Results will be displayed:


AD CS is now configured and you can begin issuing certificates.

Appendix B: creating a custom sub-CA certificate template

In this section we look at how to create a new certificate template for sub-CAs by duplicating an existing template, then modifying it.

To manage certificate templates, from Server Manager right-click the server and open the Certification Authority program:


In the sidebar tree view, right-click Certificate Templates then select Manage.


The Certificate Templates Console will open. The default profile for sub-CAs has the Template Display Name Subordinate Certification Authority. Right-click this template and choose Duplicate Template.


The new template is created and the Properties of New Template dialog appears, allowing the administrator to customise the template. You can set a new Template display name, Template name and so on:


You can also change various aspects of certificate issuance including which extensions will appear on the issued certificate, and the values of those extensions. In the following screenshot, we see a new Certificate Policies OID being defined for addition to certificates issued via this template:


Also under Extensions, you can discover the OID for this template by looking at the Certificate Template Information extension description.

Finally, having defined the new certificate template, we have to activate it for use with the AD CA. Back in the Certification Authority management window, right-click Certificate Templates and select Certificate Template to Issue:


This will pop up the Enable Certificate Templates dialog, containing a list of templates available for use with the CA. Select the new template and click OK. The new certificate template is now ready for use.

Appendix C: issuing a certificate

In this section we look at how to use AD CS to issue a certificate. It is assumed that the CSR to be signed exists and Active Directory can access it.

In the Certification Authority window, in the sidebar right-click the CA and select All Tasks >> Submit new request…:


This will bring up a file chooser dialog. Find the CSR and Open it:


Assuming all went well (including the CSR indicating a known certificate template), the certificate is immediately issued and the Save Certificate dialog appears, asking where to save the issued certificate.

by ftweedal at August 14, 2017 06:04 AM

July 11, 2017

Fraser Tweedale

Implications of Common Name deprecation for Dogtag and FreeIPA

Or, ERR_CERT_COMMON_NAME_INVALID, and what we are doing about it.

Google Chrome version 58, released in April 2017, removed support for the X.509 certificate Subject Common Name (CN) as a source of naming information when validating certificates. As a result, certificates that do not carry all relevant domain names in the Subject Alternative Name (SAN) extension result in validation failures.

At the time of writing this post Chrome is just the first mover, but Mozilla Firefox and other programs and libraries will follow suit. The public PKI used to secure the web and other internet communications is largely unaffected (browsers and CAs moved long ago to ensure that certificates issued by publicly trusted CAs carry all DNS naming information in the SAN extension), but some enterprises running internal PKIs are feeling the pain.

In this post I will provide some historical and technical context to the situation, and explain what we are doing in Dogtag and FreeIPA to ensure that we issue valid certificates.


X.509 certificates carry subject naming information in two places: the Subject Distinguished Name (DN) field, and the Subject Alternative Name extension. There are many types of attributes available in the DN, including organisation, country, and common name. The definitions of these attribute types came from X.500 (the precursor to LDAP) and all have an ASN.1 representation.

Within the X.509 standard, the CN has no special interpretation, but when certificates first entered widespread use in the SSL protocol, it was used to carry the domain name of the subject site or service. When connecting to a web server using TLS/SSL, the client would check that the CN matches the domain name they used to reach the server. If the certificate is chained to a trusted CA, the signature checks out, and the domain name matches, then the client has confidence that all is well and continues the handshake.

But there were a few problems with using the Common Name. First, what if you want a certificate to support multiple domain names? This was especially a problem for virtual hosts in the pre-SNI days, when one IP address could have only one certificate associated with it. You can have multiple CNs in a Distinguished Name, but the semantics of X.500 DNs are strictly hierarchical. It is not an appropriate use of the DN to cram multiple, possibly non-hierarchical domain names into it.

Second, the CN in X.509 has a length limit of 64 characters. DNS names can be longer. The length limit is too restrictive, especially in the world of IaaS and PaaS where hosts and services are spawned and destroyed en masse by orchestration frameworks.

Third, some types of subject names, including domain names, do not have a corresponding X.500 attribute. The solution to all three of these problems was the introduction of the Subject Alternative Name X.509 extension, which allows more types of names to be used in a certificate. (The SAN extension is itself extensible; apart from DNS names, other important name types include IP addresses, email addresses, URIs and Kerberos principal names.) TLS clients added support for validating SAN DNSName values in addition to the CN.
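For comparison, putting the names where they belong today is a one-liner with a modern OpenSSL (1.1.1+; the domain names are illustrative):

```shell
# CSR with all DNS names in the SAN extension; the CN becomes redundant.
openssl req -new -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
    -subj "/CN=www.example.com" \
    -addext "subjectAltName=DNS:www.example.com,DNS:example.com"

# Confirm the SAN made it into the request.
openssl req -in svc.csr -noout -text | grep -A1 "Subject Alternative Name"
```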

The use of the CN field to carry DNS names was never standardised; the Common Name field does not have these semantics, but using the CN in this way was an approach that worked. This interpretation was later formalised by the CA/B Forum in their Baseline Requirements for CAs, but only as a reflection of current practice in SSL/TLS server and client implementations. Even in the Baseline Requirements the CN was a second-class citizen; they mandated that if the CN was present at all, it must reflect one of the DNSName or IP address values from the SAN extension. All public CAs had to comply with this requirement, which is why Chrome’s removal of CN support only affects private PKIs, not public web sites.

Why remove CN validation?

So, Common Name was not ideal for carrying DNS naming information, but given that we now have SAN, was it really necessary to deprecate it, and is it really necessary to follow through and actually stop using it, causing non-compliant certificates that were previously accepted to now be rejected?

The most important reason for deprecating CN validation is the X.509 Name Constraints extension. Name Constraints, if they appear in a CA certificate or intermediate CA certificate, constrain the valid subject names on leaf certificates. Various name types are supported, including DNS names; a DNS name constraint restricts the domain of validity to the domain(s) listed and subdomains thereof. For example, if the DNS name example.com appears in a CA certificate’s Name Constraints extension, leaf certificates with a DNS name of example.com or www.example.com could be valid, but a DNS name of example.net could not be valid. Conforming X.509 implementations must enforce these constraints.
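The subtree-matching rule described above can be sketched in Python (a simplified model; a real implementation must also handle excluded subtrees and the other name types):

```python
def dns_name_in_subtree(name, base):
    """True if `name` equals `base` or is a subdomain of it (RFC 5280 rule)."""
    name, base = name.lower().rstrip("."), base.lower().rstrip(".")
    return name == base or name.endswith("." + base)

def satisfies_permitted(name, permitted_subtrees):
    # A leaf DNS name is acceptable if it falls under at least one
    # permitted subtree.
    return any(dns_name_in_subtree(name, base) for base in permitted_subtrees)

permitted = ["example.com"]
satisfies_permitted("example.com", permitted)      # True
satisfies_permitted("www.example.com", permitted)  # True
satisfies_permitted("example.net", permitted)      # False
```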

But these constraints only apply to SAN DNSName values, not to the CN. This is why accepting DNS naming information in the CN had to be deprecated – the name constraints cannot be properly enforced!

So back in May 2000 the use of Common Name for carrying a DNS name was deprecated by RFC 2818. Although it deprecated the practice this RFC required clients to fall back to the Common Name if there were no SAN DNSName values on the certificate. Then in 2011 RFC 6125 removed the requirement for clients to fall back to the common name, making this optional behaviour. Over recent years, some TLS clients began emitting warnings when they encountered certificates without SAN DNSNames, or where a DNS name in the CN did not also appear in the SAN extension. Finally, Chrome has become the first widely used client to remove support.
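The shift in client behaviour can be summarised as a small policy function (a hypothetical sketch, not any browser’s actual code):

```python
def dns_names_for_matching(san_dns_names, subject_cn, allow_cn_fallback):
    """Which DNS names a client will match the requested hostname against."""
    if san_dns_names:
        return list(san_dns_names)  # SAN always wins when present
    if allow_cn_fallback and subject_cn:
        return [subject_cn]         # deprecated RFC 2818 fallback
    return []

# A certificate carrying only a CN, no SAN DNSNames:
dns_names_for_matching([], "www.example.com", allow_cn_fallback=True)   # ["www.example.com"]
dns_names_for_matching([], "www.example.com", allow_cn_fallback=False)  # [] -- rejected
```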

Despite more than 15 years’ notice of the deprecation of this use of Common Name, a lot of CA software and client tooling still does not have first-class support for the SAN extension. Most tools used to generate CSRs do not even ask about SAN, and require complex configuration to generate a request bearing the SAN extension. Similarly, some CA programs do not do a good job of issuing RFC-compliant certificates. Right now, this includes Dogtag and FreeIPA.

Subject Alternative Name and FreeIPA

For some years, FreeIPA (in particular, the default profile for host and service certificates, called caIPAserviceCert) has supported the SAN extension, but the client is required to submit a CSR containing the desired SAN extension data. The names in the CSR (the CN and all alternative names) get validated against the subject principal, and then the CA issues the certificate with exactly those names. There was no way to ensure that the domain name in the CN was also present in the SAN extension.

We could add this requirement to FreeIPA’s CSR validation routine, but this imposes an unreasonable burden on the user to "get it right". Tools like OpenSSL have poor usability and complex configuration. Certmonger supports generating a CSR with the SAN extension, but it must be explicitly requested. For FreeIPA’s own certificates, we have (in recent major releases) ensured that they contain the SAN extension, but this is not the default behaviour and that is a problem.

FreeIPA 4.5 brought with it a CSR autogeneration feature that, for a given certificate profile, lets the administrator specify how to construct a CSR appropriate for that profile. This reduces the burden on the end user, but they must still opt in to this process.

Subject Alternative Name and Dogtag

Until Dogtag 10.4, there were two ways to produce a certificate with the SAN extension. One was the SubjectAltNameExtDefault profile component, which, for a given profile, supports a fixed number of names, either hard coded or based on particular request attributes (e.g. the CN, the email address of the authenticated user, etc). The other was the UserExtensionDefault which copies a given extension from the CSR to the final certificate verbatim (no validation of the data occurs). We use UserExtensionDefault in FreeIPA’s certificate profile (all names are validated by the FreeIPA framework before the request is submitted to Dogtag).

Unfortunately, SubjectAltNameExtDefault and UserExtensionDefault are not compatible with each other. If a profile uses both and the CSR contains the SAN extension, issuance will fail with an error because Dogtag tries to add two SAN extensions to the certificate.
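A toy model of the conflict (hypothetical code, not Dogtag’s implementation): each profile component appends extensions to the certificate, and a duplicate extension OID is an error:

```python
SAN_OID = "2.5.29.17"  # id-ce-subjectAltName

def add_extension(cert_extensions, oid, value):
    """Append an extension; adding the same OID twice is an error."""
    if oid in cert_extensions:
        raise ValueError("duplicate extension: " + oid)
    cert_extensions[oid] = value

exts = {}
add_extension(exts, SAN_OID, ["www.example.com"])  # e.g. SubjectAltNameExtDefault runs first
# A second component copying the SAN extension from the CSR would now fail:
# add_extension(exts, SAN_OID, ["csr.example.com"])  -> ValueError
```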

In Dogtag 10.4 we introduced a new profile component that improves the situation, especially for dealing with the removal of client CN validation. The CommonNameToSANDefault will cause any profile that uses it to examine the Common Name, and if it looks like a DNS name, it will add it to the SAN extension (creating the extension if necessary).

Ultimately, what is needed is a way to define a certificate profile that just makes the right certificate, without placing an undue burden on the client (be it a human user or a software agent). The complexity and burden should rest with Dogtag, for the sake of all users. We are gradually making steps toward this, but it is still a long way off. I have discussed this utopian vision in a previous post.

Configuring CommonNameToSANDefault

If you have Dogtag 10.4, here is how to configure a profile to use the CommonNameToSANDefault. Add the following policy directives (the policy set name serverCertSet and the index 12 are indicative only, but the index must not collide with other profile components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=commonNameToSANDefaultImpl
policyset.serverCertSet.12.default.name=Common Name to Subject Alternative Name Default

Add the index to the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12
Then import the modified profile configuration, and you are good to go. There are a few minor caveats to be aware of:

  • Names containing wildcards are not recognised as DNS names. The rationale is twofold: wildcard DNS names, although currently accepted by most programs, are technically a violation of the X.509 specification (RFC 5280), and they are discouraged by RFC 6125. Therefore, if the CN contains a wildcard DNS name, CommonNameToSANDefault will not copy it to the SAN extension.
  • Single-label DNS names are not copied. It is unlikely that people will use Dogtag to issue certificates for top-level domains. If CommonNameToSANDefault encounters a single-label DNS name, it will assume it is actually not a DNS name at all, and will not copy it to the SAN extension.
  • The CommonNameToSANDefault policy index must come after UserExtensionDefault, SubjectAltNameExtDefault, or any other component that adds the SAN extension, otherwise an error may occur because the older components do not gracefully handle the situation where the SAN extension is already present.
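The rules in the bullets above can be sketched as a small heuristic (a rough approximation of the documented behaviour, not Dogtag’s actual code):

```python
import re

# A DNS label: alphanumeric, hyphens allowed internally.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$", re.IGNORECASE)

def cn_to_san_dns_name(cn):
    """Return the CN if it should be copied to SAN as a DNSName, else None."""
    if "*" in cn:
        return None                # wildcard names are not copied
    labels = cn.split(".")
    if len(labels) < 2:
        return None                # single-label names are assumed not to be DNS names
    if not all(LABEL.match(label) for label in labels):
        return None                # does not look like a DNS name at all
    return cn

cn_to_san_dns_name("app01.ipa.example.com")  # copied to SAN
cn_to_san_dns_name("*.example.com")          # None (wildcard)
cn_to_san_dns_name("Fraser Tweedale")        # None (not a DNS name)
```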

What we are doing in FreeIPA

Updating FreeIPA profiles to use CommonNameToSANDefault is trickier – FreeIPA configures Dogtag to use LDAP-based profile storage, and mixed-version topologies are possible, so updating a profile to use the new component could break certificate requests on other CA replicas if they are not all at the new versions. We do not want this situation to occur.

The long-term fix is to develop a general, version-aware profile update mechanism that will import the best version of a profile supported by all CA replicas in the topology. I will be starting this effort soon. When it is in place we will be able to safely update the FreeIPA-defined profiles in existing deployments.

In the meantime, we will bump the Dogtag dependency and update the default profile for new installations only in the 4.5.3 point release. This will be safe to do because you can only install replicas at the same or newer versions of FreeIPA, and it will avoid the CN validation problems for all new installations.


In this post we looked at the technical reasons for deprecating and removing support for CN domain validation in X.509 certificates, and discussed the implications of this finally happening, namely: none for the public CA world, but big problems for some private PKIs and programs including FreeIPA and Dogtag. We looked at the new CommonNameToSANDefault component in Dogtag that makes it easier to produce compliant certs even when the tools to generate the CSR don’t help you much, and discussed upcoming and proposed changes in FreeIPA to improve the situation there.

One big takeaway from this is to be more proactive in dealing with deprecated features in standards, APIs or programs. It is easy to punt on the work, saying "well yes it is deprecated but all the programs still support it…" The thing is, tomorrow they may not support it anymore, and when it was deprecated for good reasons you really cannot lay the blame on Google (or whoever). On the FreeIPA team we (and especially me as PKI wonk in residence) were aware of these issues but kept putting off the work. Then one day users and customers started having problems accessing their internal services in Chrome! 15 years should have been enough time to deal with it… but we (I) did not.

Lesson learned.

by ftweedal at July 11, 2017 03:25 AM

June 26, 2017

Fraser Tweedale

Wildcard SAN certificates in FreeIPA

In an earlier post I discussed how to make a certificate profile for wildcard certificates in FreeIPA, where the wildcard name appeared in the Subject Common Name (CN) (but not the Subject Alternative Name (SAN) extension). Apart from the technical details that post also explained that wildcard certificates are deprecated, why they are deprecated, and therefore why I was not particularly interested in pursuing a way to get wildcard DNS names into the SAN extension.

But, as was portended long ago (more than 15 years, when RFC 2818 was published) DNS name assertions via the CN field are deprecated, and finally some client software removed CN name processing support. The Chrome browser is first off the rank, but it won’t be the last!

Unfortunately, programs that have typically used wildcard certificates (hosting services/platforms, PaaS, and sites with many subdomains) are mostly still using wildcard certificates, and FreeIPA still needs to support these programs. As much as I would like to say "just use Let’s Encrypt / ACME!", it is not realistic for all of these programs to update in so short a time. Some may never be updated. So for now, wildcard DNS names in SAN is more than a "nice to have" – it is a requirement for a handful of valid use cases.


Here is how to do it in FreeIPA. Most of the steps are the same as in the earlier post so I will not repeat them here. The only substantive difference is in the Dogtag profile configuration.

In the profile configuration, set the following directives (note that the key serverCertSet and the index 12 are indicative only; the index does not matter as long as it is different from the other profile policy components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=subjectAltNameExtDefaultImpl
policyset.serverCertSet.12.default.name=Subject Alternative Name Extension Default
policyset.serverCertSet.12.default.params.subjAltNameExtCritical=false
policyset.serverCertSet.12.default.params.subjAltNameNumGNs=2
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_0=true
policyset.serverCertSet.12.default.params.subjAltExtType_0=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_0=$request.req_subject_name.cn$
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_1=true
policyset.serverCertSet.12.default.params.subjAltExtType_1=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_1=*.$request.req_subject_name.cn$

Also be sure to add the index to the directive containing the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12
This configuration will cause two SAN DNSName values to be added to the certificate – one using the CN from the CSR, and the other using the CN from the CSR preceded by a wildcard label.
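In other words, the profile derives both SAN values from the CSR’s CN, roughly like this sketch:

```python
def expand_san_patterns(csr_cn):
    """The two DNSName values produced from the CSR's CN: the name itself,
    and the same name preceded by a wildcard label."""
    return [csr_cn, "*." + csr_cn]

expand_san_patterns("example.com")  # ["example.com", "*.example.com"]
```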

Finally, be aware that because the subjectAltNameExtDefaultImpl component adds the SAN extension to a certificate, it conflicts with the userExtensionDefault component when configured to copy the SAN extension from a CSR to the new certificate. This profile component will have a configuration like the following:

policyset.serverCertSet.11.constraint.class_id=noConstraintImpl
policyset.serverCertSet.11.constraint.name=No Constraint
policyset.serverCertSet.11.default.class_id=userExtensionDefaultImpl
policyset.serverCertSet.11.default.name=User Supplied Extension Default
policyset.serverCertSet.11.default.params.userExtOID=2.5.29.17

Again the numerical index is indicative only, but the OID is not; 2.5.29.17 is the OID for the SAN extension. If your starting profile configuration contains the same directives, remove them from the configuration, and remove the index from the policy list too:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,12

The profile containing the configuration outlined above will issue certificates with a wildcard DNS name in the SAN extension, alongside the DNS name from the CN. Mission accomplished; but note the following caveats.

This configuration cannot contain the userExtensionDefaultImpl component, which copies the SAN extension from the CSR to the final certificate if present in the CSR, because any CSR that contains a SAN extension would cause Dogtag to attempt to add a second SAN extension to the certificate (this is an error). It would be better if the conflicting profile components somehow "merged" the SAN values, but this is not their current behaviour.

Because we are not copying the SAN extension from the CSR, any SAN extension in the CSR gets ignored by Dogtag – but not by FreeIPA; the FreeIPA CSR validation machinery always fully validates the subject alternative names it sees in a CSR, regardless of the Dogtag profile configuration.

If you work on software or services that currently use wildcard certificates please start planning to move away from this. CN validation was deprecated for a long time and is finally being phased out; wildcard certificates are also deprecated (RFC 6125) and they too may eventually be phased out. Look at services and technologies like Let’s Encrypt (a free, automated, publicly trusted CA) and ACME (the protocol that powers it) for acquiring all the certificates you need without administrator or operator intervention.

by ftweedal at June 26, 2017 12:48 PM

Powered by Planet