FreeIPA Identity Management planet - technical blogs

November 22, 2017

Fraser Tweedale

Changing a CA’s Subject DN; Part II: FreeIPA

In the previous post I explained how the CA Subject DN is an integral part of X.509 and why you should not change it. Doing so can break path validation, CRLs and OCSP, and many programs will not cope with the change. I proposed some alternative approaches that avoid these problems: re-chaining the CA, and creating subordinate CAs.

If you were thinking of changing your CA’s Subject DN, I hope that I dissuaded you. But if I failed, or you absolutely do need to change the Subject DN of your CA, where there’s a will there’s a way. The purpose of this post is to explore how to do this in FreeIPA, and discuss the implications.

This is a long post. If you are really changing the CA subject DN, don’t skip anything. Otherwise don’t feel bad about skimming or jumping straight to the discussion. Even skimming the article will give you an idea of the steps involved, and how to repair the ensuing breakage.

Changing the FreeIPA CA’s Subject DN

Before writing this post, I had never even attempted to do this. I am unaware of anyone else trying or whether they were successful. But the question of how to do it has come up several times, so I decided to investigate. The format of this post follows my exploration of the topic as I poked and prodded a FreeIPA deployment, working towards the goal.

What was the goal? Let me state the goal, and some assumptions:

  • The goal is to give the FreeIPA CA a new Subject DN. The deployment should look and behave as though it were originally installed with the new Subject.
  • We want to keep the old CA certificate in the relevant certificate stores and databases, alongside the new certificate.
  • The CA is not being re-keyed (I will deal with re-keying in a future article).
  • We want to be able to do this with both self-signed and externally-signed CAs. It’s okay if the process differs.
  • It’s okay to have manual steps that the administrator must perform.

Let’s begin on the deployment’s CA renewal master.

Certmonger (first attempt)

There is a Certmonger tracking request for the FreeIPA CA, which uses the dogtag-ipa-ca-renew-agent CA helper. The getcert resubmit command lets you change the Subject DN when you resubmit a request, via the -N option. I know the internals of the CA helper and I can see that there will be problems after renewing the certificate this way. Storing the certificate in the ca_renewal LDAP container will fail. But the renewal itself might succeed so I’ll try it and see what happens:

[root@f27-2 ~]# getcert resubmit -i 20171106062742 \
  -N 'CN=IPA.LOCAL CA 2017.11.09'
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".

After waiting about 10 seconds for Certmonger to do its thing, I check the state of the tracking request:

[root@f27-2 ~]# getcert list -i 20171106062742
Request ID '20171106062742':
  status: MONITORING
  CA: dogtag-ipa-ca-renew-agent
  issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  expires: 2037-11-06 17:26:21 AEDT
  ... (various fields omitted)

The status and expires fields show that renewal succeeded, but the certificate still has the old Subject DN. This happened because the dogtag-ipa-ca-renew-agent helper doesn’t think it is renewing the CA certificate (which is true!)

Modifying the IPA CA entry

So let’s trick the Certmonger renewal helper. dogtag-ipa-ca-renew-agent looks up the CA Subject DN in the ipaCaSubjectDn attribute of the ipa CA entry (cn=ipa,cn=cas,cn=ca,{basedn}). This attribute is not writeable via the IPA framework but you can change it using regular LDAP tools (details out of scope). If the certificate is self-signed you should also change the ipaCaIssuerDn attribute. After modifying the entry run ipa ca-show to verify that these attributes have the desired values:

[root@f27-2 ~]# ipa ca-show ipa
  Name: ipa
  Description: IPA CA
  Authority ID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
  Subject DN: CN=IPA.LOCAL CA 2017.11.09
  Issuer DN: CN=IPA.LOCAL CA 2017.11.09
  Certificate: MIIDnzCCAoegAwIBAgIBCTANBgkqhkiG9w0...
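
For reference, the ipaCaSubjectDn / ipaCaIssuerDn modification itself might look something like the following sketch. The base DN dc=ipa,dc=local is an assumption based on my realm, and DM_PASSWORD is a placeholder:

ldapmodify -D 'cn=directory manager' -w DM_PASSWORD <<EOF
dn: cn=ipa,cn=cas,cn=ca,dc=ipa,dc=local
changetype: modify
replace: ipaCaSubjectDn
ipaCaSubjectDn: CN=IPA.LOCAL CA 2017.11.09
-
replace: ipaCaIssuerDn
ipaCaIssuerDn: CN=IPA.LOCAL CA 2017.11.09
EOF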

Certmonger (second attempt)

Now let’s try and renew the CA certificate via Certmonger again:

[root@f27-2 ~]# getcert resubmit -i 20171106062742 \
  -N 'CN=IPA.LOCAL CA 2017.11.09'
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".

Checking the getcert list output after a short wait:

[root@f27-2 ~]# getcert list -i 20171106062742
Request ID '20171106062742':
  status: MONITORING
  CA: dogtag-ipa-ca-renew-agent
  issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  subject: CN=IPA.LOCAL CA 2017.11.09
  expires: 2037-11-09 16:11:12 AEDT
  ... (various fields omitted)

Progress! We now have a CA certificate with the desired Subject DN. The new certificate has the old (current) issuer DN. We’ll ignore that for now.

Checking server health

Now I need to check the state of the deployment. Did anything go wrong during renewal? Is everything working?

First, I checked the Certmonger journal output to see if there were any problems. The journal contained the following messages (some fields omitted for brevity):

16:11:17 /dogtag-ipa-ca-renew-agent-submit[1662]: Forwarding request to dogtag-ipa-renew-agent
16:11:17 /dogtag-ipa-ca-renew-agent-submit[1662]: dogtag-ipa-renew-agent returned 0
16:11:19 /stop_pkicad[1673]: Stopping pki_tomcatd
16:11:20 /stop_pkicad[1673]: Stopped pki_tomcatd
16:11:22 /renew_ca_cert[1710]: Updating CS.cfg
16:11:22 /renew_ca_cert[1710]: Updating CA certificate failed: no matching entry found
16:11:22 /renew_ca_cert[1710]: Starting pki_tomcatd
16:11:34 /renew_ca_cert[1710]: Started pki_tomcatd
16:11:34 certmonger[2013]: Certificate named "caSigningCert cert-pki-ca" in token "NSS Certificate DB" in database "/etc/pki/pki-tomcat/alias" issued by CA and saved.

We can see that the renewal succeeded and Certmonger saved the new certificate in the NSSDB. Unfortunately there was an error in the renew_ca_cert post-save hook: it failed to store the new certificate in the LDAP certstore. That should be easy to resolve. I’ll make a note of that and continue checking deployment health.

Next, I checked whether Dogtag was functioning. systemctl status pki-tomcatd@pki-tomcat and the CA debug log (/var/log/pki/pki-tomcat/ca/debug) indicated that Dogtag started cleanly. Even better, the Dogtag NSSDB has the new CA certificate with the correct nickname:

[root@f27-2 ~]# certutil -d /etc/pki/pki-tomcat/alias \
  -L -n 'caSigningCert cert-pki-ca'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 11 (0xb)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Thu Nov 09 05:11:12 2017
            Not After : Mon Nov 09 05:11:12 2037
        Subject: "CN=IPA.LOCAL CA 2017.11.09"
  ... (remaining lines omitted)

We have not yet confirmed that Dogtag uses the new CA Subject DN as the Issuer DN on new certificates (we’ll check this later).

Now let’s check the state of IPA itself. There is a problem in communication between the IPA framework and Dogtag:

[root@f27-2 ~]# ipa ca-show ipa
ipa: ERROR: Request failed with status 500: Non-2xx response from CA REST API: 500.

A quick look in /var/log/httpd/access_log showed that it was not a general problem but only occurred when accessing a particular resource:

[09/Nov/2017:17:15:09 +1100] "GET https://f27-2.ipa.local:443/ca/rest/authorities/cdbfeb5a-64d2-4141-98d2-98c005802fc1/cert HTTP/1.1" 500 6201

That is a Dogtag lightweight authority resource for the CA identified by cdbfeb5a-64d2-4141-98d2-98c005802fc1. That is the CA ID recorded in the FreeIPA ipa CA entry. This gives a hint about where the problem lies. An ldapsearch reveals more:

[f27-2:~] ftweedal% ldapsearch -LLL \
    -D 'cn=directory manager' -w DM_PASSWORD \
    -b 'ou=authorities,ou=ca,o=ipaca' -s one
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
authoritySerial: 9
objectClass: authority
objectClass: top
cn: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
authorityDN: CN=Certificate Authority,O=IPA.LOCAL 201711061603
description: Host authority

dn: cn=008a4ded-fd4b-46fe-8614-68518123c95f,ou=authorities,ou=ca,o=ipaca
objectClass: authority
objectClass: top
cn: 008a4ded-fd4b-46fe-8614-68518123c95f
authorityID: 008a4ded-fd4b-46fe-8614-68518123c95f
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
authorityDN: CN=IPA.LOCAL CA 2017.11.09
description: Host authority

There are now two authority entries when there should be one. During startup, Dogtag makes sure it has an authority entry for the main ("host") CA. It compares the Subject DN from the signing certificate in its NSSDB to the authority entries. If it doesn’t find a match it creates a new entry, and that’s what happened here.

The resolution is straightforward (a sketch of the LDAP operations appears after the list):

  1. Stop Dogtag
  2. Update the authorityDN and authoritySerial attributes of the original host authority entry.
  3. Delete the new host authority entry.
  4. Restart Dogtag.
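
Here is a sketch of those steps with the standard LDAP tools. The entry DNs and values are taken from the ldapsearch output above, the new certificate’s serial number (11) comes from the earlier certutil output, and DM_PASSWORD is a placeholder:

systemctl stop pki-tomcatd@pki-tomcat
ldapmodify -D 'cn=directory manager' -w DM_PASSWORD <<EOF
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
changetype: modify
replace: authorityDN
authorityDN: CN=IPA.LOCAL CA 2017.11.09
-
replace: authoritySerial
authoritySerial: 11
EOF
ldapdelete -D 'cn=directory manager' -w DM_PASSWORD \
    'cn=008a4ded-fd4b-46fe-8614-68518123c95f,ou=authorities,ou=ca,o=ipaca'
systemctl start pki-tomcatd@pki-tomcat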

Now the previous ldapsearch returns one entry, with the original authority ID and correct attribute values:

[f27-2:~] ftweedal% ldapsearch -LLL \
    -D 'cn=directory manager' -w DM_PASSWORD \
    -b 'ou=authorities,ou=ca,o=ipaca' -s one
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
authoritySerial: 11
authorityDN: CN=IPA.LOCAL CA 2017.11.09
objectClass: authority
objectClass: top
cn: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
description: Host authority

And the operations that were failing before (e.g. ipa ca-show ipa) now succeed. So we’ve confirmed, or restored, the basic functionality on this server.

LDAP certificate stores

There are two LDAP certificate stores in FreeIPA. The first is cn=ca_renewal,cn=ipa,cn=etc,{basedn}. It is only used for replicating Dogtag CA and system certificates from the CA renewal master to CA replicas. The dogtag-ipa-ca-renew-agent Certmonger helper should update the cn=caSigningCert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,{basedn} entry after renewing the CA certificate. A quick ldapsearch shows that this succeeded, so there is nothing else to do here.

The other certificate store is cn=certificates,cn=ipa,cn=etc,{basedn}. This store contains trusted CA certificates. FreeIPA clients and servers retrieve certificates from this directory when updating their certificate trust stores. Certificates are stored in this container with a cn based on the Subject DN, except for the IPA CA which is stored with cn={REALM-NAME} IPA CA. (In my case, this is cn=IPA.LOCAL IPA CA.)

We discovered the failure to update this certificate store earlier (in the Certmonger journal). Now we must fix it up. We still want to trust certificates with the old Issuer DN, otherwise we would have to reissue all of them. So we need to keep the old CA certificate in the store, alongside the new.

The process to fix up the certificate store is:

  1. Export the new CA certificate from the Dogtag NSSDB to a file:

    [root@f27-2 ~]# certutil -d /etc/pki/pki-tomcat/alias \
       -L -a -n 'caSigningCert cert-pki-ca' > new-ca.crt
  2. Add the new CA certificate to the certificate store:

    [root@f27-2 ~]# ipa-cacert-manage install new-ca.crt
    Installing CA certificate, please wait
    CA certificate successfully installed
    The ipa-cacert-manage command was successful
  3. Rename (modrdn) the existing cn={REALM-NAME} IPA CA entry. The new cn RDN is based on the old CA Subject DN.
  4. Rename the new CA certificate entry. The current cn is the new Subject DN. Rename it to cn={REALM-NAME} IPA CA. I encountered a 389DS attribute uniqueness error when I attempted to do this as a modrdn operation. I’m not sure why it happened. To work around the problem I deleted the entry and added it back with the new cn. (A sketch of the rename mechanics follows this list.)
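
For steps 3 and 4 the operations are plain LDAP renames within cn=certificates,cn=ipa,cn=etc,{basedn}. A rough sketch of step 3 follows; the base DN dc=ipa,dc=local is an assumption, and you should list the container first to confirm the exact cn values in your deployment (step 4 is the analogous rename, or delete-and-add, of the new certificate’s entry):

ldapsearch -LLL -D 'cn=directory manager' -w DM_PASSWORD \
    -b 'cn=certificates,cn=ipa,cn=etc,dc=ipa,dc=local' -s one cn
ldapmodify -D 'cn=directory manager' -w DM_PASSWORD <<EOF
dn: cn=IPA.LOCAL IPA CA,cn=certificates,cn=ipa,cn=etc,dc=ipa,dc=local
changetype: modrdn
newrdn: cn=CN=Certificate Authority\,O=IPA.LOCAL 201711061603
deleteoldrdn: 1
EOF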

At the end of this procedure the certificate store is as it should be. The CA certificate with new Subject DN is installed as {REALM-NAME} IPA CA and the old CA certificate has been preserved under a different RDN.

Updating certificate databases

The LDAP certificate stores have the new CA certificate. Now we need to update the other certificate databases so that the programs that use them will trust certificates with the new Issuer DN. These databases include:

/etc/ipa/ca.crt

CA trust store used by the IPA framework

/etc/ipa/nssdb

An NSSDB used by FreeIPA

/etc/dirsrv/slapd-{REALM-NAME}

NSSDB used by 389DS

/etc/httpd/alias

NSSDB used by Apache HTTPD

/etc/pki/ca-trust/source/ipa.p11-kit

Adds FreeIPA CA certificates to the system-wide trust store

Run ipa-certupdate to update these databases with the CA certificates from the LDAP CA certificate store:

[root@f27-2 ~]# ipa-certupdate
trying https://f27-2.ipa.local/ipa/json
[try 1]: Forwarding 'schema' to json server 'https://f27-2.ipa.local/ipa/json'
trying https://f27-2.ipa.local/ipa/session/json
[try 1]: Forwarding 'ca_is_enabled/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
failed to update IPA.LOCAL IPA CA in /etc/dirsrv/slapd-IPA-LOCAL: Command '/usr/bin/certutil -d /etc/dirsrv/slapd-IPA-LOCAL -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/dirsrv/slapd-IPA-LOCAL/pwdfile.txt' returned non-zero exit status 255.
failed to update IPA.LOCAL IPA CA in /etc/httpd/alias: Command '/usr/bin/certutil -d /etc/httpd/alias -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/httpd/alias/pwdfile.txt' returned non-zero exit status 255.
failed to update IPA.LOCAL IPA CA in /etc/ipa/nssdb: Command '/usr/bin/certutil -d /etc/ipa/nssdb -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/ipa/nssdb/pwdfile.txt' returned non-zero exit status 255.
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-2 ~]# echo $?
0

ipa-certupdate reported that it was successful and it exited cleanly. But a glance at the output shows that not all went well. There were failures adding the new CA certificate to several NSSDBs. Running one of the commands manually to see the command output doesn’t give us much more information:

[root@f27-2 ~]# certutil -d /etc/ipa/nssdb -f /etc/ipa/nssdb/pwdfile.txt \
    -A -n 'IPA.LOCAL IPA CA' -t C,, -a < ~/new-ca.crt
certutil: could not add certificate to token or database: SEC_ERROR_ADDING_CERT: Error adding certificate to database.
[root@f27-2 ~]# echo $?
255

At this point I guessed that because there is already a certificate stored with the nickname IPA.LOCAL IPA CA, NSS refuses to add a certificate with a different Subject DN under the same nickname. So I will delete the certificates with this nickname from each of the NSSDBs, then try again. For some reason the nickname appeared twice in each NSSDB:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

CN=alt-f27-2.ipa.local,O=Example Organization                u,u,u
CN=CA,O=Example Organization                                 C,,
IPA.LOCAL IPA CA                                             CT,C,C
IPA.LOCAL IPA CA                                             CT,C,C

So for each NSSDB, to delete the certificate I had to execute the certutil command twice. For the 389DS NSSDB, the command was:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -D -n "IPA.LOCAL IPA CA"

The commands for the other NSSDBs were similar. With the problematic certificates removed, I tried running ipa-certupdate again:

[root@f27-2 ~]# ipa-certupdate
trying https://f27-2.ipa.local/ipa/session/json
[try 1]: Forwarding 'ca_is_enabled/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-2 ~]# echo $?
0

This time there were no errors. certutil shows an IPA.LOCAL IPA CA certificate in the database, and it’s the right certificate:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

CN=alt-f27-2.ipa.local,O=Example Organization                u,u,u
CN=CA,O=Example Organization                                 C,,
CN=Certificate Authority,O=IPA.LOCAL 201711061603            CT,C,C
CN=Certificate Authority,O=IPA.LOCAL 201711061603            CT,C,C
IPA.LOCAL IPA CA                                             C,,
[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L -n 'IPA.LOCAL IPA CA'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 11 (0xb)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Thu Nov 09 05:11:12 2017
            Not After : Mon Nov 09 05:11:12 2037
        Subject: "CN=IPA.LOCAL CA 2017.11.09"
        ...

I also confirmed that the old and new CA certificates are present in the /etc/ipa/ca.crt and /etc/pki/ca-trust/source/ipa.p11-kit files. So all the certificate databases now include the new CA certificate.

Renewing the CA certificate (again)

Observe that (in the self-signed FreeIPA CA case) the Issuer DN of the new CA certificate is the Subject DN of the old CA certificate. So we have not quite reached our goal. The original CA certificate was self-signed, so we want a self-signed certificate with the new Subject.

Renewing the CA certificate one more time should result in a self-signed certificate. The current situation is not likely to result in operational issues. So you can consider this an optional step. Anyhow, let’s give it a go:

[root@f27-2 ~]# getcert list -i 20171106062742 | egrep 'status|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=IPA.LOCAL CA 2017.11.09
[root@f27-2 ~]# getcert resubmit -i 20171106062742
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".
[root@f27-2 ~]# sleep 5
[root@f27-2 ~]# getcert list -i 20171106062742 | egrep 'status|issuer|subject'
        status: MONITORING
        issuer: CN=IPA.LOCAL CA 2017.11.09
        subject: CN=IPA.LOCAL CA 2017.11.09

Now we have a self-signed CA cert with the new Subject DN. This step has also confirmed that certificate issuance is working fine with the new CA subject.

Renewing FreeIPA service certificates

This is another optional step, because we have kept the old CA certificate in the trust store. I want to check that certificate renewals via the FreeIPA framework are working, and this is a fine way to do that.

I’ll renew the HTTP service certificate. This deployment is using an externally-signed HTTP certificate so first I had to track it:

[root@f27-2 ~]# getcert start-tracking \
  -d /etc/httpd/alias -p /etc/httpd/alias/pwdfile.txt \
  -n 'CN=alt-f27-2.ipa.local,O=Example Organization' \
  -c IPA -D 'f27-2.ipa.local' -K 'HTTP/f27-2.ipa.local@IPA.LOCAL'
New tracking request "20171121071700" added.

Then I resubmitted the tracking request. I had to include the -N <SUBJECT> option because the current Subject DN would be rejected by FreeIPA. I also had to include the -K <PRINC_NAME> option due to a bug in Certmonger.

[root@f27-2 ~]# getcert resubmit -i 20171121073608 \
  -N 'CN=f27-2.ipa.local' \
  -K 'HTTP/f27-2.ipa.local@IPA.LOCAL'
Resubmitting "20171121073608" to "IPA".
[root@f27-2 ~]# sleep 5
[root@f27-2 ~]# getcert list -i 20171121073608 \
  | egrep 'status|error|issuer|subject'
      status: MONITORING
      issuer: CN=IPA.LOCAL CA 2017.11.09
      subject: CN=f27-2.ipa.local,O=IPA.LOCAL 201711061603

The renewal succeeded, proving that certificate issuance via the FreeIPA framework is working.

Checking replica health

At this point, I’m happy with the state of the FreeIPA server. But so far I have only dealt with one server in the topology (the renewal master, whose hostname is f27-2.ipa.local). What about other CA replicas?

I logged onto f27-1.ipa.local (a CA replica). As a first step I executed ipa-certupdate. This failed in the same way as on the renewal master, and the steps to resolve it were the same.

Next I told Certmonger to renew the CA certificate. On a CA replica this should not actually renew the certificate, only retrieve the new certificate from the LDAP certificate store:

[root@f27-1 ~]# getcert list -i 20171106064548 \
  | egrep 'status|error|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603
[root@f27-1 ~]# getcert resubmit -i 20171106064548
Resubmitting "20171106064548" to "dogtag-ipa-ca-renew-agent".
[root@f27-1 ~]# sleep 30
[root@f27-1 ~]# getcert list -i 20171106064548 | egrep 'status|error|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603

Well, that did not work. Instead of retrieving the new CA certificate from LDAP, the CA replica issued a new certificate:

[root@f27-1 ~]# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 268369927 (0xfff0007)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Tue Nov 21 08:18:09 2017
            Not After : Fri Nov 06 06:26:21 2037
        Subject: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        ...

This was caused by the first problem we faced when renewing the CA certificate with a new Subject DN. Once again, a mismatch between the Subject DN in the CSR and the FreeIPA CA’s Subject DN has confused the renewal helper.

The resolution in this case is to delete all the certificates with nickname caSigningCert cert-pki-ca or IPA.LOCAL IPA CA from Dogtag’s NSSDB, then add the new CA certificate to the NSSDB. Then run ipa-certupdate again. Dogtag must not be running during this process:

[root@f27-1 ~]# systemctl stop pki-tomcatd@pki-tomcat
[root@f27-1 ~]# cd /etc/pki/pki-tomcat/alias
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
certutil: could not find certificate named "caSigningCert cert-pki-ca": SEC_ERROR_BAD_DATABASE: security library: bad database.
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
certutil: could not find certificate named "IPA.LOCAL IPA CA": SEC_ERROR_BAD_DATABASE: security library: bad database.
[root@f27-1 ~]# certutil -d . -A \
    -n 'caSigningCert cert-pki-ca' -t 'CT,C,C' < /root/ipa-ca.pem
[root@f27-1 ~]# ipa-certupdate
trying https://f27-1.ipa.local/ipa/json
[try 1]: Forwarding 'ca_is_enabled' to json server 'https://f27-1.ipa.local/ipa/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-1.ipa.local/ipa/json'
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-1 ~]# systemctl start pki-tomcatd@pki-tomcat

Dogtag started without issue and I was able to issue a certificate via the ipa cert-request command on this replica.

Discussion

It took a while and required a lot of manual effort, but I reached the goal of changing the CA Subject DN. The deployment seems to be operational, although my testing was not exhaustive and there may be breakage that I did not find.

One of the goals was to define the process for both self-signed and externally-signed CAs. I did not deal with the externally-signed CA case. This article (and the process of writing it) was long enough without it! But much of the process, and problems encountered, will be the same.

There are some important concerns and caveats to be aware of.

First, CRLs generated after the Subject DN change may be bogus. They will be issued by the new CA but will contain serial numbers of revoked certificates that were issued by the old CA. Such assertions are invalid but not harmful in practice, because those serial numbers will never be reused with the new CA. (Note that this non-reuse of serial numbers is an implementation detail of Dogtag, not something X.509 guarantees in general.)

But there is a bigger problem related to CRLs. After the CA name change, the old CA will never issue another CRL. This means that revoked certificates with the old Issuer DN will never again appear on a CRL issued by the old CA. Worse, the Dogtag OCSP responder errors when you query the status of a certificate with the old Issuer DN. In sum, this means that there is no way for Dogtag to revoke a certificate with the old Issuer DN. Because many systems "fail open" in the event of missing or invalid CRLs or OCSP errors, this is a potentially severe security issue.

Changing a FreeIPA installation’s CA Subject DN, whether by the procedure outlined in this post or by any other, is unsupported. If you try to do it and break your installation, we (the FreeIPA team) may try to help you recover, to a point. But we can’t guarantee anything. Here be dragons and all that.

If you think you need to change your CA Subject DN and have not read the previous post on this topic, please go and read it. It proposes some alternatives that, if applicable, avoid the messy process and security issues detailed here. Despite showing you how to change a FreeIPA installation’s CA Subject DN, my advice remains: don’t do it. I hope you will heed it.

by ftweedal at November 22, 2017 02:23 AM

November 20, 2017

Fraser Tweedale

Changing a CA’s Subject DN; Part I: Don’t Do That

When you deploy an X.509 certificate authority (CA), you choose a Subject Distinguished Name for that CA. It is sometimes abbreviated as Subject DN, Subject Name, SDN or just Subject.

The Subject DN cannot be changed; it is "for life". But sometimes someone wants to change it anyway. In this article I’ll speculate why someone might want to change a CA’s Subject DN, discuss why it is problematic to do so, and propose some alternative approaches.

What is the Subject DN?

A distinguished name (DN) is a sequence of sets of name attribute types and values. Common attribute types include Common Name (CN), Organisation (O), Organisational Unit (OU), Country (C) and so on. DNs are encoded in ASN.1, but have a well defined string representation. Here’s an example CA subject DN:

CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US

All X.509 certificates contain an Issuer DN field and a Subject DN field. If the same value is used for both issuer and subject, it is a self-signed certificate. When a CA issues a certificate, the Issuer DN on the issued certificate shall be the Subject DN of the CA certificate. This relationship is a "link" in the chain of signatures from some root CA to end entity (or leaf) certificate.

The Subject DN uniquely identifies a CA. It is the CA. A CA can have multiple concurrent certificates, possibly with different public keys and key types. But if the Subject DN is the same, they are just different certificates for a single CA. Corollary: if the Subject DN differs, it is a different CA even if the key is the same.
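
You can inspect these two fields on any certificate. For example, with OpenSSL (assuming a PEM-encoded certificate in cert.pem):

openssl x509 -in cert.pem -noout -subject -issuer

If the two values printed are identical, the certificate is self-signed.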

CA Subject DN in FreeIPA

A standard installation of FreeIPA includes a CA. It can be a root CA or it can be signed by some other CA (e.g. the Active Directory CA of the organisation). As of FreeIPA v4.5 you can specify any CA Subject DN. Earlier versions required the subject to start with CN=Certificate Authority.

If you don’t explicitly specify the subject during installation, it defaults to CN=Certificate Authority, O=EXAMPLE.COM (replace EXAMPLE.COM with the actual realm name).
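
For example, to pick the subject explicitly at install time (a sketch; the --ca-subject option is available in FreeIPA ≥ 4.5, and the DN shown is purely illustrative):

ipa-server-install --ca-subject 'CN=EXAMPLE.COM Root CA,O=Example Inc.' [other installer options]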

Why change the CA Subject DN?

Why would someone want to change a CA’s Subject DN? Usually it is because there is some organisational or regulatory requirement for the Subject DN to have a particular form. For whatever reason the Subject DN doesn’t comply, and now they want to bring it into compliance. In the FreeIPA case, we often see that the default CA Subject DN was accepted, only to later realise that a different name is needed.

To be fair, the FreeIPA installer does not prompt for a CA Subject DN but rather uses the default form unless explicitly told otherwise via options. Furthermore, the CA Subject DN is not mentioned in the summary of the installation parameters prior to confirming and proceeding with the installation. And there are the aforementioned restrictions in FreeIPA < v4.5. So in most cases where a FreeIPA administrator wants to change the CA Subject DN, it is not because they chose the wrong one, rather they were not given an opportunity to choose the right one.

Implications of changing the CA Subject DN

In the X.509 data model the Subject DN is the essence of a CA. So what happens if we do change it? There are several areas of concern, and we will look at each in turn.

Certification paths

Normally when you renew a CA certificate, you don’t need to keep the old CA certificates around in your trust stores. If the new CA certificate is within its validity period you can just replace the old certificate, and everything will keep working.

But if you change the Subject DN, you need to keep the old certificate around, because previously issued certificates will bear the old Issuer DN. Conceptually this is not a problem, but many programs and libraries cannot cope with multiple subjects using the same key. In this case the only workaround is to reissue every certificate, with the new Issuer DN. This is a nightmare.

CRLs

A certificate revocation list is a signed list of non-expired certificates that have been revoked. A CRL issuer is either the CA itself, or a trusted delegate. A CRL signing delegate has its own signing key and an X.509 certificate issued by the CA, which asserts that the subject is a CRL issuer. Like certificates, CRLs have an Issuer DN field.

So if the CA’s Subject DN changes, then CRLs issued by that CA must use the new name in the Issuer field. But recall that certificates are uniquely identified by the Issuer DN and Serial (think of this as a composite primary key). So if the CRL issuer changes (or the issuer of the CRL issuer), all the old revocation information is invalid. Now you must maintain two CRLs:

  • One for the old CA Subject. Even after the name change, this CRL may grow as certificates that were issued using the old CA subject are revoked.
  • One for the new CA Subject. It will start off empty.

If a CRL signing delegate is used, there is further complexity. You need two separate CRL signing certificates (one with the old Issuer DN, one with the new), and each CRL must be issued and signed under the corresponding name.

Suffice to say, a lot of CA programs do not handle these scenarios nicely or at all.

OCSP

The Online Certificate Status Protocol is a protocol for checking the revocation status of a single certificate. Like CRLs, OCSP responses may be signed by the issuing CA itself, or a delegate.

As in the CRL delegation case, different OCSP delegates must be used depending on which DN was the Issuer of the certificate whose status is being checked. If the CA signs OCSP responses directly and identifies the Responder ID by name, then the old or new name must be included, again depending on the Issuer of the certificate in question.

Performing the change

Most CA programs do not offer a way to change the Subject DN. This is not surprising, given that the operation just doesn’t fit into X.509 at all, to say nothing of the implementation considerations that arise.

It may be possible to change the CA Subject DN with some manual effort. In a follow-up post I’ll demonstrate how to change the CA Subject DN in a FreeIPA deployment.

Alternative approaches

I have outlined reasons why renaming a CA is a Bad Idea. So what other options are there?

Whether any of the following options are viable depends on the use case or requirements. They might not be viable. If you have any other ideas about this I would love to have your feedback! So, let’s look at a couple of options.

Do nothing

If you only want to change the CA Subject DN for cosmetic reasons, don’t. Unless there is a clear business or organisational imperative, just accept the way things are. Your efforts would be better spent somewhere else, I promise!

Re-chaining your CA

If there is a requirement for your root CA to have a Subject DN of a particular form, you could create a CA that satisfies the requirement somewhere else (e.g. a separate instance of Dogtag or even a standalone OpenSSL CA). Then you can re-chain your FreeIPA CA up to this new external CA. That is, you renew the CA certificate, but the issuer of the new IPA CA certificate is the new external CA.

The new external CA becomes a trusted root CA, and your FreeIPA infrastructure and clients continue to function as normal. The FreeIPA CA is now an intermediate CA. No certificates need to be reissued, although some server configurations may need to be updated to include the new FreeIPA CA in their certificate chains.
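
In FreeIPA the re-chaining itself is a CA certificate renewal with an externally-signed issuer. A sketch of the flow (option names as in recent ipa-cacert-manage versions; the CSR is typically written to /var/lib/ipa/ca.csr):

# generate a CSR for the existing CA key
ipa-cacert-manage renew --external-ca
# have the new external CA sign the CSR, then install the certificate and chain
ipa-cacert-manage renew --external-cert-file=/path/to/ipa-ca.crt \
    --external-cert-file=/path/to/external-ca.crt
# refresh trust stores on every server and client
ipa-certupdate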

Subordinate CA

If certain end-entity certificates have to be issued by a CA whose Subject DN meets certain requirements, you could create a subordinate CA (or sub-CA for short) with a compliant name. That is, the FreeIPA CA issues an intermediate CA certificate with the desired Subject DN, and that CA issues the leaf certificates.

FreeIPA supports Dogtag lightweight sub-CAs as of v4.4, and there are no restrictions on the Subject DN (except uniqueness). Dogtag lightweight CAs live within the same Dogtag instance as the main FreeIPA CA. See ipa help ca for plugin documentation. One major caveat is that CRLs are not yet supported for lightweight CAs (there is an open ticket).
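
Creating such a sub-CA is a one-liner; for example (the name, Subject DN and description are illustrative):

ipa ca-add puppet --subject 'CN=Puppet CA,O=EXAMPLE.COM' --desc 'CA for puppet certificates'

Certificates can then be requested against it with ipa cert-request --ca puppet.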

You could also use the FreeIPA CA to issue a CA certificate for some other CA program (possibly another deployment of Dogtag or FreeIPA).

Conclusion

In this post I explained what a CA’s Subject DN is, and how it is an integral part of how X.509 works. We discussed some of the conceptual and practical issues that arise when you change a CA’s Subject DN. In particular, path validation, CRLs and OCSP are affected, and a lot of software will break when encountering a "same key, different subject" scenario.

The general recommendation for changing a CA’s subject DN is don’t. But if there is a real business reason why the current subject is unsuitable, we looked at a couple of alternative approaches that could help: re-chaining the CA, and creating sub-CAs.

In my next post we will have an in-depth look at how to change a FreeIPA CA’s Subject DN: how to do it, and how to deal with the inevitable breakage.

by ftweedal at November 20, 2017 06:03 AM

November 15, 2017

Adam Young

Different CloudForms Catalogs for Different Groups

One of the largest value propositions of DevOps is the concept of Self Service provisioning. If you can remove human interaction from resource allocation, you can reduce both the response time and the likelihood of error in configuration. Red Hat CloudForms has a self service feature that allows a user to select from predefined services. You may wish to show different users different catalog items. This might be for security reasons, such as the set of credentials required and provided, or merely to reduce clutter and focus the end user on specific catalog items. Perhaps some items are still undergoing testing and are not ready for general consumption.

Obviously, these predefined services may not match your entire user population.

I’ve been working on setting up a CloudForms instance where members of different groups see different service catalogs. Here is what I did.

Tags are the primary tool used to match up users and their service catalogs. Specifically, a user will only see a catalog item if his group definition matches the Provisioning Scope tag of the catalog item. While you can give some catalog items a Provisioning Scope of All, you probably want to scope other items down to the target audience.

I have a demonstration setup based on IdM and CloudForms integration. When users log in to the CloudForms appliance, one of the user groups managed by LDAP will be used to select their CloudForms group. The CloudForms group has a modified Provisioning Scope tag that will be used to select items from the service catalog.

I also have a top level tenant named “North America” that is used to manage the scope of the tags later on.  I won’t talk through setting this up, as most CloudForms deployments have something set as a top level tenant.

I’m not going to go through the steps to create a new catalog item.  There are other tutorials that go through this in detail.

My organization is adding support for statisticians.  Specifically, we need to provide support for VMs that are designed to support a customized version of the R programming environment.  All users that need these systems will be members of the stats group in IdM.  We want to be able to tag these instances with the stats Provisioning Scope as well.  The user is in the cloudusers group as well, which is required to provide access to the CloudForms appliance.

We start by having our sample user log in to the web UI.  This has the side effect of prepopulating the user and group data.  We could do this manually, but this way is less error prone, if a bit more of a hassle.

My user currently only has a single item in her service catalog; the PostgreSQL appliance we make available to all developers.  This allows us to have a standard development environment for database work.

Log out and log back in as an administrator.  Here comes the obscure part.

Provisioning Scope tags are limited to a set of valid values.  These values are, by default, All or EVMGroup-user_self_service.  This second value matches a group with the same name.  In order to add an option, we need to modify the tag category associated with this tag.

  1. As an administrator, on the top right corner of the screen, click on your user name, and select the Configuration option from the dropdown.
  2. Select your region, in my case this is region 1.
  3. Across the top of the screen, you  will see Settings Region 1, and a series of tabs, most of which have the name of your tenant  (those of you that know my long standing issue with this term are probably grinning at my discomfort).  Since my top level tenant is “North America” I have a tab called North America Tags which I select. Select accordingly.
  4. Next to Category select “Provisioning Scope” from the drop down and you can see my existing set of custom tag values for Provisioning Scope.  Click on <New Entry> to add a new value, which I will call stats. I also use stats for the description.
  5. Click the Add button to the right.  See Below.

Now we can edit the newly defined “R Project” service to limit it to this provisioning scope.

  1. Navigate to Services->Catalogs->Catalog Items.
  2. Select the “R Project” Service.
  3. Click on the Policy  dropdown and select “Edit Tags”
  4. Click on the drop down to the right of “Select a customer tag to assign” (it is probably set on “Auto Approve -Max CPU *”) and scroll down to Provisioning Scope.
  5. The dropdown to the right defaults to “<Select a Value to Assign>”. Select this and scroll down to the new value.  For me, this is stats.  The new item will be added to the list.
  6. Click the Save button in the lower right of the screen.

Your list should look like this:

Finally, create the association between this provisioning scope and the stats group.

  1. From the dropdown on the top right of the screen that has your username, select Configuration.
  2. Expand the Access Control accordion
  3. Select groups.
  4. From the Configuration dropdown, select “Add a new Group”
  5. Select a Role for the user.  I use EvmRole-user_self_service
  6. Select a Project/Tenant for the user.
  7. Click on the checkbox labeled “Look Up External Authentication Groups”
  8. A new field appears called “User to Look Up.”  I am going to use the “statuser” I created for this example, and click retrieve.
  9. The dropdown under the LDAP Groups for User is now populated.  I select stats.

To assign the tag for this group:

  1. Scroll down to the bottom of the page
  2. find and expand the “Provisioning Scope” tag
  3. Select “stats”
  4. Click the Add button in the bottom right corner of the page.

See Below.

Now when statuser logs in  to the self service web UI, they see both of the services provided:

 

One Big Caveat that has messed me up a few times:  a user only has one group active at a time.  If a user is a member of two groups, CloudForms will select one of them as the active group.  Services assigned only to the non-active group will not show up in the service catalog.  In my case, I had a group called cloudusers, and since all users are a member of that group, they would only see the Provisioning Scope, and thus the catalog items, for cloudusers, and not the stats group.

The Self Service webUI allows the user to change group to any of the other groups to which they are assigned.

The best option is to try to maintain a one-to-many relationship between groups and users: constrain most users to a single group to avoid confusion.

This has been a long post.  The web UI for CloudForms requires a lot of navigation, and the concepts required to get this to work required more explanation than I originally had planned.  As I get more familiar with CloudForms, I’ll try to show how these types of operations can be automated from the command line, converted to Ansible playbooks, and thus checked in to version control.

I’ve also been told that, for simple use cases, it is possible to just put the user groups into separate tenants, and they will see different catalogs.  While that does not allow a single item to be in both catalogs, it is significantly easier to set up.

A Big Thank You to Laurent Domb for editing and corrections.

by Adam Young at November 15, 2017 02:37 AM

November 14, 2017

Nathaniel McCallum

Writing Installer Images Directly With WebUSB

Chrome 61 recently released support for the WebUSB JavaScript API. This allows direct access to USB devices from websites. Somebody should build a website that takes distribution ISOs and writes them directly to USB mass storage devices. This would significantly improve one of the most difficult and error-prone steps when installing a Linux distribution such as Fedora.

November 14, 2017 08:31 PM

November 10, 2017

William Brown

Creating yubikey SSH and TLS certificates

Creating yubikey SSH and TLS certificates

Recently yubikeys were shown to have a hardware flaw in the way they generated private keys. This affects their use for providing PIV identities or SSH keys.

However, you can generate the keys externally, and load them to the key to prevent this issue.

SSH

First, we’ll create a new NSS DB on an airgapped secure machine (with disk encryption or in memory storage!)

certutil -N -d . -f pwdfile.txt

Now into this, we’ll create a self-signed cert valid for 10 years.

certutil -S -f pwdfile.txt -d . -t "C,," -x -n "SSH" -g 2048 -s "cn=william,O=ssh,L=Brisbane,ST=Queensland,C=AU" -v 120

We export this now to PKCS12 for our key to import.

pk12util -o ssh.p12 -d . -k pwdfile.txt -n SSH

Next we import the key and cert to the hardware in slot 9c

yubico-piv-tool -s9c -i ssh.p12 -K PKCS12 -aimport-key -aimport-certificate -k

Finally, we can display the ssh-key from the token.

ssh-keygen -D /usr/lib64/opensc-pkcs11.so -e

Note, we can make this always used by ssh client by adding the following into .ssh/config:

PKCS11Provider /usr/lib64/opensc-pkcs11.so

TLS Identities

The process is almost identical for user certificates.

First, create the request:

certutil -d . -R -a -o user.csr -f pwdfile.txt -g 4096 -Z SHA256 -v 24 \
--keyUsage digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment --nsCertType sslClient --extKeyUsage clientAuth \
-s "CN=username,O=Testing,L=example,ST=Queensland,C=AU"

Once the request is signed, we should have a user.crt back. Import that to our database:

certutil -A -d . -f pwdfile.txt -i user.crt -a -n TLS -t ",,"

Import our CA certificate also. Next export this to p12:

pk12util -o user.p12 -d . -k pwdfile.txt -n TLS

Now import this to the yubikey - remember to use slot 9a this time!

yubico-piv-tool -s9a -i user.p12 -K PKCS12 -aimport-key -aimport-certificate -k

Done!

November 10, 2017 02:00 PM

Fraser Tweedale

Changing the X.509 signature algorithm in FreeIPA

X.509 certificates are an application of digital signatures for identity verification. TLS uses X.509 to create a chain of trust from a trusted CA to a service certificate. An X.509 certificate binds a public key to a subject by way of a secure and verifiable signature made by a certificate authority (CA).

A signature algorithm has two parts: a public key signing algorithm (determined by the type of the CA’s signing key) and a collision-resistant hash function. The hash function digests the certified data into a small value for which it is hard to find a collision, and that digest is what gets signed.

Computers keep getting faster and attacks on cryptography always get better. So over time older algorithms need to be deprecated, and newer algorithms adopted for use with X.509. In the past the MD5 and SHA-1 digests were often used with X.509, but today SHA-256 (a variant of SHA-2) is the most used algorithm. SHA-256 is also the weakest digest accepted by many programs (e.g. web browsers). Stronger variants of SHA-2 are widely supported.

FreeIPA currently uses the sha256WithRSAEncryption signature algorithm by default. Sometimes we get asked about how to use a stronger digest algorithm. In this article I’ll explain how to do that and discuss the motivations and implications.

Implications of changing the digest algorithm

Unlike re-keying or changing the CA’s Subject DN, re-issuing a certificate signed by the same key, but using a different digest, should Just Work. As long as a client knows about the digest algorithm used, it will be able to verify the signature. It’s fine to have a chain of trust that uses a variety of signature algorithms.

Configuring the signature algorithm in FreeIPA

The signature algorithm is configured in each Dogtag certificate profile. Different profiles can use different signature algorithms. The public key signing algorithm depends on the CA’s key type (e.g. RSA) so you can’t change it; you can only change the digest used.

Modifying certificate profiles

Before FreeIPA 4.2 (RHEL 7.2), Dogtag stored certificate profile configurations as flat files. Dogtag 9 stores them in /var/lib/pki-ca/profiles/ca and Dogtag >= 10 stores them in /var/lib/pki/pki-tomcat/ca/profiles/ca. When Dogtag is using file-based profile storage you must modify profiles on all CA replicas for consistent behaviour. After modifying a profile, Dogtag requires a restart to pick up the changes.

As of FreeIPA 4.2, Dogtag uses LDAP-based profile storage. Changes to profiles get replicated among the CA replicas, so you only need to make the change once. Restart is not required. The ipa certprofile plugin provides commands for importing, exporting and modifying certificate profiles.

Because of the variation among versions, I won’t detail the process of modifying profiles. We’ll look at what modifications to make, but skip over how to apply them.

Profile configuration changes

For service certificates, the profile to modify is caIPAserviceCert. If you want to renew the CA signing cert with a different algorithm, modify the caCACert profile. The relevant profile policy components are signingAlgConstraintImpl and signingAlgDefaultImpl. Look for these components in the profile configuration:

policyset.serverCertSet.8.constraint.class_id=signingAlgConstraintImpl
policyset.serverCertSet.8.constraint.name=No Constraint
policyset.serverCertSet.8.constraint.params.signingAlgsAllowed=SHA1withRSA,SHA256withRSA,SHA512withRSA,MD5withRSA,MD2withRSA,SHA1withDSA,SHA1withEC,SHA256withEC,SHA384withEC,SHA512withEC
policyset.serverCertSet.8.default.class_id=signingAlgDefaultImpl
policyset.serverCertSet.8.default.name=Signing Alg
policyset.serverCertSet.8.default.params.signingAlg=-

Update the policyset.<name>.<n>.default.params.signingAlg parameter; replace the - with the desired signing algorithm. (I set it to SHA512withRSA.) Ensure that the algorithm appears in the policyset.<name>.<n>.constraint.params.signingAlgsAllowed parameter (if not, add it).
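
With LDAP-based profile storage (FreeIPA ≥ 4.2) the round trip might look like this sketch, using the ipa certprofile commands mentioned earlier (option names as in recent versions):

ipa certprofile-show caIPAserviceCert --out=caIPAserviceCert.cfg
# edit caIPAserviceCert.cfg: set signingAlg=SHA512withRSA and check signingAlgsAllowed
ipa certprofile-mod caIPAserviceCert --file=caIPAserviceCert.cfg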

After applying this change, certificates issued using the modified profile will use the specified algorithm.

Results

After modifying the caIPAserviceCert profile, we can renew the HTTP certificate and see that the new certificate uses SHA512withRSA. Use getcert list to find the Certmonger tracking request ID for this certificate. We find the tracking request in the output:

...
Request ID '20171109075803':
  status: MONITORING
  stuck: no
  key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
  certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
  CA: IPA
  issuer: CN=Certificate Authority,O=IPA.LOCAL
  subject: CN=rhel69-0.ipa.local,O=IPA.LOCAL
  expires: 2019-11-10 07:53:11 UTC
  ...
...

So the tracking request ID is 20171109075803. Now resubmit the request:

[root@rhel69-0 ca]# getcert resubmit -i 20171109075803
Resubmitting "20171109075803" to "IPA".

After a few moments, check the status of the request:

[root@rhel69-0 ca]# getcert list -i 20171109075803
Number of certificates and requests being tracked: 8.
Request ID '20171109075803':
  status: MONITORING
  stuck: no
  key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
  certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
  CA: IPA
  issuer: CN=Certificate Authority,O=IPA.LOCAL
  subject: CN=rhel69-0.ipa.local,O=IPA.LOCAL
  expires: 2019-11-11 00:02:56 UTC
  ...

We can see by the expires field that renewal succeeded. Pretty-printing the certificate shows that it is using the new signature algorithm:

[root@rhel69-0 ca]# certutil -d /etc/httpd/alias -L -n 'Server-Cert'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 12 (0xc)
        Signature Algorithm: PKCS #1 SHA-512 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL"
        Validity:
            Not Before: Fri Nov 10 00:02:56 2017
            Not After : Mon Nov 11 00:02:56 2019
        Subject: "CN=rhel69-0.ipa.local,O=IPA.LOCAL"

It is using SHA-512/RSA. Mission accomplished.

Discussion

In this article I showed how to configure the signing algorithm in a Dogtag certificate profile. Details about how to modify profiles in particular versions of FreeIPA were out of scope.

In the example I modified the default service certificate profile caIPAserviceCert to use SHA512withRSA. Then I renewed the HTTP TLS certificate to confirm that the configuration change had the intended effect. To change the signature algorithm on the FreeIPA CA certificate, you would modify the caCACert profile then renew the CA certificate. This would only work if the FreeIPA CA is self-signed. If it is externally-signed, it is up to the external CA what digest to use.

In FreeIPA version 4.2 and later, we support the addition of custom certificate profiles. If you want to use a different signature algorithm for a specific use case, instead of modifying the default profile (caIPAserviceCert) you might add a new profile.

The default signature digest algorithm in Dogtag is currently SHA-256. This is appropriate for the present time. There are few reasons why you would need to use something else. Usually it is because of an arbitrary security decision imposed on FreeIPA administrators. There are currently no plans to make the default signature algorithm configurable. But you can control the signature algorithm for a self-signed FreeIPA CA certificate via the ipa-server-install --ca-signing-algorithm option.

In the introduction I mentioned that the CA’s key type determines the public key signature algorithm. That was hand-waving; some key types support multiple signature algorithms. For example, RSA keys support two signature algorithms: PKCS #1 v1.5 and RSASSA-PSS. The latter is seldom used in practice.

The SHA-2 family of algorithms (SHA-256, SHA-384 and SHA-512) are the "most modern" digest algorithms standardised for use in X.509 (RFC 4055). The Russian GOST R digest and signature algorithms are also supported (RFC 4491) although support is not widespread. In 2015 NIST published SHA-3 (based on the Keccak sponge construction). The use of SHA-3 in X.509 has not yet been standardised. There was an Internet-Draft in 2017, but it expired. The current cryptanalysis of SHA-2 suggests there is no urgency to move to SHA-3. But it took a long time to move from SHA-1 (which is now insecure for applications requiring collision resistance) to SHA-2. Therefore it would be good to begin efforts to standardise SHA-3 in X.509 and add library/client support as soon as possible.

by ftweedal at November 10, 2017 04:10 AM

November 06, 2017

William Brown

What's the problem with NUMA anyway?

What’s the problem with NUMA anyway?

What is NUMA?

Non-Uniform Memory Architecture is a method of separating RAM and memory management units so that they are associated with CPU sockets. The reason for this is performance - if multiple sockets shared an MMU, they would cause each other to block, delaying your CPU.

To improve this, each NUMA region has its own MMU and RAM associated. If a CPU can access its local MMU and RAM, this is very fast, and does not prevent another CPU from accessing its own. For example:

CPU 0   <-- QPI --> CPU 1
  |                   |
  v                   v
MMU 0               MMU 1
  |                   |
  v                   v
RAM 0               RAM 1

For example, on the following system, we can see one NUMA region:

# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 12188 MB
node 0 free: 458 MB
node distances:
node   0
  0:  10

On this system, we can see two:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 0 size: 32733 MB
node 0 free: 245 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 32767 MB
node 1 free: 22793 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

This means that on the second system there is 32GB of RAM accessible per NUMA region, but the system has 64GB in total.

The problem

The problem arises when a process running on NUMA region 0 has to access memory from another NUMA region. Because there is no direct connection between CPU 0 and RAM 1, we must communicate with our neighbour CPU 1 to do this for us. That is:

CPU 0 --> CPU 1 --> MMU 1 --> RAM 1

Not only do we pay a time delay price for the QPI communication between CPU 0 and CPU 1, but now CPU 1’s processes are waiting on MMU 1 because we are retrieving memory on behalf of CPU 0. This is very slow (and can be seen by the node distances in the numactl --hardware output).

Today’s work around

The work around today is to limit your Directory Server instance to a single NUMA region. So for our example above, we would limit the instance to NUMA region 0 or 1, and treat the instance as though it only has access to 32GB of local memory.
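
For example, you could start the instance bound to region 0 with numactl (a sketch only; the ns-slapd arguments are illustrative, and in practice you would set this in the service’s unit file or init script rather than run it by hand):

numactl --cpunodebind=0 --membind=0 /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-EXAMPLE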

It’s possible to run two instances of DS on a single server, pinning them to their own regions and using replication between them to provide synchronisation. You’ll need a load balancer to fix up the TCP port changes, or you need multiple addresses on the system for listening.

The future

In the future, we’ll be adding support for better copy-on-write techniques that allow the cores to better cache content after a QPI negotiation - but we still have to pay the transit cost. We can minimise this as much as possible, but there is no way today to avoid this penalty. To use all your hardware on a single instance, there will always be a NUMA cost somewhere.

The best solution is as above: run an instance per NUMA region, and internally provide replication for them. Perhaps we’ll support an automatic configuration of this in the future.

November 06, 2017 02:00 PM

October 24, 2017

Red Hat Blog

Understanding Identity Management Client Enrollment Workflows

Enrolling a client system into Identity Management (IdM) can be done with a single command, namely: ipa-client-install. This command will configure SSSD, Kerberos, Certmonger and other elements of the system to work with IdM. The important result is that the system will get an identity and key so that it can securely connect to IdM and perform its operations. However, to get the identity and key, the system must be trusted; otherwise any other system would be able to register and interact with the server. To confirm trust there are four different options:

1. Enrollment by a High Privileged Admin

If the ipa-client-install command is executed by a high privileged admin and this admin uses his or her password to run the command, the client will first use Kerberos to authenticate the admin and will then send a request to the server to perform a client registration as admin. The server will check what the administrator is allowed to do. There are two different permissions at play in this sequence: one is the right to create a host entry and the other is the right to provision the key. Since this admin has high privileges, the server will create a new host entry for the client and return a generated Kerberos key that the client will store in a file called a keytab. Once this operation is complete other configuration steps will continue, but they are the same in all four provisioning options.
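
As a minimal sketch (assuming DNS discovery of the IdM domain and standard ipa-client-install options), enrollment by a high privileged admin could look like this:

# Enroll this system using the admin account; -W prompts for the admin's
# password instead of placing it on the command line.
ipa-client-install --principal admin -W --mkhomedir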

2. Enrollment by a Low Privileged Admin

If an admin does not have privileges to create a client host entry but has the permission to provision the key to the client, the host entries need to be pre-created. To pre-create entries you will need to define a special account and allow it only to register clients (i.e. create host entries), without giving it permissions to do any other administrative activity. You can then use this account in your scripts or with the automatic provisioning tool of your choice. This account, or the high level admin, will first pre-create host entries in IdM and then the script or low privileged admin can actually “do” the job of provisioning the keys to the client systems. This approach works fine except that it leads to a password being stored verbatim in the scripts or somewhere in a file or in a source control system. Needless to say – from a security point of view – this is not the best approach.
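
A rough sketch of this flow, using hypothetical names (a registration account called enroller and a host called client1.example.com), might look like the following:

# On an already-enrolled admin system: pre-create the host entry.
ipa host-add client1.example.com --ip-address=192.0.2.10

# On client1 itself: enroll with the low privileged account, which only needs
# the permission to provision the key for the pre-created entry.
ipa-client-install --principal enroller -W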

3. Enrollment Using One Time Password

An improvement over the previous option is to use a one time registration password. This approach mostly targets automated provisioning performed by a provisioning system; a command-level sketch follows the list. Red Hat Satellite 6, for example, is capable of provisioning systems and enrolling them with IdM automatically using this method. The flow of operations includes:

  • User initiates the provisioning operation
  • Provisioning server (e.g. Satellite 6) connects to IdM and registers a future host. It is implied that the server has permission to do so.
  • IdM returns a registration password that can be used only once.
  • Provisioning server passes the registration password to the system being deployed.
  • The system being deployed is synthesized and booted.
  • During this first boot the ipa-client-install script is invoked with the registration password.
  • The IdM server recognizes the code and completes enrollment returning the key.
  • After this a normal flow of client configuration continues.
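
A command-level sketch of this flow (hostname and password value are illustrative placeholders):

# On the provisioning side: register the future host and ask for a random
# one time password.
ipa host-add client2.example.com --random
#   Random password: <one-time-value printed by the server>

# On the system being deployed, during its first boot: enroll with that value.
ipa-client-install --password '<one-time-value>' --unattended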

A similar approach is being implemented in the Nova component of OpenStack, targeting OpenStack 12. In the OpenStack case this procedure is used to give an identity to the OpenStack nodes so that they can automatically acquire certificates from IdM for all point-to-point communication between services inside OpenStack.

There is also a community effort to build a set of Ansible modules that would use the same method and enroll clients leveraging Ansible as an orchestration engine.

4. Re-enrollment

Finally, in some cases, an already provisioned system needs to be re-enrolled. This usually happens when the system is re-imaged and re-installed. In this case, going through the registration sequence again is an overhead. Instead, the file with the Kerberos key can be backed up and reused once the system is re-imaged and restored. The client will then authenticate using the old key and request a new key. Please note that the configuration files will also be brought into a canonical state, so if you did some manual or automated customization of the configuration, these changes will be lost. This method is also handy when seeking to repair the configuration of a client, or to perform client key rotation if your policies require periodic key rotation.
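
For example (a minimal sketch; the backup path is hypothetical), re-enrollment with a previously backed up keytab could look like this:

# Before re-imaging: keep a copy of the client keytab somewhere safe.
cp /etc/krb5.keytab /backup/client.example.com.keytab

# After re-imaging: authenticate with the old key and obtain a fresh one.
ipa-client-install --keytab /backup/client.example.com.keytab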

As a part of the Ansible effort another use case has been identified. What if a client system was lost? It was, for example, a virtual machine in a cloud and was killed for some reason. The server would think that the system is fully functional but in reality the keys are gone on the client side. In this situation one first has to disable the host before attempting to enroll it again. Such a step complicates the provisioning sequence, which should be idempotent to be effective in automation tools like Ansible. This limitation will be addressed in later versions of IdM, allowing the enrollment procedure to be less cumbersome.

Questions? Comments? Are you using an entirely different workflow for client enrollment? As always – I look forward to hearing your thoughts.

by Dmitri Pal at October 24, 2017 02:42 PM

October 20, 2017

Nathaniel McCallum

Introducing libiso8601

Four years ago I needed a library for parsing ISO 8601 dates in C. After I wrote most of it, we ended up going in a different direction. This code has sat on my computer since then. But no more!

This week I polished it up and pushed it to GitHub. The library is fully tested (with >98% code coverage) and handles not only all the ISO 8601 standard formats but many common non-standard variations as well.

Here’s an example of how to use it:

#include <iso8601.h>
#include <assert.h>
#include <string.h>

int main() {
    iso8601_time time = {};
    char str[128] = {};

    iso8601_parse("2010-02-14T13:14:23.123456Z", &time);

    assert(time.year == 2010);
    assert(time.month == 2);
    assert(time.day == 14);
    assert(time.hour == 13);
    assert(time.minute == 14);
    assert(time.second == 23);
    assert(time.usecond == 123456);

    iso8601_unparse(&time, ISO8601_FLAG_NONE, 4, ISO8601_FORMAT_WEEKDATE,
                    ISO8601_TRUNCATE_DAY, sizeof(str), str);

    assert(strcmp(str, "2010-W06-7") == 0);
    return 0;
}

I’d love to get some review of the API before I release the first version. So if you’re into telling people how bad their code is, please wander this way!

October 20, 2017 10:45 PM

October 18, 2017

Adam Young

Deliberate Elevation of Privileges

“Ooops.” — Me, doing something as admin that I didn’t mean to do.

While the sudo mechanism has some warranted criticism, it is still an improvement on doing everything as the root account. The essential addition that sudo provides for the average sys admin is the ability to only grant themselves system admin when they explicitly want it.

I was recently thinking about a FreeIPA based cluster where the users did not realize that they could get admin permissions by adding themselves to the user group admins. One benefit of the centralized admin account is that a user has to choose to operate as admin to perform the operation. If a hacker gets the user’s password, they do not get admin. However, the attacks and weaknesses of this approach far outweigh the benefits. Multiple people need to know the password, revoking it for one revokes it for everyone, anyone can change the password, locking everyone else out, and so on.

We instead added a few key individuals to the admins group and changed the password on the admin account.

This heightened degree of security supports the audit trail. Now if someone performs an admin operation, we know which user did it. It involves enabling audit on the Directory Server (I need to learn how to do this!).

It got me thinking, though: is there a mechanism like the sudo approach that we could implement for users to temporarily elevate themselves to admin status? Something like a short term group membership. The requirements, as I see them, are these:

  1. A user has to choose to be admin:  “admin-powers activate!”
  2. A user can downgrade back to non-admin at any point: “admin-powers deactivate!”
  3. Admin powers wear off: admin-powers only last an hour
  4. No new password has to be memorized for admin-powers
  5. The mechanism for admin-powers has to be resistant to attack.
    1. customizable enough that someone outside the organization can’t guess what they are.
    2. provide some way to prevent shoulder surfing.

I’m going to provide a straw-man here.

  • A REST API protected via SPNEGO
    • another endpoint with client cert possible, too
  • The REST API is password protected with basic-auth.  This is the group password.
  • The IPA service running the web server has the ability to add anyone that is in the “potentialadmins” group to the “admins” group
  • The IPA service also schedules an AT job to remove the user from the group.  If an AT entry already exists, remove the older one, so a user can extend their window.
  • A cron job runs each night to remove anyone from the admin group that does not have a current at job scheduled.

As I said, a strawman, but I think it points in the right direction.  Thoughts?
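
To make the straw-man a little more concrete, here is a rough sketch of the group manipulation it implies (the user name and timing are examples, and the scheduled job itself would need credentials, e.g. a keytab, to run the ipa command):

# Grant admin powers now...
ipa group-add-member admins --users=alice

# ...and schedule their automatic removal in an hour.
echo "ipa group-remove-member admins --users=alice" | at now + 1 hour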

by Adam Young at October 18, 2017 07:31 PM

James Shubin

Copyleft is Dead. Long live Copyleft!

As you may have noticed, we recently re-licensed mgmt from the AGPL (Affero General Public License) to the regular GPL. This is a post explaining that decision, and it hopefully includes some insights at the intersection of technology and legal issues.

Disclaimer:

I am not a lawyer, and these are not necessarily the opinions of my employer. I think I’m knowledgeable in this area, but I’m happy to be corrected in the comments. I’m friends with a number of lawyers, and they like to include disclaimer sections, so I’ll include this so that I blend in better.

Background:

It’s well understood in infrastructure coding that the control of, and trust in the software is paramount. It can be risky basing your business off of a product if the vendor has the ultimate ability to change the behaviour, discontinue the software, make it prohibitively expensive, or in the extreme case, use it as a backdoor for corporate espionage.

While many businesses have realized this, it’s unfortunate that many individuals have not. The difference might be protecting corporate secrets vs. individual freedoms, but that’s a discussion for another time. I use Fedora and GNOME, and don’t have any Apple products, but you might value the temporary convenience more. I also support your personal choice to use the software you want. (Not sarcasm.)

This is one reason why Red Hat has done so well. If they ever mistreated their customers, they’d be able to fork and grow new communities. The lack of an asymmetrical power dynamic keeps customers feeling safe and happy!

Section 13:

The main difference between the AGPL and the GPL is the “Remote Network Interaction” section. Here’s a simplified explanation:

Both licenses require that if you modify the code, you give back your contributions. “Copyleft” is Copyright law that legally requires this share-alike provision. These licenses never require this when using the software privately, whether as an individual or within a company. The thing that “activates” the licenses is distribution. If you sell or give someone a modified copy of the program, then you must also include the source code.

The AGPL extends the GPL in that it also activates the license if that software runs on an application provider’s computer, which is common with hosted software-as-a-service. In other words, if you were an external user of a web calendaring solution containing AGPL software, then that provider would have to offer up the code to the application, whereas the GPL would not require this. Neither license would require distribution of code if the application was only available to employees of that company, nor would either require distribution of the software used to deploy the calendaring software.

Network Effects and Configuration Management:

If you’re familiar with the infrastructure automation space, you’re probably already aware of three interesting facts:

  1. Hosted configuration management as a service probably isn’t plausible
  2. The infrastructure automation your product uses isn’t the product
  3. Copyleft does not apply to the code or declarations that describe your configuration

As a result of this, it’s unlikely that the Section 13 requirement of the AGPL would actually ever apply to anyone using mgmt!

A number of high profile organizations outright forbid the use of the AGPL. Google and Openstack are two notable examples. There are others. Many claim this is because the cost of legal compliance is high. One argument I heard is that it’s because they live in fear that their entire proprietary software development business would be turned on its head if some sufficiently important library was AGPL. Despite weak enforcement, and with many companies flouting the GPL, Linux and the software industry have not shown signs of waning. Compliance has even helped their bottom line.

Nevertheless, as a result of misunderstanding, fear and doubt, using the AGPL still cuts off a portion of your potential contributors. Possible overzealous enforcing has also probably caused some to fear the GPL.

Foundations and Permissive Licensing:

Why use copyleft at all? Copyleft is an inexpensive way of keeping the various contributors honest. It provides an organization constitution so that community members that invest in the project all get a fair, representative stake.

In the corporate world, there is a lot of governance in the form of “foundations”. The most well-known ones exist in the United States and are usually classified as 501(c)(6) under US Federal tax law. They aren’t allowed to generate a profit, but they exist to fulfill the desires of their dues-paying membership. You’ve probably heard of the Linux Foundation, the .NET foundation, the OpenStack Foundation, and the recent Linux Foundation child, the CNCF. With the major exception being Linux, they primarily fund permissively licensed projects since that’s what their members demand, and the foundation probably also helps convince some percentage of their membership into voluntarily contributing back code.

Running an organization like this is possible, but it certainly adds a layer of overhead that I don’t think is necessary for mgmt at this point.

It’s also interesting to note that of the top corporate contributions to open source, virtually all of the licensing is permissive, usually under the Apache v2 license. I’m not against using or contributing to permissively licensed projects, but I do think there’s a danger if most of our software becomes a monoculture of non-copyleft, and I wanted to take a stand against that trend.

Innovation:

I started mgmt to show that there was still innovation to be done in the automation space, and I think I’ve achieved that. I still have more to prove, but I think I’m on the right path. I also wanted to innovate in licensing by showing that the AGPL isn’t actually  harmful. I’m sad to say that I’ve lost that battle, and that maybe it was too hard to innovate in too many different places simultaneously.

Red Hat has been my main source of funding for this work up until now, and I’m grateful for that, but I’m sad to say that they’ve officially set my time quota to zero. Without their support, I just don’t have the energy to innovate in both areas. I’m sad to say it, but I’m more interested in the technical advancements than I am in the licensing progress it might have brought to our software ecosystem.

Conclusion / TL;DR:

If you, your organization, or someone you know would like to help fund my mgmt work either via a development grant, contract or offer of employment, or if you’d like to be a contributor to the project, please let me know! Without your support, mgmt will die.

Happy Hacking,

James

You can follow James on Twitter for more frequent updates and other random noise.

EDIT: I mentioned in my article that: “Hosted configuration management as a service probably isn’t plausible“. Turns out I was wrong. The splendiferous Nathen Harvey was kind enough to point out that Chef offers a hosted solution! It’s free for five hosts as well!

I was probably thinking more about how I would be using mgmt, and not about the greater ecosystem. If you’d like to build or use a hosted mgmt solution, please let me know!


by purpleidea at October 18, 2017 01:22 AM

October 06, 2017

Red Hat Blog

Picking your Deployment Architecture

In the previous post I talked about Smart Card Support in Red Hat Enterprise Linux. In this article I will drill down into how to select the right deployment architecture depending on your constraints, requirements and availability of the smart card related functionality in different versions of Red Hat Enterprise Linux.

To select the right architecture for a deployment where users would authenticate using smart cards when logging into Linux systems you need to answer a couple of questions.

The main one is “where are my users” and thus “where are my users authenticated”? Are your users going to be in Active Directory, in IdM, or are they in some other solution? If they are somewhere other than AD or IdM the situation might require a deeper dive, so please reach out to your technical account manager or sales representative. If you want to keep users in Active Directory and have AD as an authoritative source for the account information and authentication you can do it in two ways. The preferred way is to deploy IdM to manage your Linux environment and establish trust with AD. However this will work only with clients that run version 7.3 and later, since they have an SSSD capable of working with Active Directory and understanding smart card authentication. For older, i.e. 6.x, clients you might in this case have to use pam_pkcs11 and manage mapping files.

The alternative, if for some valid reason you really can’t use trusts (which are highly recommended), would be to deploy IdM and sync accounts from AD. In this case clients 6.8+ and 7.2+ can work against IdM and you will be synchronizing user accounts from Active Directory to IdM. This integration is less preferable since the synchronization approach is much less robust than a trust approach. In this setup AD becomes the source of accounts but real authentication happens against IdM, so if you need authentication auditing you need to do it against IdM.

You can also deploy IdM without ongoing synchronization with AD and manage accounts for your Linux environment purely in IdM. This will work with 6.8+ and 7.2+ clients. And with 7.4 clients you will be able to get Kerberos tickets as a part of smart card authentication allowing Kerberos based SSO between servers and services.

A couple other questions need to be answered.

    • Can I avoid using IdM? Yes, since 7.3 you can connect SSSD directly to AD and use smart card authentication. With older clients you will have to do mapping via files as described above.
    • How can I handle a small set of Windows servers I have in those scenarios?
      • If you have Active Directory and your users are in Active directory you can connect your Windows systems to Active Directory.
      • If your users are in IdM and there is no AD in the picture there are some ways to configure Windows systems to work with IdM accounts. However this functionality is limited and not supported out of the box. To see what can be done on this front contact your TAM or sales representative. In the future it will be possible to have IdM be the authoritative source for users and expose those users to Windows systems. That would require a feature that is being worked on in IdM’s upstream project – FreeIPA. It is called a Global Catalog. With the Global Catalog, users managed by IdM can be exposed to a trusted AD domain and then Windows systems can be connected to such a domain. If you are interested in testing such functionality please reach out to the FreeIPA team using the community mailing lists or by opening a case with Red Hat support.

Scenario 1:

So let us take the case where users will be in IdM, certificates are issued by an external CA, and it is either a green field deployment or you can upgrade your clients to 7.4. Here is what it will entail (a rough command sketch follows the list):

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Create or load users into IdM
  3. Map certificates to user entries in IdM
  4. Install your clients using 7.4 and ipa-client-install script
  5. Prepare for smart card authentication on clients and server
  6. Test your smart card authentication on those clients
    1. Console login
    2. SSH (locally)
    3. SSH (remotely)
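
A rough command sketch of steps 3 to 5 (the user name and file names are examples; the commands shown are those shipped with IdM in Red Hat Enterprise Linux 7.4):

# Load a certificate extracted from the card into the matching user entry.
ipa user-add-cert jdoe --certificate="$(openssl x509 -in jdoe.pem -outform der | base64 -w 0)"

# Generate the smart card preparation scripts for the server and the clients,
# then review and run them on the respective systems.
ipa-advise config-server-for-smart-card-auth > server_sc.sh
ipa-advise config-client-for-smart-card-auth > client_sc.sh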

Scenario 2:

If the environment has a mixture of clients with different versions before 7.4:

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Create or load users into IdM
  3. Update clients to be at least 6.8 or 7.2
  4. Install your clients on client systems using ipa-client-install script
  5. Prepare for smart card authentication on clients
  6. Publish certificates into user entries
    1. Extract from the card
    2. Publish into IdM
  7. Test your smart card authentication on those clients

Scenario 3:

If you want to leverage trust then the sequence will be the following:

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Establish trust with AD
  3. Update clients to be at least 7.3
  4. Install your clients on client systems using ipa-client-install script
  5. Prepare for smart card authentication on clients
  6. Link your AD users with the smart cards
  7. Test your smart card authentication on those clients
    1. Console login
    2. SSH (locally)
    3. SSH (remotely)

As you can see there is unfortunately no support for trust-based smart card authentication for older 6.x clients. I was asked a question about this the other day at the Defense in Depth conference and gave an answer without checking my notes. The truth is that smart card authentication with older clients is possible only if you use IdM as the source of your users. Support for trusts would require backporting SSSD to 6.x, which would be very hard to do at this stage of Red Hat Enterprise Linux 6 support.

For more information about smart card support in identity management see the following documentation.

For more details about lower level support of the smart cards please see the following knowledge base article.

 

by Dmitri Pal at October 06, 2017 02:11 PM

September 28, 2017

Rich Megginson

How to debug "undefined method for nil:NilClass" in OpenShift Aggregated Logging

In OpenShift Aggregated Logging https://github.com/openshift/origin-aggregated-logging the Fluentd pipeline tries very hard to ensure that the data is correct, because it depends on having clean data in the output section in order to construct the index names for Elasticsearch. If the fields and values are not correct, then the index name construction will fail with an unhelpful error like this:

2017-09-28 13:22:22 -0400 [warn]: temporarily failed to flush the buffer. next_retry=2017-09-28 13:22:23 -0400 error_class="NoMethodError"
error="undefined method `[]' for nil:NilClass" plugin_id="object:1c0bd1c"
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'

There is no context about what field might be missing, what tag is matching, or even which plugin it is, the operations output or the applications output (although you do get the plugin_id, which could be used to look up the actual plugin information, if the Fluentd monitoring is enabled).
One solution is to just edit the logging-fluentd ConfigMap, and add a stdout filter in the right place:
## matches
          <filter **>
            @type stdout
          </filter>
          @include configs.d/openshift/output-pre-*.conf
          ...

and dump the time, tag, and record just before the outputs. The problem with this is that it will cause a feedback loop, since Fluentd is reading from its own pod log. The solution to this is to also throw away Fluentd pod logs.
## filters
          @include configs.d/openshift/filter-pre-*.conf
          @include configs.d/openshift/filter-retag-journal.conf
          <match kubernetes.journal.container.fluentd kubernetes.var.log.containers.fluentd**>
            @type null
          </match>

This must come after filter-retag-journal.conf, which identifies and tags Fluentd pod log records. Then restart Fluentd (oc delete pod $fluentd_pod, oc label node, etc.). The Fluentd pod log will now contain data like this:
2017-09-28 13:44:47 -0400 output_tag: {"type":"response","@timestamp":"2017-09-28T17:44:19.524989+00:00","pid":8,"method":"head","statusCode":200,
"req":{"url":"/","method":"head","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},
"res":{"statusCode":200,"responseTime":2,"contentLength":9},
"message":"HEAD / 200 2ms - 9.0B",
"docker":{"container_id":"e1cc1b22d04683645b00de53c0891e284c492358fd2830142f4523ad29eec060"},
"kubernetes":{"container_name":"kibana","namespace_name":"logging","pod_name":"logging-kibana-1-t9tvv",
"pod_id":"358622d8-a467-11e7-ab9a-0e43285e8fce","labels":{"component":"kibana","deployment":"logging-kibana-1",
"deploymentconfig":"logging-kibana","logging-infra":"kibana","provider":"openshift"},
"host":"ip-172-18-0-133.ec2.internal","master_url":"https://kubernetes.default.svc.cluster.local",
"namespace_id":"9dbd679c-a466-11e7-ab9a-0e43285e8fce"},...

Now, if you see a record that is missing @timestamp, or a record from a pod that is missing kubernetes.namespace_name or kubernetes.namespace_id, you know that the exception is caused by one of these missing fields.
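
A minimal sketch for watching those dumped records (the label selector is an assumption about how the Fluentd pods are labelled in your deployment):

# Follow the output of one Fluentd pod to inspect the dumped time, tag and record.
oc logs -f -n logging $(oc get pods -n logging -l component=fluentd -o name | head -1)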

September 28, 2017 08:07 PM

September 26, 2017

Red Hat Blog

Smart Card Support in Red Hat Enterprise Linux

Recent Red Hat Enterprise Linux releases have expanded support for smart card related use cases. However customers usually have a mixed environment and standardize on a specific version of Red Hat Enterprise Linux for a period of time. It is important to understand the evolution of the smart card related features in order to plan your deployment and understand what capabilities are available in which version of the operating system.

Understanding Smart Card Support

When we talk about smart cards support there are several dimensions that need to be considered:

  • Card type
  • Driver
  • PAM module
  • Identity mapping
  • Advanced features
  • Red Hat Enterprise Linux version

Let us first look at those areas in more details.

Card Type

There are different types of cards and they require different logic to process data on the card. There are four types of cards that are supported by Red Hat Enterprise Linux: coolkey cards, CAC, PIV and PKCS#15. The support for different types of cards has been added over time; coolkey and CAC cards have been supported since Red Hat Enterprise Linux 5. PIV and PKCS#15 were added in later releases: PIV support was added in 6.3 and then carried forward into 7.0, while PKCS#15 was added in 7.2 and was not backported to the 6.x stream. Over the course of the releases other capabilities were added, for example support for contactless cards. Different types of cards have different capabilities. For example CAC cards have one compartment where certificates can be stored while PIV cards have multiple. For more details about the differences between the types of cards please read the corresponding documentation.

Driver

For many years Red Hat Enterprise Linux provided the coolkey driver to support smart card operations. First it supported coolkey cards, then CAC, then PIV and then PKCS#15, but the number of cards that were formally supported and validated was still limited. Also, support for PIV and PKCS#15 was pretty bare bones. Red Hat Enterprise Linux 7.4 brings an alternative driver to coolkey called OpenSC. The OpenSC project supports a big variety of cards and has much better feature coverage than coolkey. However, the community version of OpenSC originally lacked support for coolkey and CAC cards. That functionality has been ported from coolkey to OpenSC, making OpenSC a full replacement for coolkey, which is now deprecated as of 7.4 (see the release notes).

PAM module

A Pluggable Authentication Module, or PAM module, is a component that sits higher in the stack than the smart card driver and is invoked to perform user authentication using the user’s smart card. It leverages the driver to interact with the card, so to a big extent the type of the card is abstracted from the PAM module, meaning that CAC and PIV cards look very similar. There might be different sets of certificates on those cards, but for a PAM module it is just a bunch of certificates that can be used for different purposes. One (or some) of those certificates can be used for authentication, and this is what the PAM module uses. To authenticate a user the PAM module needs to establish whether the user that owns the card knows the PIN. To do that the PAM module creates a challenge, encrypts it with a public key fetched from the card and calls into the card for it to use its private key to decrypt the challenge. Access to the private key is controlled by the user PIN, so only if the user types the right PIN and unlocks the private key can the card properly process and respond to the challenge. In addition to validating knowledge of the PIN, the PAM module needs to make sure that the user is still allowed to log in. The certificate might have expired, so the PAM module checks the expiration and makes sure that this is not the case. The user also might have been laid off, so his certificate might have been revoked; the PAM module needs to directly or indirectly check for such a situation too. In Red Hat Enterprise Linux the check is done by the NSS library on behalf of the PAM modules, over the OCSP protocol against an OCSP server or against a certificate revocation list (CRL) published by the CA that issued the cert.

Over a long period of time, even from the early days of Linux, there have been two PAM modules, pam_pkcs11 and pam_krb5, that were capable of interacting with smart cards and performing user authentication. However there were a lot of challenges with identity mapping, which is covered in the next section.

Starting with Red Hat Enterprise Linux 7.2 the System Security Services Daemon (SSSD) project started to add smart card related capabilities. There has been a series of releases, 7.2, 7.3 and 7.4, that implemented different features which will be covered later in this post. Features that were implemented in 7.2 were backported to 6.8. Features added in later 7.x releases were not ported back and there is no plan to do so.

Identity Mapping

Once the user is authenticated, i.e. the PAM module has established that the certificate is valid and that the user who is in possession of the card knows the PIN, the user then needs to be mapped to a POSIX user. POSIX users are the ones that a Linux system understands. Those users have an id and a login name. They run their session on the system and processes within the session are owned by the id they have. This id is called a uid. Without mapping to a POSIX user a user session can’t be started.

Challenges with Mapping

Mapping a user is in fact quite a challenge. In the simple case, when the user authenticates with a username and password, the PAM module knows the username right away. It can use a local or remote lookup to determine whether the user exists and what uid he has. In the case of smart card authentication it is not that easy. Every certificate has a couple of different attributes that might identify a user, but historically there is no standard mapping. Moreover, in many cases the identity that is baked into a certificate has no relationship to the actual user login. This is true with many government issued CAC cards: the identity on the certificate is a string that identifies the cert but not the user. So how can one decide which user to map to? Since there is no standard, developers from different projects tried their best to solve the problem.

pam_krb5

The pam_krb5 module, being a Kerberos module, looks at a special field in the certificate that should contain the user’s Kerberos principal. If it is there – the problem is solved. It can be used automatically by the PAM module to identify the user and perform Kerberos certificate based authentication, called PKINIT. Unfortunately most of the CAC cards issued over the years do not have this field populated. And this is quite understandable: government employees get their cards from a central server first and only then get into the office they will work in and register with IT. So the central server that is issuing the cards has no clue what username the user will use in his department. Without this field populated in the certificate the Kerberos server does not know how to process the user, and thus pam_krb5 can’t be used in such situations. This is quite limiting.

pam_pkcs11

The other module, pam_pkcs11, went another route. It allows the certificates to be mapped to users in a file. This file is local to the system the user logs into. This works OK for a single system but does not scale well if you have multiple people accessing the same system and a user needs to access multiple systems. Maintaining and copying such a file around is very hard, so only a brave few went with that kind of deployment, creating heavy automation using other tools and means to distribute the files.

The other approach is to store the mapping centrally, in a central identity solution. Since most of the modern identity solutions are LDAP based it made sense to store the mapping there. Luckily the standard LDAP schema allows a user to have a userCertificate attribute where the certificate can be published. So OK, let us publish a certificate into an LDAP entry and then just look up the user from the PAM module. If we find a user – great, we will use his login and uid for the POSIX user. Problem solved? Not so fast. First, it is not clear how you actually look up a user. Do you get some attribute from the cert and use it as a key in the search? Do you search by the certificate as a blob? Or do you create a hash and search by the hash? But then the server would also have to store the hashes somewhere, and there is no standard attribute for that approach. Also, how do you deal with the situation when the certificate is published into more than one user account? Which one should be selected? LDAP lookups do not guarantee an order, so if you just pick the first one you might get a different account from the previous time you tried to look the user up. Again, different solutions did things differently. Pam_pkcs11 implemented enumeration: it would iterate through all the users, get each user entry from LDAP and check the cert on the client side. That approach is extremely inefficient and was causing a lot of performance issues. In 7.2 pam_pkcs11 was fixed to do a lookup against LDAP using the whole certificate (a binary blob) as the key for the lookup.

SSSD

When SSSD started adding smart card related features in 7.2 the first thing it did was implement a binary blob lookup against a directory. It could be any directory that had a userCertificate attribute in the user entry.

This approach worked. However it had a couple of drawbacks:

  • Managing certificates in the user entry created a deployment overhead. The IT folks would have to somehow extract the certificate from the card and place it into their LDAP. How? If you have a CA that will just publish certificates into LDAP automatically you are all set, but this is not the case with many CAC and PIV based deployments, as was explained earlier.
  • Using the certificate as a blob loses some semantics. If the user loses a card he will be issued a new one. But this requires his certificate to be updated in the LDAP too. Implementing this workflow is very cumbersome for IT.
  • Finally, there was still no way to disambiguate users if the certificate is mapped to more than one account.

In 7.4 SSSD focused on resolving these issues. It turned out that it is better to have a special attribute that identifies the certificate by a combination of subject and issuer rather than storing a full certificate blob in the LDAP entry. This is what Microsoft Active Directory implemented to support a similar workflow. Such an attribute has several advantages:

  • The attribute does not require the certificate to be published into the entry.
  • Creating such an attribute is simple; it can be done by IT when they get the notification about the user joining the organization. IT can create such an attribute automatically without waiting for the user to show up and manually register, i.e. upload the certificate that he has on the card into LDAP.

SSSD, working in conjunction with the IdM team, added an even more flexible capability so that not only a combination of subject and issuer (which is the de facto standard) but other combinations of attributes can be used.
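
As a minimal sketch (names are examples), with the IdM command line shipped in 7.4 the mapping data can be attached to a user entry without publishing the whole certificate:

# Map a card certificate to the user "jdoe" by issuer and subject only.
ipa user-add-certmapdata jdoe \
    --issuer 'CN=Certificate Authority,O=EXAMPLE.COM' \
    --subject 'CN=John Doe,O=EXAMPLE.COM'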

Advanced Features

SSSD and IdM also implemented a feature allowing a user to select the account to use if the certificate is mapped to multiple accounts. Why is this important? It is important when a single identity system is used to serve multiple environments and a user has different roles in these environments. In this case each of these roles is represented by a dedicated account but the certificate information is shared between all of them.

In addition to this, SSSD in 7.4 is now capable of performing PKINIT in the same way as pam_krb5, thus surpassing pam_krb5 in pretty much all of its capabilities. As a result pam_krb5 has been deprecated. It will be supported until the end of life of Red Hat Enterprise Linux 7 but most likely will not be included in the next major release.

On top of that SSSD and IdM can now run in FIPS mode.

Last but not least, SSSD in 7.4 has logic to recognize the Kerberos principal specified in the certificate, and it can also prompt the user to explicitly enter the account that should be used rather than inferring it by doing lookups and giving the user a choice. This forces the user to decide which account he plans to use before he starts authentication, making the solution less prone to leaking information about which accounts the card is mapped to.

As you might have noticed, I skipped 7.3 in the description of the SSSD smart card features. Most of the SSSD work in 7.3 related to smart card support was to enable SSSD to work with Active Directory, or with IdM when IdM is in a trust relationship with Active Directory. The SSSD team as usual worked with IdM to allow smart card authentication for Active Directory users with certificates published into AD or with certificates managed in override user entries in IdM. More about this can be read in my post about the Red Hat Enterprise Linux 7.3 release.

Summary of the Evolution

Now when we look at the aspects of smart card support we can summarize its evolution release by release:

  • 6.0 – 6.3: card types: CAC, coolkey; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5; mapping: limited.
  • 6.3 – 6.7: card types: CAC, coolkey, PIV; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5; mapping: limited.
  • 6.8+: card types: CAC, coolkey, PIV; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5, SSSD; mapping: limited, plus by certificate blob using SSSD.
  • 7.0 – 7.1: card types: CAC, coolkey, PIV; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5; mapping: limited.
  • 7.2: card types: CAC, coolkey, PIV, PKCS#15; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5, SSSD; mapping: limited, plus by certificate blob using SSSD or pam_pkcs11; advanced features: IdM can be a server.
  • 7.3: card types: CAC, coolkey, PIV, PKCS#15; driver: coolkey; PAM modules: pam_pkcs11, pam_krb5, SSSD; mapping: limited, plus by certificate blob using SSSD or pam_pkcs11; advanced features: AD, or IdM in trust with AD, can be a server.
  • 7.4: card types: CAC, coolkey, PIV, PKCS#15; drivers: coolkey/OpenSC; PAM modules: pam_pkcs11, pam_krb5, SSSD; mapping: limited, plus by certificate blob or by subject and issuer using SSSD, or by blob using pam_pkcs11; advanced features: account selection, user hints, PKINIT, FIPS mode.

So what can we deduce from this? Well, there is a general rule: use the latest server, and depending on what versions your clients have you might get different functionality.

However it is a little bit more complex, since depending on the versions of your clients you might choose a different architecture for your deployment.

Stay tuned for more details. In the next post I will focus on specific guidelines that will help you to choose the right deployment steps depending on your situation.

by Dmitri Pal at September 26, 2017 02:33 PM

September 20, 2017

Rob Crittenden

Setting up AD for winsync testing

It had literally been years since I had to set up an AD test environment to do basic winsync testing. I found some scraggly notes and decided to transcribe them here for posterity. They were written for AD 2003, and things for 2008 are a bit different, but I still found it fairly easy to figure out (in 2008 there is less need to go to the Start menu).

I don’t in fact remember what a lot of these notes do so don’t kill the messenger.

Start with an AD 2008 instance by following http://www.freeipa.org/page/Setting_up_Active_Directory_domain_for_testing_purposes

Once that is booted:

  1. Change the hostname
  2. My Computer -> right click -> Properties -> Computer Name -> Change = win2003
  3. REBOOT
  4. Manage your Server
    1. Add or remove a role -> Next [Preliminary Steps]
    2. Custom -> Domain Controller
    3. Domain controller for a new domain
    4. Domain in a new forest
    5. Fill DNS name for new domain: example.com
  5. If conflict select Install and Configure DNS on this server
  6. REBOOT
  7. Start -> Control Panel -> Add or Remove Programs
    1. Add/Remove Windows Components
    2. Certificate Services, yes to the question
    3. Next
    4. Enterprise root CA
    5. AD CA for the common name
    6. Accept other defaults
    7. Ok about IIS
  8. REBOOT (or wait a little while for certs to issue)
  9. Start -> Admin Tools -> Certificate Authority
    1. Certificate Authority -> AD CA -> Issued Certificates
    2. Select the cert, double click
    3. Certificate Path
    4. Select AD CA, view certificate
    5. Details
    6. Copy to file
    7. Base 64-encoded x509 (.cer)
  10. Install WinSCP
  11. Copy cert to IPA

Now on the IPA master the agreement can be created:

# ipa-replica-manage connect win2003.example.com --winsync --cacert=/home/rcrit/adca.cer -v --no-lookup --binddn 'cn=administrator,cn=users,dc=example,dc=com' --bindpw <AD pw> --passsync <something>

As I recall I tended to put the AD hostname into /etc/hosts (hence the --no-lookup).

by rcritten at September 20, 2017 12:46 PM

September 18, 2017

Red Hat Blog

Evaluating Total Cost of Ownership of the Identity Management Solution

Increasing Interest in Identity Management

During the last several months I’ve seen a rapid growth of interest in Red Hat’s Identity Management (IdM) solution. This might be due to several different reasons.

  • First of all IdM has become much more mature and well known. In the past you come to a conference and talk about FreeIPA (community version of IdM) and IdM and you get a lot of people in the audience that have never heard about it. It is not the case any more. IdM, as a solution, is well known now. There are thousands of the deployments all over the world both using Red Hat supported and community bits. Many projects and open source communities implemented integration with it as an identity back end. There is no surprise that customers who are looking for a good, cost effective identity management solution are now aware of it and start considering it. This leads to questions, calls, face-to-face meetings and presentations.
  • Another reason is that IdM/FreeIPA project has been keeping an ear to the ground and was quick to adjust its plans and implement features in response to some of the tightening regulations in different verticals. Let us, for example,  consider the government space. Over the last couple of years, the policies became more strict requiring a robust solution for two-factor-authentication using CAC and PIV smart cards. IdM responded by adding support for smart cards based authentication making it easy to achieve compliance with the mentioned regulations.
  • Yet another reason is that more and more customers realize that moving to a modern Identity Management system is going to enable them to more quickly and easily transition into the age of hybrid cloud, taking advantage of both public and on premises clouds like OpenStack, and as well as to the world of containers and container management platforms like OpenShift.

Software Costs

One of the main questions people ask when they hear about the IdM solution is: Is Identity Management in Red Hat Enterprise Linux free? It is. Identity Management in Red Hat Enterprise Linux is a component of the platform and not a separately licensable product. What does this mean? This means that you can install IdM on any Red Hat Enterprise Linux server system with a valid subscription and get support from Red Hat.

There are many solutions on the market that build business around identity management services and integration with Active Directory that are not free. They require extra cost and dip into your IT budget.  Red Hat’s IdM solution is different. It is available without extra upfront cost for the software itself.

Total Cost of Ownership

People who have done identity management projects in their lives would support me in the claim that Identity Management should not be viewed as a project. It should be viewed as a program. There can be different phases, but the mindset and budgeting should assume that Identity Management is an ongoing endeavor. And it is actually quite reasonable if you think about it. Identity Management software connects to actual people and workforce dynamics. As the workforce evolves, the Identity Management software reflects the changes: growth, re-orgs, acquisitions and spin-offs. No two identity management implementations are the same. The solution has to adapt to a long list of use cases and be capable of unique requirements of every deployment. On one hand, the solution has to work all the time, and on the other hand, its limits are constantly stretched.

During my visits, I also help to architect a solution if customers are interested in quick “on the fly” white-boarding suggestions. Such designs need to be taken with a grain of salt as drive-by architecture usually considers the main technical requirements outlined during the discussion but does not consider hidden challenges and roadblocks that each organization has. So the suggested architecture should be viewed as a very rough draft and something to start thinking about rather than a precise blueprint that can be followed to a letter. After the first conversation it is recommended to read various publicly available materials. Red Hat documentation and man pages are good sources of information as well as the community project wikis for FreeIPA and SSSD. Identity Management documentation is very well maintained and regularly updated to reflect new changes or address reported issues.

In addition to reading documentation one can engage Red Hat professional services to help with a proof-of-concept or production deployment. Those services are priced per engagement. There are different pre-packaged offerings with the predefined results that you can purchase from Red Hat – just get in touch with your sales representative or technical account manager.

No matter what software you choose for your identity management solution, it makes sense to have someone on the vendor side who will be there to help with any issues and challenges you face, to connect you to experts and to reduce your downtime. Red Hat offers multiple tiers of support. One level includes a Technical Account Manager. More about TAM program can be read here. Since Identity Management should be viewed as an ongoing process and effort it makes sense to consider a TAM or the equivalent service from your vendor. Is it an extra cost? Yes but it is independent from the solution you choose. It is just a good risk mitigation strategy that makes your money work for you with the best possible return.

As always your comments and feedback are very welcome.

by Dmitri Pal at September 18, 2017 02:19 PM

September 11, 2017

Florence Blanc-Renaud

Troubleshooting FreeIPA: pki-tomcatd fails to start

When performing an upgrade of FreeIPA, you may encounter an issue with pki-tomcatd failing to start. At first this issue looks related to the upgrade, but it often reveals a latent problem that gets detected only because the upgrade triggers a restart of pki-tomcatd.

So how to troubleshoot this type of issue?

 

Upgrade logs

The upgrade logs to /var/log/ipaupgrade.log, which may contain a lot of useful information. In this specific case, I could see:

[...] DEBUG The ipa-server-upgrade command failed,
exception: ScriptError: CA did not start in 300.0s
[...] ERROR CA did not start in 300.0s
[...] ERROR The ipa-server-upgrade command failed. See
/var/log/ipaupgrade.log for more information

 

CA debug logs

The first step is to understand why pki-tomcatd refuses to start. This process is launched inside Tomcat and corresponds to the CA component of FreeIPA. It logs to /var/log/pki/pki-tomcat/ca/debug:

[...][localhost-startStop-2]: ============================================
[...][localhost-startStop-2]: ===== DEBUG SUBSYSTEM INITIALIZED =======
[...][localhost-startStop-2]: ============================================
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=debug
[...][localhost-startStop-2]: CMSEngine: initialized debug
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=log
[...][localhost-startStop-2]: CMSEngine: ready to init id=log
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/signedAudit/ca_audit)
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/system)
[...][localhost-startStop-2]: Creating RollingLogFile(/var/lib/pki/pki-tomcat/logs/ca/transactions)
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=log
[...][localhost-startStop-2]: CMSEngine: initialized log
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=jss
[...][localhost-startStop-2]: CMSEngine: ready to init id=jss
[...][localhost-startStop-2]: CMSEngine: restart at autoShutdown? false
[...][localhost-startStop-2]: CMSEngine: autoShutdown crumb file path? /var/lib/pki/pki-tomcat/logs/autoShutdown.crumb
[...][localhost-startStop-2]: CMSEngine: about to look for cert for auto-shutdown support:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: found cert:auditSigningCert cert-pki-ca
[...][localhost-startStop-2]: CMSEngine: done init id=jss
[...][localhost-startStop-2]: CMSEngine: initialized jss
[...][localhost-startStop-2]: CMSEngine: initSubsystem id=dbs
[...][localhost-startStop-2]: CMSEngine: ready to init id=dbs
[...][localhost-startStop-2]: DBSubsystem: init() mEnableSerialMgmt=true
[...][localhost-startStop-2]: Creating LdapBoundConnFactor(DBSubsystem)
[...][localhost-startStop-2]: LdapBoundConnFactory: init
[...][localhost-startStop-2]: LdapBoundConnFactory:doCloning true
[...][localhost-startStop-2]: LdapAuthInfo: init()
[...][localhost-startStop-2]: LdapAuthInfo: init begins
[...][localhost-startStop-2]: LdapAuthInfo: init ends
[...][localhost-startStop-2]: init: before makeConnection errorIfDown is true
[...][localhost-startStop-2]: makeConnection: errorIfDown true
[...][localhost-startStop-2]: TCP Keep-Alive: true
[...][localhost-startStop-2]: SSLClientCertificateSelectionCB: Setting desired cert nickname to: subsystemCert cert-pki-ca
[...][localhost-startStop-2]: LdapJssSSLSocket: set client auth cert nickname subsystemCert cert-pki-ca
[...][localhost-startStop-2]: SSL handshake happened
Could not connect to LDAP server host ipaserver.ipadomain.com port 636 Error netscape.ldap.LDAPException: Authentication failed (49)
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.makeConnection(LdapBoundConnFactory.java:205)
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.init(LdapBoundConnFactory.java:166)
 at com.netscape.cmscore.ldapconn.LdapBoundConnFactory.init(LdapBoundConnFactory.java:130)
 at com.netscape.cmscore.dbs.DBSubsystem.init(DBSubsystem.java:654)
 at com.netscape.cmscore.apps.CMSEngine.initSubsystem(CMSEngine.java:1172)
 at com.netscape.cmscore.apps.CMSEngine.initSubsystems(CMSEngine.java:1078)
 at com.netscape.cmscore.apps.CMSEngine.init(CMSEngine.java:570)
 at com.netscape.certsrv.apps.CMS.init(CMS.java:188)
 at com.netscape.certsrv.apps.CMS.start(CMS.java:1621)
 at com.netscape.cms.servlet.base.CMSStartServlet.init(CMSStartServlet.java:114)
 at javax.servlet.GenericServlet.init(GenericServlet.java:158)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:288)
 at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:285)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAsPrivileged(Subject.java:549)
 at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:320)
 at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:175)
 at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:124)
 at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1270)
 at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1195)
 at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1085)
 at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5318)
 at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5610)
 at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
 at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
 at org.apache.catalina.core.ContainerBase.access$000(ContainerBase.java:133)
 at org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:156)

The exception shows that LDAP authentication failed with return code 49: invalid credentials.

 

Communication between pki-tomcatd and the LDAP server

We can see that pki-tomcatd is trying to open an LDAP connection through SSL but fails to authenticate. Within FreeIPA, pki-tomcat stores its data in the 389-ds LDAP server and needs to communicate with that server over LDAP.

The configuration of this communication channel can be read in /etc/pki/pki-tomcat/ca/CS.cfg:

internaldb.ldapauth.authtype=SslClientAuth
internaldb.ldapauth.bindDN=cn=Directory Manager
internaldb.ldapauth.bindPWPrompt=internaldb
internaldb.ldapauth.clientCertNickname=subsystemCert cert-pki-ca
internaldb.ldapconn.host=ipaserver.ipadomain.com
internaldb.ldapconn.port=636
internaldb.ldapconn.secureConn

The connection uses port 636 (the SSL port) with SSL client authentication (authtype=SslClientAuth). This means that pki-tomcatd presents a certificate to the LDAP server, and the LDAP server maps this certificate to a user in order to authenticate the connection.

Note: Authtype can either be SslClientAuth or BasicAuth (authentication with username and password).

In this case, the SSL client authentication is done with the certificate named ‘subsystemCert cert-pki-ca‘ stored in /etc/pki/pki-tomcat/alias. So what could be causing the authentication to fail? We need to check that the certificate is present in /etc/pki/pki-tomcat/alias, that pki-tomcat is able to use the associated private key, and that the LDAP server is able to map this certificate to a user.


Check the subsystemCert cert-pki-ca

The first step is to make sure that this certificate is present in /etc/pki/pki-tomcat/alias:

$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'
Certificate:
 Data:
 Version: 3 (0x2)
...


Then make sure that the private key can be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (the value of the internal=… tag):

$ sudo grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
$ sudo certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
certutil: Checking token "NSS Certificate DB" in slot "NSS User Private Key and Certificate Services"
< 0> rsa 86a7fe00cc2a01ad085f35d4ed3e84e7b82ab4f5 subsystemCert cert-pki-ca

At this point we know that pki-tomcat is able to access the certificate and the private key. So the issue is likely to be on the LDAP server side.


LDAP server configuration

The LDAP server configuration in /etc/dirsrv/slapd-IPADOMAIN-COM/certmap.conf describes how a certificate is mapped to a user:

$ sudo cat /etc/dirsrv/slapd-IPADOMAIN-COM/certmap.conf 
[...]
certmap default default
[...]
default:DNComps
default:FilterComps uid
certmap ipaca CN=Certificate Authority,O=IPADOMAIN.COM
ipaca:CmapLdapAttr seeAlso
ipaca:verifycert on

This means that when the LDAP server receives an authentication request with a certificate issued by the CA CN=Certificate Authority,O=IPADOMAIN.COM, it will look for users that contain a seeAlso attribute equal to the subject of the certificate, and the user entry must contain the certificate in the usercertificate attribute (verifycert: on).

With a default config, the ‘subsystemCert cert-pki-ca‘ is mapped to the user uid=pkidbuser,ou=people,o=ipaca. So let’s compare the user entry and the certificate:

$ ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso
Enter LDAP Password: 
dn: uid=pkidbuser,ou=people,o=ipaca
userCertificate:: MIID...uwab3
description: 2;4;CN=Certificate Authority,O=IPADOMAIN.COM;CN=CA Subsystem,O=IPADOMAIN.COM
seeAlso: CN=CA Subsystem,O=IPADOMAIN.COM


$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca' -a
-----BEGIN CERTIFICATE-----
MIID...e5QAR
-----END CERTIFICATE-----

The certificate in the userCertificate attribute is different from the one in the NSS database! This can also be seen by comparing the serial number of the certificate in the NSS database with the value stored in the LDAP entry (the description attribute has the format 2;<serial>;<issuer>;<subject>):

$ sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca' | grep Serial
 Serial Number: 1341718541 (0x4ff9000d)

This explains why pki-tomcat could not authenticate to the LDAP server. The fix is to update the LDAP entry with the right certificate (and do not forget to update the description attribute with the right serial number!).
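
As a rough sketch of that fix (assuming the default paths and entry shown above; the serial number 1341718541 is taken from the certutil output earlier, and the whole LDIF should be adapted to your deployment), one could export the certificate that pki-tomcat actually uses and push it back into the pkidbuser entry:

# export the certificate currently used by pki-tomcat, in DER form
$ sudo certutil -L -d /etc/pki/pki-tomcat/alias \
    -n 'subsystemCert cert-pki-ca' -r > /tmp/subsystem.der

# replace the stale attributes on the pkidbuser entry
$ ldapmodify -D 'cn=directory manager' -W <<EOF
dn: uid=pkidbuser,ou=people,o=ipaca
changetype: modify
replace: userCertificate
userCertificate:< file:///tmp/subsystem.der
-
replace: description
description: 2;1341718541;CN=Certificate Authority,O=IPADOMAIN.COM;CN=CA Subsystem,O=IPADOMAIN.COM
EOF

After the update, restarting pki-tomcatd gives it a chance to bind again with the now-matching certificate.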

But we still do not know the root cause of the inconsistency between the NSS database /etc/pki/pki-tomcat/alias and the user entry for uid=pkidbuser. That could be the subject of another blog post (for the impatient: the automatic renewal of the certificate failed to update the LDAP server entry…)


by floblanc at September 11, 2017 03:05 PM

September 07, 2017

Red Hat Blog

Discovery and Affinity

Questions related to DNS and service discovery regularly come up during deployments of Identity Management (IdM) in Red Hat Enterprise Linux in a trust configuration with Active Directory. This blog article will shed some light on this aspect of the integration.

We will start with a description of the environment. Let us say that the Active Directory environment consists of four servers split into pairs between two datacenters in two different geographic regions, with a low-bandwidth, high-latency network between them. For the sake of this article it does not really matter whether there are several domains or not. Of course, factoring in the hierarchy of domains and the replication agreements these servers have matters, but for the discovery and affinity conversation we will assume that all four servers represent one AD domain and that replication is working properly, so the content of the servers is the same. We will also introduce four IdM servers, two in each datacenter. There is a one-way trust relationship established between the IdM and Active Directory forests (IdM trusts AD). Finally, there is a Linux client in each datacenter. This client represents all the Linux clients in that datacenter, assuming they are configured in the same way. The following diagram represents this setup.

The horizontal lines between AD and IdM servers denote replication between those servers and yellow arrows show one-way trust.


Communication flows

Questions about affinity and discovery arise mostly in setups like this, which are in fact quite popular. So how exactly can I configure the client to prefer local servers? The question is actually more complex than that. Let us look under the hood and see what communication is actually going on.

On the next diagram we will remove the replication and trust lines and arrows to make the diagram cleaner and more readable, but please keep in mind that they are still there.

Kerberos

The first diagram shows the Kerberos protocol communication that is used for authentication of the users as well as of the client itself. As you can see, clients can talk to any of the AD servers and to any of the IdM servers. It is also important to understand that there is a client component that runs on the IdM servers themselves. Those “embedded” clients are used to pull identity information from Active Directory, so they also need to be able to connect to the right AD servers.

LDAP

In addition to the communication over Kerberos protocol there is a communication over LDAP protocol. The following diagram shows the details.

The difference is that the client only talks LDAP to IdM servers; however, the “embedded” client on the IdM server can talk to any of the AD servers.

So the task of preventing the communication from going over the slow, high-latency network to the other datacenter boils down to the following list of items:

  • Make sure that the client talks to local IdM servers over LDAP
  • Make sure that the client talks to local IdM servers over Kerberos
  • Make sure that the client talks to local AD servers over Kerberos
  • Make sure that the embedded client on the IdM server talks to local AD servers over Kerberos
  • Make sure that the embedded client on the IdM server talks to local AD servers over LDAP


So, to accomplish the goal, LDAP and Kerberos configuration needs to be considered both on the clients and on the IdM servers.

There is one more factor that we have not brought up yet. Active Directory has a notion of sites. An Active Directory client usually detects the presence of a site definition by looking up information in LDAP and then resolving the actual servers using DNS. If a site is configured, the DNS lookup will return the list of local AD servers for the client to connect to. Let us assume that the AD servers in each of the two datacenters form a site.

There is a similar capability in IdM but only if the IdM DNS is used. Now we have all the permutations of the setup established and can jump to the configuration options.


Configuring LDAP on the client

IdM DNS is used

Lookup of the corresponding services can be done automatically based on DNS. Clients will use this method by default unless it is explicitly overridden during ipa-client installation by providing failover parameters on the command line, or if the SSSD configuration is modified after the installation. See the next section for more details about altering the configuration and not using DNS discovery.

If the default configuration is chosen and not overridden, and IdM DNS is used, then the client can take advantage of the feature called DNS locations, which is similar in concept to AD sites. The client is expected to have the IdM DNS server configured in resolv.conf. If locations are configured in IdM, the client will always receive the local servers when it asks DNS for the SRV records of an IdM service. For more information on how to configure this feature see the Linux Domain Identity, Authentication, and Policy Guide.
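
For illustration only (the location and server names below are hypothetical; the authoritative procedure is in the guide mentioned above), defining locations and assigning IdM servers to them looks roughly like this:

$ ipa location-add datacenter-east
$ ipa location-add datacenter-west
$ ipa server-mod idm1.ipadomain.com --location=datacenter-east
$ ipa server-mod idm3.ipadomain.com --location=datacenter-west

Clients that resolve the IdM SRV records through a server in a given location are then answered with that location's servers first.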

IdM DNS is not used

If IdM DNS is not used, the chosen DNS server will not have the appropriate information about the IdM server topology, and thus the service discovery method can’t be used. Clients have to be explicitly configured with the failover information. This is done by configuring SSSD on the client with a list of primary and backup servers. One can read more about the details of this configuration in the sssd-ldap man page (look for the Failover section). Besides changing the SSSD configuration, one can use additional arguments during ipa-client installation to provide an explicit list of primary and secondary servers the client should connect to.
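
A minimal sketch of such an explicit configuration on the client (domain and server names are hypothetical; see the sssd-ipa and sssd-ldap man pages for the authoritative option descriptions):

[domain/ipadomain.com]
id_provider = ipa
# local IdM servers, tried first
ipa_server = idm1.ipadomain.com, idm2.ipadomain.com
# remote IdM servers, used only when the primary servers are unreachable
ipa_backup_server = idm3.ipadomain.com, idm4.ipadomain.com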


Configuring Kerberos on the client

There are two parts of the Kerberos configuration. One is related to the communication with IdM servers and another to communication with AD servers.

For communication with IdM servers the client will use the same information described in the previous section, so the discovery or explicit configuration applies to the Kerberos communication too. However, communication with AD requires special handling. There is currently no way for the client to automatically discover the right AD servers to talk to, so the only way to limit clients to specific AD servers is to alter the krb5.conf configuration to add the AD realm information and explicitly list the AD servers to talk to. For more information on how to do this, see the krb5.conf man page.
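
A sketch of such a krb5.conf fragment, with a hypothetical AD realm and hypothetical server names, would look like this:

[realms]
  AD.EXAMPLE.COM = {
    # list only the AD domain controllers in the local datacenter
    kdc = dc1.ad.example.com
    kdc = dc2.ad.example.com
  }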

There is an outstanding feature request that the SSSD project is aware of. It is on the roadmap, but as of the time of writing, automatic discovery of the AD servers in the same location is not yet implemented.


Configuring embedded client on the server for Kerberos and LDAP

The embedded client on the IdM server is configured as a client of the Active Directory server. For several releases it has been capable of discovering AD sites and using them. However, to do the discovery, the first lookup might hit an AD server in the remote location that is not accessible or is reachable only over the slow network. In such situations the dns_resolver_timeout should be increased so that the embedded client does not give up too early. For clarity, the man pages were recently updated to better explain the meaning and use of this timeout. These changes did not make Red Hat Enterprise Linux 7.4 but will be included in the next minor release. There is also a plan to increase the default value of the timeout in the next community release, which might be included in the next Red Hat Enterprise Linux minor release.
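
As a sketch (the domain name is hypothetical and the value is only an example; consult the sssd.conf man page for the current semantics and default), the timeout is raised in the IdM domain section of the server's sssd.conf:

[domain/ipadomain.com]
# allow slower DNS lookups to complete before SSSD fails over
dns_resolver_timeout = 10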

Before 7.4, the client on the IdM server used an automated discovery algorithm to determine the right AD site and AD servers to use. In some cases autodiscovery did not return the right information, for various reasons: in real deployments servers are added and removed, and in some cases discovery would return an AD server that had been decommissioned or was completely firewalled.

To overcome this issue a new feature was added in a recent community release and made its way into Red Hat Enterprise Linux 7.4. It allows explicit overrides of some of the otherwise automatically discovered information. The SSSD site describes the related changes in detail, and the corresponding changes have been made to the SSSD man pages. Unfortunately, a separate chapter of the Red Hat documentation covering configuration of failover and domain discovery in complex setups like this was not yet available on the Red Hat web site at the time of writing. The corresponding team is actively working on it, so by the time you read this you will most likely be able to find such a chapter in the Windows Integration Guide.

If you are running an IdM version that has not yet been updated to 7.4, you can use the same failover-related overrides for the AD servers as those described in the sssd-ldap man page for IdM servers. Those settings will explicitly pin the client on the IdM server to the specific set of AD servers you choose, without respect to sites. This, however, might not work well when you have a complex hierarchy of domains, since there is no way to express which preferred servers belong to which domain. The general recommendation is to upgrade your IdM server to 7.4 and use the ability to pin the client running on the server to a specific AD site.
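
On 7.4, a sketch of such a pin (domain and site names are hypothetical; the per-trusted-domain section and the ad_site/ad_server options are the ones documented by the SSSD project for this feature) goes into the IdM server's sssd.conf:

# settings for the trusted AD domain, nested under the IdM domain section name
[domain/ipadomain.com/ad.example.com]
# prefer the AD site that is local to this datacenter ...
ad_site = Datacenter-East
# ... or pin explicit AD servers instead of relying on site discovery
#ad_server = dc1.ad.example.com, dc2.ad.example.com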


Summary

As you can see, getting the failover configuration right is not simple. But now you have all the moving parts covered and can make sure that all the services in one datacenter talk only to the services in that same datacenter.

As always I will be looking forward to your questions and comments as well as suggestions regarding next blog topics.

by Dmitri Pal at September 07, 2017 01:05 PM

September 04, 2017

Fraser Tweedale

Running Keycloak in OpenShift

At PyCon Australia in August I gave a presentation about federated and social identity. I demonstrated concepts using Keycloak, an Open Source, feature rich identity broker. Keycloak is deployed in JBoss, so I wasn’t excited about the prospect of setting up Keycloak from scratch. Fortunately there is an official Docker image for Keycloak, so with that as the starting point I took an opportunity to finally learn about OpenShift v3, too.

This post is simply a recounting of how I ran Keycloak on OpenShift. Along the way we will look at how to get the containerised Keycloak to trust a private certificate authority (CA).

One thing that is not discussed is how to get Keycloak to persist configuration and user records to a database. This was not required for my demo, but it will be important in a production deployment. Nevertheless I hope this article is a useful starting point for someone wishing to deploy Keycloak on OpenShift.

Bringing up a local OpenShift cluster

To deploy Keycloak on OpenShift, one must first have an OpenShift. OpenShift Online is Red Hat’s public PaaS offering. Although running the demo on a public PaaS was my first choice, OpenShift Online was experiencing issues at the time I was setting up my demo. So I sought a local solution. This approach would have the additional benefit of not being subject to the whims of conference networks (or, it was supposed to – but that is a story for another day!)

oc cluster up

Next I tried oc cluster up. oc is the official OpenShift client program; on Fedora, it is provided by the origin-clients package. The oc cluster up command pulls the required images and brings up an OpenShift cluster running on the system’s Docker infrastructure. The command takes no further arguments; it really is that simple! Or is it…?

% oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.0 image ...
   Pulling image openshift/origin:v1.5.0
   Pulled 0/3 layers, 3% complete
   ...
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... FAIL
   Error: did not detect an --insecure-registry argument on the Docker daemon
   Solution:

     Ensure that the Docker daemon is running with the following argument:
        --insecure-registry 172.30.0.0/16

OK, so it is not that simple. But it got a fair way along, and (kudos to the OpenShift developers) they have provided actionable feedback about how to resolve the issue. I added --insecure-registry 172.30.0.0/16 to the OPTIONS variable in /etc/sysconfig/docker, then restarted Docker and tried again:

% oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.0 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using 192.168.0.160 as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://192.168.0.160:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

Success! Unfortunately, on my machine with several virtual networks, oc cluster up messed a bit too much with the routing tables, and when I deployed Keycloak on this cluster it was unable to communicate with my VMs. No doubt these issues could have been solved, but being short on time and with other approaches to try, I abandoned this one.

Minishift

Minishift is a tool that launches a single-node OpenShift cluster in a VM. It supports a variety of operating systems and hypervisors. On GNU+Linux it supports KVM and VirtualBox.

First install docker-machine and docker-machine-driver-kvm (follow the instructions at the preceding links). Unfortunately, these are not yet packaged for Fedora.

Download and extract the Minishift release for your OS from https://github.com/minishift/minishift/releases.

Run minishift start:

% ./minishift start
-- Installing default add-ons ... OK
Starting local OpenShift cluster using 'kvm' hypervisor...
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.0.2/minishift-b2d.iso'

... wait a while ...

It downloads a boot2docker VM image containing the OpenShift cluster, boots the VM, and the console output then resembles the output of oc cluster up. I deduce that oc cluster up is being executed in the VM.

At this point, we’re ready to go. Before I continue, it is important to note that once you have access to an OpenShift cluster, the user experience of creating and managing applications is essentially the same. The commands in the following sections are relevant regardless of whether you are running your app on OpenShift Online, on a cluster running on your workstation, or anything in between.

Preparing the Keycloak image

The JBoss project provides official Docker images, including an official Docker image for Keycloak. This image runs fine in plain Docker but the directory permissions are not correct for running in OpenShift.

The Dockerfile for this image is found in the jboss-dockerfiles/keycloak repository on GitHub. Although they do not publish an official image for it, this repository also contains a Dockerfile for Keycloak on OpenShift! I was able to build that image myself and upload it to my Docker Hub account. The steps were as follows.

First clone the jboss-dockerfiles repo:

% git clone https://github.com/jboss-dockerfiles/keycloak docker-keycloak
Cloning into 'docker-keycloak'...
remote: Counting objects: 1132, done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 1132 (delta 14), reused 17 (delta 8), pack-reused 1102
Receiving objects: 100% (1132/1132), 823.50 KiB | 158.00 KiB/s, done.
Resolving deltas: 100% (551/551), done.
Checking connectivity... done.

Next build the Docker image for OpenShift:

% docker build docker-keycloak/server-openshift
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM jboss/keycloak:latest
 ---> fb3fc6a18e16
Step 2 : USER root
 ---> Running in 21b672e19722
 ---> eea91ef53702
Removing intermediate container 21b672e19722
Step 3 : RUN chown -R jboss:0 $JBOSS_HOME/standalone &&     chmod -R g+rw $JBOSS_HOME/standalone
 ---> Running in 93b7d11f89af
 ---> 910dc6c4a961
Removing intermediate container 93b7d11f89af
Step 4 : USER jboss
 ---> Running in 8b8ccba42f2a
 ---> c21eed109d12
Removing intermediate container 8b8ccba42f2a
Successfully built c21eed109d12

Finally, tag the image into the repo and push it:

% docker tag c21eed109d12 registry.hub.docker.com/frasertweedale/keycloak-openshift

% docker login -u frasertweedale registry.hub.docker.com
Password:
Login Succeeded

% docker push registry.hub.docker.com/frasertweedale/keycloak-openshift
... wait for upload ...
latest: digest: sha256:c82c3cc8e3edc05cfd1dae044c5687dc7ebd9a51aefb86a4bb1a3ebee16f341c size: 2623

Adding CA trust

For my demo, I used a local FreeIPA installation to issue TLS certificates for the Keycloak app. I was also going to carry out a scenario where I configure Keycloak to use that FreeIPA installation’s LDAP server to authenticate users. Because I wanted to use TLS everywhere (eat your own dog food!), I needed the Keycloak application to trust the CA of one of my local FreeIPA installations. This made it necessary to build another Docker image based on the keycloak-openshift image, with the appropriate CA trust built in.

The content of the Dockerfile is:

FROM frasertweedale/keycloak-openshift:latest
USER root
COPY ca.pem /etc/pki/ca-trust/source/anchors/ca.pem
RUN update-ca-trust
USER jboss

The file ca.pem contains the CA certificate to add. It must be in the same directory as the Dockerfile. The build copies the CA certificate to the appropriate location and executes update-ca-trust to ensure that applications – including Java programs – will trust the CA.
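
The build itself follows the same pattern as before; a sketch, assuming the Dockerfile and ca.pem sit in the current directory:

% docker build .
... same kind of output as the earlier build, ending with the new image ID ...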

Following the docker build I tagged the new image into my hub.docker.com repository (tag: f25-ca) and pushed it. And with that, we are ready to deploy Keycloak on OpenShift.

Creating the Keycloak application in OpenShift

At this point we have a local OpenShift cluster (via Minishift) and a Keycloak image (frasertweedale/keycloak-openshift:f25-ca) to deploy. When deploying the app we need to set some environment variables:

KEYCLOAK_USER=admin

A username for the Keycloak admin account to be created

KEYCLOAK_PASSWORD=secret123

Passphrase for the admin user

PROXY_ADDRESS_FORWARDING=true

Because the application will be running behind OpenShift’s HTTP proxy, we need to tell Keycloak to use the "external" hostname when creating hyperlinks, rather than Keycloak’s own view.

Use the oc new-app command to create and deploy the application:

% oc new-app --docker-image frasertweedale/keycloak-openshift:f25-ca \
    --env KEYCLOAK_USER=admin \
    --env KEYCLOAK_PASSWORD=secret123 \
    --env PROXY_ADDRESS_FORWARDING=true
--> Found Docker image 45e296f (4 weeks old) from Docker Hub for "frasertweedale/keycloak-openshift:f25-ca"

    * An image stream will be created as "keycloak-openshift:f25-ca" that will track this image
    * This image will be deployed in deployment config "keycloak-openshift"
    * Port 8080/tcp will be load balanced by service "keycloak-openshift"
      * Other containers can access this service through the hostname "keycloak-openshift"

--> Creating resources ...
    imagestream "keycloak-openshift" created
    deploymentconfig "keycloak-openshift" created
    service "keycloak-openshift" created
--> Success
    Run 'oc status' to view your app.

The app gets created immediately but it is not ready yet. The download of the image and deployment of the container (or pod in OpenShift / Kubernetes terminology) will proceed in the background.

After a little while (depending on how long it takes to download the ~300MB Docker image) oc status will show that the deployment is up and running:

% oc status
In project My Project (myproject) on server https://192.168.42.214:8443

svc/keycloak-openshift - 172.30.198.217:8080
  dc/keycloak-openshift deploys istag/keycloak-openshift:f25-ca 
    deployment #2 deployed 3 minutes ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

(In my case, the first deployment failed because the 10-minute timeout elapsed before the image download completed; hence deployment #2 in the output above.)

Creating a secure route

Now the Keycloak application is running, but we cannot reach it from outside the OpenShift cluster. In order to be able to reach it there must be a route. The oc create route command lets us create a route that uses TLS (so clients can authenticate the service). We will use the domain name keycloak.ipa.local. The public/private keypair and certificate have already been generated (how to do that is outside the scope of this article). The certificate was signed by the CA we added to the image earlier. The service name – visible in the oc status output above – is svc/keycloak-openshift.

% oc create route edge \
  --service svc/keycloak-openshift \
  --hostname keycloak.ipa.local \
  --key /home/ftweedal/scratch/keycloak.ipa.local.key \
  --cert /home/ftweedal/scratch/keycloak.ipa.local.pem
route "keycloak-openshift" created

Assuming there is a DNS entry pointing keycloak.ipa.local to the OpenShift cluster, and that the system trusts the CA that issued the certificate, we can now visit our Keycloak application:

% curl https://keycloak.ipa.local/
<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>
<head>
    <meta http-equiv="refresh" content="0; url=/auth/" />
    <meta name="robots" content="noindex, nofollow">
    <script type="text/javascript">
        window.location.href = "/auth/"
    </script>
</head>
<body>
    If you are not redirected automatically, follow this <a href='/auth'>link</a>.
</body>
</html>

If you visit in a browser, you will be able to log in using the admin account credentials specified in the KEYCLOAK_USER and KEYCLOAK_PASSWORD environment variables specified when the app was created. And from there you can create and manage authentication realms, but that is beyond the scope of this article.

Conclusion

In this post I discussed how to run Keycloak in OpenShift, from bringing up an OpenShift cluster to building the Docker image and creating the application and route in OpenShift. I recounted that I found OpenShift Online unstable at the time I tried it, and that although oc cluster up did successfully bring up a cluster I had trouble getting the Docker and VM networks to talk to each other. Eventually I tried Minishift which worked well.

We saw that although there is no official Docker image for Keycloak in OpenShift, there is a Dockerfile that builds a working image. It is easy to further extend the image to add trust for private CAs.

Creating the Keycloak app in OpenShift, and adding the routes, is straightforward. There are a few important environment variables that must be set. The oc create route command was used to create a secure route to access the application from the outside.

We did not discuss how to set up Keycloak with a database for persisting configuration and user records. The deployment we created is ephemeral. This satisfied my needs for demonstration purposes but production deployments will require persistence. There are official JBoss Docker images that extend the base Keycloak image and add support for PostgreSQL, MySQL and MongoDB. I have not tried these but I’d suggest starting with one of these images if you are looking to do a production deployment. Keep in mind that these images may not include the changes that are required for deploying in OpenShift.

by ftweedal at September 04, 2017 06:26 AM

August 30, 2017

Alexander Bokovoy

Flock 2017 day one

I’m attending Flock 2017, the annual conference of the Fedora Project. This year it takes place on the Cape Cod peninsula in Massachusetts, U.S. The conference started on August 29th at a local resort and conference center in Hyannis, a town with a history, most commonly known for its JFK legacy.

This year Flock is more action oriented. Many talks are in fact collaborations where people discuss and hack together rather than being lectured. However, there are plenty of talks that allow others to digest what’s happening within fast-moving projects in the Fedora universe.

While containers and modularity were topics discussed almost universally, I’d like to highlight two talks from day one.

The most entertaining talk of the day was “Fedora Legal - This is why I drink” by Tom Callaway. Tom explained the past, present, and future of the life he has to live as Fedora’s (para)legal person. He made clear that he is not a lawyer and disclaimed almost everything you might think of, including taking anything he said as any kind of legal advice. Still, his review of what was done and what is expected to happen in the near and more distant future was valuable.

Fedora was one of the early distributions that went through a review and standardization of its licensing needs. It took some time to review 350 different free software and open source licenses from the packages in Fedora, but Tom did build a list of them, both the good ones and the bad ones. The latter are the ones which aren’t acceptable in Fedora anymore.

This work tends to go unnoticed by users. Over the years it also took quite some time to work with various open source projects and get them to realise where their licensing was preventing better collaboration for their users. For example, many font designers have fun producing free fonts and putting them into the hands of users, but sometimes they have terrible ideas about how copyright law works. Part of Tom’s effort was to make it possible to package more fonts in Fedora while at the same time helping those font designers to improve their licensing. Other examples given during the talk were the gradual clean-up of Artistic License 1.0-licensed projects from CPAN, cleaning up the TeXLive distribution, and the replacement of the Fedora individual contributor licensing agreement with the current Fedora Project Contributor Agreement, which doesn’t require any copyright assignment.

An interesting topic Tom reflected on was the consideration of patents when deciding whether a certain technology can be packaged in Fedora. Fedora finally has full support for MP3 now that all known related patents have expired. However, it took quite a lot of time to analyze the expiration terms for many of them. It also took a lot of time to find out which elliptic curves can be implemented and packaged in Fedora – ranging from six to ten years depending on the curve.

Tom noted that he has calendar reminders set up for some known patents whose expiration he tracks. These define an interesting schedule for him and even surprise him occasionally, when an alarm comes in about the expiration of a patent he had forgotten about.

All these achievements weren’t easy. When two lawyers cannot agree on how to interpret a text written by other lawyers, engineers have a hard time navigating the minefield. The work behind the scenes in defining a set of rules that can be understood by ordinary people, written in easy-to-digest English, is impressive. The talk title was mostly a joke – in the end, we still don’t know why Tom thinks the job could drive him to drink, or what exactly the beverages are. As with most legal texts, it was left open to interpretation – as was a bottle of local wine gifted to Tom by another Flock attendee.

State of Fedora Server

Stephen Gallagher ran his annual review of what’s happening in the Fedora Server world. Fedora Server work started after we successfully ran a focused experiment with Fedora Workstation as a focused product, with system-wide attention to simplifying typical configuration tasks and improving the usability of the Linux desktop. Fedora Server was an attempt to similarly provide an improved experience for administrators coming from a non-UNIX background, based on feedback that Red Hat tracks as part of its support operations.

Fedora’s server roles concept is now deprecated. The Rolekit package will be removed in Fedora 28 or thereafter. The focus has changed to providing a better experience with Cockpit server apps. These apps are full-fledged Cockpit plugins. Rolekit-based deployment covered only simple scenarios; Cockpit apps can and should get a better grasp of what is happening in the system and across a server fleet, since Cockpit allows browsing and commanding multiple machines. We are currently working on a FreeIPA Cockpit app that would make it easier to decide which options to use for the actual enrollment of FreeIPA domain controllers.

Another example is FleetCommander. The FleetCommander Cockpit app is the real deal: it combines an interface for configuring desktop profiles with rules for how to apply them in FreeIPA. At the same time it configures an actual virtual machine where a desktop profile can be set up and tuned. The FleetCommander Cockpit app gives a rich experience that would not be possible with Rolekit alone.

During the questions-and-answers session of Stephen’s talk we also tried to find a concept that would help explain why modularity is needed for spins like Fedora Server. A helpful analogy is a factory conveyor where multiple components are assembled together to produce one of the many pre-defined models that the factory is churning out. When you own the equipment, tools, and processes that define your factory, it is relatively easy to control all the options. However, if your factory is a franchise, the ability to re-define certain steps or tools, to deviate in colors, and to respond to customer demand is the key. The Fedora project is a community with a long history of creating spins and “deviations” from the “core”. The modularity project, while it started as something different, is now bringing a lot of this flexibility to all community partners. Many problems were identified over the years in the existing tooling around RPM management and distribution image composition; the modularity project attempts to solve them with a different view on how to attack them. It is an important initiative that gives us a way to improve collaboration on the distribution core that we call the Fedora Project. At the same time it gives much-needed tools to make it possible to fork and (hopefully, happily) merge at exactly those stages of distribution development and maintenance which were previously very rigid. In the past, any attempt to modify a spin too much typically led to a hard fork with a new distribution project; as a result, resources in the community were unnecessarily spread too thin. I really hope we will end up with less repeated work across all the interesting spins and can concentrate on the more important aspect of why those spins are created: the value to their users.

August 30, 2017 03:26 PM

Powered by Planet