FreeIPA Identity Management planet - technical blogs

October 14, 2016

Rich Megginson

External Elasticsearch route with OpenShift logging

The Elasticsearch deployed with OpenShift aggregated logging is not accessible externally, outside the logging cluster, by default. The intention is that Kibana will be used to access the data, and the various ways to deploy/install OpenShift with logging allow you to specify the externally visible hostname that Kibana (including the separate operations cluster) will use. However, there are many tools that want to access the data from Elasticsearch. This post describes how to enable a route for external access to Elasticsearch.

You will first need an FQDN for the Elasticsearch (and a separate FQDN for the Elasticsearch ops instance if using the separate operations cluster). I am testing with an all-in-one (OpenShift master + node + logging components) install on an OpenStack machine, which has a private IP and hostname, and a public (floating) IP and hostname. In a real deployment, the public IP addresses and hostnames for the elasticsearch services will need to be added to DNS.
private host, IP: host-192-168-78-2.openstacklocal,
public host, IP: run-logging-source.oshift.rmeggins.test.novalocal, 10.x.y.z 

I have done the following on my local machine and on the all-in-one machine, by hacking /etc/hosts. All-in-one machine:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal

My local machine:
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal run-logging-source.oshift.rmeggins.test

I set up a router after installing OpenShift:
$ oc create serviceaccount router -n default
$ oadm policy add-scc-to-user privileged system:serviceaccount:default:router
$ oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
$ oadm router --create --namespace default --service-account=router \
     --credentials $MASTER_CONFIG_DIR/openshift-router.kubeconfig

$ oc get pods -n default
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-7z0gq   1/1       Running   0          35m
router-1-8bp88            1/1       Running   0          24m

$ oc logs -n default router-1-8bp88
I1010 19:57:57.815578       1 router.go:161] Router is including routes in all namespaces
I1010 19:57:57.922277       1 router.go:404] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).

Logging setup should have already created services for Elasticsearch:
$ oc project logging
$ oc get svc
NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)  AGE
logging-es         none          9200/TCP 33m
logging-es-ops    none          9200/TCP 33m

The route is a reencrypt route: TLS is terminated at the router, then reencrypted using client cert auth to Elasticsearch (SearchGuard is configured to require client cert auth). We use the admin cert/key, extracted using the method from the previous posting. This allows us to use username/password/token authentication to Elasticsearch; the auth is proxied through the router to SearchGuard/Elasticsearch.
$ ca=`mktemp`
$ cert=`mktemp`
$ key=`mktemp`
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-ca"}}' | base64 -d > $ca
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-cert"}}' | base64 -d > $cert
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-key"}}' | base64 -d > $key
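Kubernetes/OpenShift secrets are stored base64-encoded, which is why each --template pipeline above ends in base64 -d. A self-contained sanity check of that round trip (the string is just a stand-in for the PEM data):

```shell
# encode a stand-in value the way the secret store does, then decode it
encoded=$(printf 'dummy-pem-data' | base64)
printf '%s' "$encoded" | base64 -d
# prints: dummy-pem-data
```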
$ oc create route -n logging reencrypt --service logging-es \
                        --port 9200 --hostname \
                        --dest-ca-cert=$ca --cert=$cert --key=$key
$ oc create route -n logging reencrypt --service logging-es-ops \
                         --port 9200 --hostname \
                         --dest-ca-cert=$ca --cert=$cert --key=$key

I'm using the AllowAll identity provider so I can just create users/passwords with oc login (for testing):
$ more /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
  - challenge: true
    login: true
    mappingMethod: claim
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

I create a user called "kibtest" (I also use this user for kibana testing) that has cluster admin rights:
$ oc login --username=system:admin
$ oc login --username=kibtest --password=kibtest
$ oc login --username=system:admin
$ oadm policy add-cluster-role-to-user cluster-admin kibtest

I get the username and token for kibtest:
$ oc login --username=kibtest --password=kibtest
$ test_token="$(oc whoami -t)"
$ test_name="$(oc whoami)"
$ test_ip=""
$ oc login --username=system:admin

Now I can use curl like this:
$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For:"
{
  "name" : "Sugar Man",
  "cluster_name" : "logging-es",
  "version" : {
    "number" : "2.3.5",
    "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
    "build_timestamp" : "2016-07-27T10:36:52Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For:"*/_search?q=message:centos | python -mjson.tool | more
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "hits": {
        "hits": [
            {
                "_id": "AVewK5inAJ6n02oOdaIc",
                "_index": ".operations.2016.10.10",
                "_score": 11.1106205,
                "_source": {
                    "@timestamp": "2016-10-10T19:46:43.000000+00:00",
                    "hostname": "host-192-168-78-2.openstacklocal",
                    "ident": "docker-current",
                    "ipaddr4": "",
                    "ipaddr6": "fe80::42:acff:fe11:5",
                    "message": "time=\"2016-10-10T19:46:43.564686094Z\" level=in....."

Works the same from my local machine.

October 14, 2016 04:29 PM

October 07, 2016

Adam Young

Securing the Cyrus SASL Sample Server and Client with Kerberos

Since running the Cyrus SASL sample server and client was not too bad, I figured I would see what happened when I tried to secure it using Kerberos.


I’m going to run this on a system that has been enrolled as a FreeIPA client, so I start with a known good Kerberos setup.

To see the list of mechanisms available, run


I have the following available.


For Kerberos, I want to use GSSAPI.

Let's do this the hard way, by trial and error. First, run the server, telling it to use the GSSAPI mechanism:

/usr/bin/sasl2-sample-server -p 1789 -h localhost -s hello  -m GSSAPI

Then run the client in another terminal:

sasl2-sample-client -s hello -p 1789  -m GSSAPI localhost

Which includes the following in the output:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (No Kerberos credentials available)
closing connection


I need a Kerberos TGT in order to get a service ticket. Use kinit

$ kinit admin
Password for admin@AYOUNG-DELL-T1700.TEST: 

This time the error message is:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server rcmd/localhost@AYOUNG-DELL-T1700.TEST not found in Kerberos database)

I notice two things here:

  1. The service needs to be in the Kerberos server's directory.
  2. The service name should match the hostname.


If I rerun the command using the FQDN of the server, I can see the service name as expected:


$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... ...
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST not found in Kerberos database)
closing connection


So I tried to create the service in the ipa server:

ipa service-add
Principal: hello/overcloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST
ipa: ERROR: Host does not have corresponding DNS A/AAAA record
[stack@overcloud ~]$ ipa service-find

A strange error that I don't understand, as the host does have an A record.

Work around it with Force:

ipa service-add  --force  hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST


Added service "hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST"
  Principal: hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST
  Managed by: undercloud.ayoung-dell-t1700.test

OK, let's try running this again.

 sasl2-sample-client -s hello -p 1789 -m GSSAPI 

SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (KDC has no support for encryption type)


OK, I’m going to guess that this is because my remote service can’t deal with the Kerberos service tickets it is getting. Since the service tickets are for the principal: hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST it needs to be able to decrypt requests using a key meant for this principal.

Fetch a keytab for that principal, and put it in a place where the GSSAPI libraries can access it automatically. This place is:

/var/kerberos/krb5/user/{uid}/client.keytab

Where {uid} is the numeric UID for a user. In this case, the user's name is stack and I can find the numeric UID value using getent.
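The UID lookup and the resulting path can be sketched like this; root (UID 0) is used here as a stand-in since it exists on any Linux system, and for the stack user you would substitute the name:

```shell
# third colon-separated field of the passwd entry is the numeric UID
uid=$(getent passwd root | cut -d: -f3)
echo "$uid"                                      # prints: 0
echo "/var/kerberos/krb5/user/$uid/client.keytab"
```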


ipa-getkeytab -p hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST -k client.keytab  -s identity.ayoung-dell-t1700.test
Keytab successfully retrieved and stored in: client.keytab
$  getent passwd stack
$ sudo mkdir /var/kerberos/krb5/user/1000
$ sudo chown stack:stack /var/kerberos/krb5/user/1000
$ mv client.keytab /var/kerberos/krb5/user/1000

Restart the server process, try again, and the log is interesting. Here is the full client side trace.

$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... recv: {6}
please enter an authorization id: admin
using mechanism GSSAPI
send: {6}
send: {1}
send: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC][91][F0][A]D[91][F6][FA][F4][B9][13][DF]d|[F4]Y[DF][9E]M[A2]f[11][15]x[C5]-|Qt[F4]nL>@[F4][18][FF],[F6][B5]F6[EC]+[C3]V[F1][81][97][E2][1D]i[4]wD&[9A]V[CE][A1][16][D7]4[E0]C[B]O[D1]v[DD][E9][84]lW[DA]%[F6]v[93]<m"SAfiF[8E][[95]"[CC][D2]4[FA]_[FB]i[E7][D4]M[AE][5][82][FF][D7][0][8C]6[8D][B0]3[F8][E3][B4]P[9C][9E][A2]`[7]U[F7][1D]zub[E0]([A9]P>[AE]f[1A][B1][80][A0]}s[EA][D1]Zk[FF]n_S[9E]rK[E5]n [85]#[DB][FF][B3][E2][19];[F5][E2][8A]>2[E5][A4][81][E8]z[9D][E3][BC][C8][87][F]:[81]7[C9]ix[1E]5[15])[8D][9D][C7][DB][13][98][97][C7]C[6]q[D2][C1][ED][B3]:[E0]
waiting for server reply...
authentication failed
closing connection

On the server side, it looks similar, but ends like this:

starting SASL negotiation: generic failure
closing connection

It is not a GSSAPI error this time. To dig deeper, I’m going to look at the source code on the server side.


I’ll shortcut a few steps. Install both gdb and the debuginfo for the sample code:

sudo yum install gdb
sudo debuginfo-install cyrus-sasl-devel-2.1.26-20.el7_2.x86_64

Note that the version might change for the debuginfo.

The source code is included with the debuginfo rpm:

$ rpmquery  --list cyrus-sasl-debuginfo-2.1.26-20.el7_2.x86_64 | grep server.c

Looking at the server code at line 267 I see:

if (r != SASL_OK && r != SASL_CONTINUE) {
    saslerr(r, "starting SASL negotiation");
    fputc('N', out); /* send NO to client */
    return -1;
}
Let’s put a breakpoint at line 255 above it and see what is happening. Here is the session for setting up the breakpoint:

$  gdb /usr/bin/sasl2-sample-server
(gdb) break 255
Breakpoint 1 at 0x2557: file server.c, line 255.
(gdb) run  -h undercloud.ayoung-dell-t1700.test -p 1789 -m GSSAPI

Running the client code gets as far as the prompt please enter an authorization id: where I entered admiyo.

This is suspect. We’ll come back to it in a moment.

Back on the server, now, we see the breakpoint has been hit.

Breakpoint 1, mysasl_negotiate (in=0x55555575c150, out=0x55555575c390, conn=0x55555575a6e0)
    at server.c:255
255	    if(buf[0] == 'Y') {
Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.5.8-3.el7.x86_64 libdb-5.3.21-19.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 nss-softokn-freebl- openssl-libs-1.0.1e-51.el7_2.7.x86_64 pcre-8.32-15.el7_2.1.x86_64 xz-libs-5.1.2-12alpha.el7.x86_64 zlib-1.2.7-15.el7.x86_64

We might need some other RPMS if we want to step deeper through the code, but for now, let’s keep on here.

(gdb) print buf
$1 = "Y", '\000' ...
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
(gdb) n
recv: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC]hgdf j[CF][AE][7F]:![1C]D[F8]3^w[B7];"[3][D8]3"[8]i[9]J[D3]R[F]A[E7]![BE]0<[8][D3]'j`[B7]J[16][A9][F3][E6]=[E5]J[FE].-[A1]t[[2]W[8D]7[F3][8][EC][92][BB][A3]o5h[C1]A[CC][A2][F1][99][AA][93]2{[BA]Mx0[9D][9][CC]![A]Y[12][D8][2][95][17]ml[B4][1A][94]y[1A][BC][D2]I[8F]7Vg2[8E]6[13]:Lx[E6][1][D3][3][7]r?[12][84]3[B1][B5][AA]E)[EA][87][A][9F]Nk[D1]I[FD]{[B8]9#-[D][8]2[CC]C1[A8]Lfl[B0][E8][82][13][F9]t[1A][F6]^[8D] O13[12]L[E7][C0]k[99][E1]J[1F][FE]#[14]u[B][B2][8F][DB][E6]73*[FA][ED][11][F7][9E][B0][DC][D9][19][AB][97][D7][8B][BB]
260	        r = sasl_server_start(conn, chosenmech, buf, len,
(gdb) print len
$2 = 1
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
260	        r = sasl_server_start(conn, chosenmech, buf, len,
267	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$3 = -1

A -1 response code usually is an error. Looking in /usr/include/sasl/sasl.h:

#define SASL_FAIL -1 /* generic failure */

I wonder if we can figure out why. Let's see, first, if we can figure out what the client is sending in the authentication request. If it is a bad principal, then we have a pretty good reason to expect the server to reject it.

Let’s let the server continue running, and try debugging the client.

Client code can be found here

$ rpmquery  --list cyrus-sasl-debuginfo | grep client.c

At line 258 I see the call to sasl_client_start, which includes what appears to be the initialization of the data variable. Set a breakpoint there.

Running the code in the debugger like this:

$ gdb sasl2-sample-client
(gdb) break 258
Breakpoint 1 at 0x201b: file client.c, line 258.
(gdb) run -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
Starting program: /bin/sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
receiving capability list... recv: {6}

Breakpoint 1, mysasl_negotiate (in=0x55555575cab0, out=0x55555575ccf0, conn=0x55555575b520)
    at client.c:258
258	    r = sasl_client_start(conn, mech, NULL, &data, &len, &chosenmech);
(gdb) print data
$1 = 0x0
(gdb) print mech
$2 = 0x7fffffffe714 "GSSAPI"
(gdb) print conn
$3 = (sasl_conn_t *) 0x55555575b520
(gdb) print len
$4 = 6
(gdb) n
please enter an authorization id: 

So it is the SASL library itself requesting an authorization ID. Let me try putting in the full Principal associated with the service ticket.

please enter an authorization id: ayoung@AYOUNG-DELL-T1700.TEST
259	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$5 = 1

And from sasl.h we know that is good.

#define SASL_CONTINUE 1 /* another step is needed in authentication */

Let’s let it continue.

authentication failed

Nope. Continuing through the debugger, I see another generic failure here:

1531	            } else {
1532	                /* Mech wants client-first, so let them have it */
1533	                result = sasl_server_step(conn,
1534	                                          clientin,
1535						  clientinlen,
1536	                                          serverout,
1537						  serveroutlen);
(gdb) n
1557	    if (  result != SASL_OK
(gdb) print result
$15 = -1

Still… why is the client-side SASL call kicking into an interactive prompt? There should be enough information via the GSSAPI SASL library interaction to authenticate. The man page for sasl_client_start even indicates that there might be prompts returned.

Looking deeper at the client code, I do see that the prompt is from line 122. The function simple at line 107 must be set as a callback. Perhaps the client code is not smart enough to work with GSSAPI? At lines 190 and 192 I see that the simple code is provided as a callback for the responses SASL_CB_USER and SASL_CB_AUTHNAME. Setting a breakpoint and rerunning shows the id value to be 16385, or 0x4001.

#define SASL_CB_USER 0x4001 /* client user identity to login as */


Humility and Success

If you have followed through this far, you know I am in the weeds. I asked for help. Help, in this case, was Robbie Harwood, who showed me that the sample server/client worked OK if I ran the server as root and used the service host instead of hello. That gave me a successful comparison to work with. I ran using strace and noticed that the failing version was not trying to read the keytab file from /var/kerberos/krb5/user/1000/client.keytab. The successful one, running as root, read the keytab from /etc/krb5.keytab. The failing one was trying to read from there and getting a permissions failure. The final blow that took down the wall was to realize that the krb5.conf file defined different values for default_client_keytab_name and default_keytab_name, with the latter being set to FILE:/etc/krb5.keytab. To work around this, I needed the environment variable KRB5_KTNAME to be set to the keytab. This was the winning entry:

KRB5_KTNAME=/var/kerberos/krb5/user/1000/client.keytab  sasl2-sample-server -h $HOSTNAME -p 9999 -s hello -m GSSAPI 

And then I ran:

sasl2-sample-client -s hello -p 9999 -m GSSAPI undercloud.ayoung-dell-t1700.test
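The krb5.conf distinction the diagnosis hinged on looks roughly like this; the exact defaults vary by distribution, so treat these values as illustrative rather than canonical:

```ini
[libdefaults]
    # used by servers/acceptors; this is the one the failing run fell back to
    default_keytab_name = FILE:/etc/krb5.keytab
    # used for client credentials; the per-user location
    default_client_keytab_name = FILE:/var/kerberos/krb5/user/%{euid}/client.keytab
```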

Oh, one other thing Robbie told me was that the string I type when prompted with

please enter an authorization id:

should be the Kerberos principal, minus the realm, so for me it was

please enter an authorization id: ayoung

by Adam Young at October 07, 2016 02:34 AM

September 22, 2016

Red Hat Blog

PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications

This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help in multiple ways to address requirements in this part of the PCI-DSS standard. First of all, IdM includes a set of Apache modules for different methods of authentication. These modules externalize authentication logic from a web application so that the application does not need to re-implement different authentication methods itself. Such an approach significantly reduces the effort that developers need to invest into building different authentication capabilities into their applications – allowing them to focus on the business logic of the application itself and to deliver results faster. Externalized authentication based on Apache modules is (just) one of the best practices currently being adopted in the industry. There are a number of modules that provide different authentication methods, including:

  • A forms based password or one-time-password (OTP) authentication module (…a module that integrates with a given application’s login page and uses the PAM stack and SSSD in particular).
  • A Kerberos based single-sign-on (GSSAPI) module that allows for login into an application without prompting a given user for his or her credentials if he (or she) is already authenticated against a Kerberos server and holds proof of the authentication.
  • Certificate based modules based on either NSS or OpenSSL crypto libraries that enable certificate based authentication into an application.
  • A SAML module that connects an application to an identity provider (IdP); IdP-based federation uses redirection of the application login to an IdP – then accepting an authentication assertion as issued by the IdP.
  • An OpenID Connect module (similar to the SAML module) that allows an application to accept an OpenID Connect ticket from an authentication server.
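As a concrete illustration of the Kerberos single-sign-on bullet, a login location can be protected with mod_auth_gssapi in a few lines of Apache configuration, keeping all authentication logic out of the application itself; the keytab path and location are illustrative:

```apache
<Location /app/login>
    AuthType GSSAPI
    AuthName "Kerberos Login"
    GssapiCredStore keytab:/etc/httpd/http.keytab
    Require valid-user
</Location>
```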

The modules and details on how to integrate them are described on the following wiki page. Of note: all of the aforementioned modules are available in the current shipping version of Red Hat Enterprise Linux except for the OpenID Connect one.

As mentioned (above), externalizing authentication saves a lot of effort and is a good practice. To make developer life even easier we have been working on a container-based developer environment that would provide an application container, Apache web server (with pre-configured modules), an authentication server based on IdM (FreeIPA), and a client that allows for the testing of an application via browser. A prototype of this setup can be found here and the following video demonstrates how it can be used for development.

There is also an existing feature of the IdM server that allows for the management of SSH keys for different environments. Imagine you have an application with an administrative account. There are some operations that are done using this account, including SSH-ing into the system the application is running on. If you are developing this application, or if you are testing this application, or (perhaps) if you are deploying this application – you would (likely) want different credentials for the administrative accounts. IdM allows for the creation of ID views. Loading different SSH keys into different views enables use of the same administrative account across different environments with different SSH keys. Together with different credentials, IdM allows for defining access control rules that differ per environment, and thus (for example) addresses requirement 6.4.1 (…or, to some extent, requirement 6.5.8).

Finally, it’s worth mentioning that it’s generally not a good idea to store passwords in configuration files. That said, indeed, some applications were built this way (in the past). To help developers to deal with secrets that an application needs to use, there are plans to provide a secrets API that would allow applications to fetch or store secrets in a more secure way without putting them in clear text in configuration files. You can read more about this capability here. A Technology Preview of the API is included as a part of SSSD (System Security Services Daemon) in the beta release of Red Hat Enterprise Linux 7.3.  Please reach out if you are interested in using this feature – our Technical Account Managers and Solution Architects would love to speak with you.

Questions about how Identity Management relates to requirement six?  Reach out using the comments section (below).

by Dmitri Pal at September 22, 2016 06:06 PM

September 20, 2016

Adam Young

Mirroring Keystone Delegations in FreeIPA/389DS

This is more musing than a practical design.

Most application servers have a means to query LDAP for the authorization information for a user.  This is separate from, and follows after, authentication, which may use one of multiple mechanisms, possibly not even querying LDAP (although that would be strange).

And there are other mechanisms (SAML2, SSSD+mod_lookup_identity) that can also provide the authorization attributes.

Separating mechanism from meaning, however, we are left with the fact that applications need a way to query attributes to make authorization decisions.  In Keystone, the general pattern is this:

A project is a group of resources.

A user is assigned a role on a project.

A user requests a token for a project. That token references the user's roles.

The user passes the token to the server when accessing an API. Access control is based on the roles that the user has in the associated token.
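The steps above boil down to a role check against the token, which can be sketched in a few lines of shell; the role list here is hypothetical, standing in for whatever the token carries:

```shell
# roles carried by the (hypothetical) token, and the role the API requires
token_roles="Member admin"
required="admin"
# access control keys off the token's roles, not the user's identity
case " $token_roles " in
  *" $required "*) echo "access granted" ;;
  *)               echo "access denied"  ;;
esac
# prints: access granted
```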

The key point here is that it is the roles associated with the token in question that matter.  From that point on, we have the ability to inject layers of indirection.

Here is where things fall down today. If we take an app like WordPress and try to make it query against Red Hat's LDAP server for the groups to use, there is no mapping between the groups assigned and the permissions that the user should have.  As the WordPress instance might be run by any one of several organizations within Red Hat, there is no direct mapping possible.

If we map this problem domain to IPA, we see where things fall down.

WordPress, here, is a service.  If the host it is running on is owned by a particular organization (say, EMEA-Sales) it should be the EMEA Sales group that determines who gets what permissions on WordPress.

Aside: WordPress, by the way, makes a great example to use, as it has very clear, well defined roles,  which have a clear scope of authorization for operations.

Subscriber < Contributor < Author < Editor < Administrator

Back to our regular article:

If we define an actor as either a user or a group of users, a role assignment is a tuple: (actor, organization, application, role)



Now, a user should not have to go to IPA, get a token, and hand that to WordPress.  When a user connects to WordPress, and attempts to do any non-public action, they are prompted for credentials, and are authenticated.  At this point, WordPress can do the LDAP query. And here is the question:

“what should an application query for in LDAP”

If we use groups, then we have a nasty naming scheme: EMEA-sales_wordpress_admin versus LATAM-sales_wordpress_admin. This packs both the query (organization, application) and the result (role) into the name.
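A minimal sketch of why that scheme is nasty: every application has to parse the query and the result back out of one flat group name (the name here is the post's own example):

```shell
group="EMEA-sales_wordpress_admin"
org=${group%%_*}      # everything before the first underscore: EMEA-sales
rest=${group#*_}      # wordpress_admin
app=${rest%%_*}       # wordpress
role=${rest#*_}       # admin
echo "$org / $app / $role"
# prints: EMEA-sales / wordpress / admin
```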

Ideally, we would tag the role on the service.  The service already reflects organization and application.

In the RFC-based schemas, there is an organizationalRole objectclass which almost mirrors what we want.  But I think the most important thing is to return an object that looks like a group, most specifically groupOfNames.  Fortunately, I think this is just the 'cn'.

Can we put a groupOfNames under a service? It's not a container.

'ipaService' DESC 'IPA service objectclass' AUXILIARY MAY ( memberOf $ managedBy $ ipaKrbAuthzData ) X-ORIGIN 'IPA v2' )

objectClass: ipaobject
objectClass: top
objectClass: ipaservice
objectClass: pkiuser
objectClass: ipakrbprincipal
objectClass: krbprincipal
objectClass: krbprincipalaux
objectClass: krbTicketPolicyAux

It probably would make more sense to have a separate subtree, service-roles, with each service name a container, and each role a group-of-names under that container. The application would filter on (service-name) to get the set of roles.  For a specific user, the service would add an additional filter for memberOf.
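The query an application would make against such a subtree might look like this; the DNs and the service-roles layout are hypothetical, sketching the proposal rather than any existing IPA schema:

```shell
# hypothetical names, standing in for a real deployment
service="wordpress"
user_dn="uid=ayoung,cn=users,cn=accounts,dc=example,dc=test"
# search base scoped to this service's container of role groups
base="cn=$service,cn=service-roles,dc=example,dc=test"
# additional filter narrowing to roles this user is a member of
filter="(&(objectClass=groupOfNames)(memberOf=$user_dn))"
echo ldapsearch -b "$base" "$filter" cn
```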

Now, that is a lot of embedded knowledge in the application, and does not provide any way to do additional business logic in the IPA server or to hide that complexity from the end user.  Ideally, we would have something like automember to populate these role assignments, or, even better, a light-weight way for a user with a role assignment to re-delegate that to another user or principal.

That is where this gets really valuable: user self-service for delegation.  We want to make it such that you do not need to be an admin to create a role assignment, but rather (with exceptions) you can delegate to others any role that you have assigned to yourself.  This is a question of scale.

However, more than just scale, we want to be able to track responsibility;  who assigned a user the role that they have, and how did they have the authority to assign it?  When a user no longer has authority, should the people they have delegated to also lose it, or does that delegation get transferred?  Both patterns are required for some uses.

I think this fast gets beyond what can be represented easily in an LDAP schema.  Probably the right step is to use something like automember to place users into role assignments.  Expanding nested groups, while nice, might be too complicated.

by Adam Young at September 20, 2016 03:37 AM

September 19, 2016

Alexander Bokovoy

Samba and identity tales

Samba is built to bridge the Windows and POSIX worlds. Apart from file system semantics, there are many other differences. The story I'm about to tell concerns users and groups. They have different meanings and representations in the two worlds, so translation is required, much as in real life. In real life, translators often have to take into account cultural differences and sometimes the lack of certain concepts in the language they are translating to.

The protocol communications which Samba implements end up bringing in objects which have a certain meaning in one world but no real one-to-one counterpart on the other side. One of the tasks Samba undertakes is translating these concepts between Windows and POSIX. It does this translation with the help of mapping databases.

Security identifiers

In Windows, access controls are built around the concepts of security identifiers and security descriptors. A security identifier (SID) is associated with the object it represents. Internal processes in Windows refer to the security identifiers of objects rather than their names. A security descriptor is used to list which security identifiers can have access to a certain resource and what kind of access it could be. An important part of the story is that security identifiers have the same structure regardless of the object they represent. When a security identifier is expressed in a textual form, in general we cannot say what object it represents – a user, a group, or a machine account – apart from so-called 'well-known' SIDs. A nice property of a SID is that it is a global identifier: for two different domains, their SIDs are guaranteed to be different, even for 'well-known' objects within the domains.
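For illustration, the textual form splits into a domain portion and a trailing relative identifier (RID); the SID value below is made up for the example:

```shell
# example SID: S-<revision>-<authority>-<domain subauthorities>-<RID>
sid="S-1-5-21-3623811015-3361044348-30300820-1013"
rid=${sid##*-}        # the RID distinguishes objects within the domain
domain=${sid%-*}      # the domain portion is what makes the SID globally unique
echo "domain=$domain rid=$rid"
# prints: domain=S-1-5-21-3623811015-3361044348-30300820 rid=1013
```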

POSIX identifiers

In the POSIX world, access controls are built around a simple model of rights for the resource owner, rights for the resource's group, and rights for everyone else. The model is further extended with POSIX Access Control Lists (ACLs), which allow associating multiple simple-model descriptors with a single resource, but the resulting access descriptor is still far from its Windows counterpart.

In the kernel of a POSIX-compatible operating system, access checks are done using numbers which represent users and groups. The kernel application interfaces don’t deal with user or group names; they deal with integer-based identifiers. The standard C library is expected to translate user or group names to their numeric identifiers when talking to the kernel.
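
On a POSIX system this translation is easy to observe; a minimal sketch using Python's stdlib pwd module, which wraps the same getpwuid()/getpwnam() C library calls:

```python
import os
import pwd

# The kernel only understands numeric IDs; name-to-ID translation happens
# in user space via the C library and NSS.
entry = pwd.getpwuid(os.getuid())    # numeric ID -> account record
again = pwd.getpwnam(entry.pw_name)  # name -> account record
assert again.pw_uid == os.getuid()
```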

When operating on files and directories, Samba needs to translate NTFS-like semantics to POSIX file semantics. This includes translating security identifiers of SMB clients to POSIX identifiers of the users and their group membership. There are no SID-like structures in the kernel of POSIX operating system that Samba could directly map to; instead, it has to maintain such mapping in user space.

However, a POSIX operating system already has its own databases for users and groups, which all POSIX applications use. In their most primitive form these databases are stored as text files, /etc/passwd and /etc/group, with a well-defined format. On Linux systems there are other ways to store information about POSIX users and groups, with the help of so-called ‘name service switch’ (NSS) modules. How multiple modules are stacked up to deliver information about users, groups, and other resources is defined in the /etc/nsswitch.conf configuration file. The standard C library reads this configuration file at application start and loads the modules responsible for each resource. Standard application interfaces then call the modules as defined in /etc/nsswitch.conf to retrieve the required information.

Identity mapping

The information NSS modules provide includes nothing related to the SMB protocol. Applications can query by user or group name, but that’s all: they cannot query by SID value. Also, the interface functions differentiate between user and group information. When Samba gets a SID, it does not know whether it corresponds to a user or to a group, so it cannot choose which interface function to call.

Let’s step aside at this point. Samba needs to deal with the system-level databases for users and groups. Samba needs to deal with SIDs that could map to users, groups, and machine accounts. When a user is referenced in SMB protocol communication, it can be in the form of a user name or the SID associated with the user object. When a group is referenced, it can likewise be a group name or the SID associated with the group object. Finally, the same applies to machine accounts, but here Samba (and Windows) cheat and represent machine accounts as a special type of user object.

The fact that Samba sits in the middle between the SMB protocol communication and the system-level databases for users and groups means Samba has to maintain its own mapping between information relevant to the SMB protocol and the information relevant to system-level references to users and groups. In Windows, the system-level interface and database for users, groups, and machine accounts is called the Security Account Manager (SAM). Samba implements an abstraction layer that allows it to handle SAM-like requests. In fact, it implements two such layers, not one.

IDMAP layer

To map a security identifier to a POSIX identifier, Samba uses the identity mapping (IDMAP) interface. The IDMAP interface is very simple; it has only three functions:

  • map SID to a POSIX ID
  • map POSIX ID to a SID
  • allocate POSIX ID for a SID

The mapping of SIDs to POSIX IDs is handled by an IDMAP module. The SID name space is larger than the POSIX ID name spaces (combined for users and groups). The relative identifier (RID) part of the SID is 32 bits long and identifies resources within a single domain, but there can be multiple domains involved. Samba potentially has to map all of those RIDs from all domains into a single 32-bit user and a single 32-bit group name space. When done algorithmically, such a mapping is most likely a compression scheme with a potential for collisions. There can also be limiting factors on which particular 32-bit values for user and group identifiers can be chosen. Finally, manual assignment is also possible. Thus, there are many IDMAP modules in Samba to cater to different needs.

The default IDMAP module in Samba is idmap_tdb. This module stores SID to POSIX ID mappings in Samba’s native database format, the so-called ‘trivial database’ (TDB). When Samba requests a lookup by SID, the idmap_tdb module may allocate a new POSIX ID if the SID is not mapped yet and there are enough POSIX IDs left in the range defined for the domain. As a result, when the range is big enough to cover all users and groups in the domain, all SIDs will be mapped. However, there is no guarantee that SIDs will be mapped to the same POSIX IDs on all Samba servers in the domain. The order in which SID mapping requests arrive influences which POSIX ID is allocated for a SID. If different Samba servers get requests in a different order, they will assign different POSIX IDs to the same SIDs. This is, of course, a problem when accessing files on a distributed file system.
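
The order-dependence problem can be illustrated with a toy allocator (a sketch of the behavior, not Samba's actual code):

```python
# Toy model of idmap_tdb's allocate-on-first-lookup behavior: two servers
# that see the same SIDs in a different order end up with different
# SID -> POSIX ID mappings.
class ToyIdmapTdb:
    def __init__(self, low, high):
        self.next_id, self.high, self.mapping = low, high, {}

    def sid_to_id(self, sid):
        if sid not in self.mapping:
            if self.next_id > self.high:
                raise RuntimeError("POSIX ID range exhausted")
            self.mapping[sid] = self.next_id
            self.next_id += 1
        return self.mapping[sid]

alice = "S-1-5-21-1-2-3-1104"
bob = "S-1-5-21-1-2-3-1105"

server_a = ToyIdmapTdb(10000, 20000)
server_b = ToyIdmapTdb(10000, 20000)
server_a.sid_to_id(alice); server_a.sid_to_id(bob)  # alice looked up first
server_b.sid_to_id(bob); server_b.sid_to_id(alice)  # bob looked up first

# The same SID now has different POSIX IDs on the two servers.
assert server_a.sid_to_id(alice) != server_b.sid_to_id(alice)
```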

To solve this problem, other IDMAP modules were created. The idmap_rid module algorithmically maps the relative identifier of the SID into the range associated with the domain. idmap_ad looks up POSIX IDs at a domain controller of the Active Directory domain. In a similar vein, idmap_ldap looks up POSIX IDs at an LDAP server defined in the configuration.
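
A rough sketch of the idmap_rid idea (simplified; the real module has more configuration options): the RID is mapped arithmetically into the POSIX ID range configured for the domain, so every server computes the same answer without any shared state.

```python
def rid_to_posix_id(sid, range_low, range_high, base_rid=0):
    # Take the RID (last sub-authority) and offset it into the domain's range.
    rid = int(sid.rsplit("-", 1)[1])
    posix_id = range_low + (rid - base_rid)
    if not range_low <= posix_id <= range_high:
        raise ValueError("RID falls outside the configured range")
    return posix_id

# 'Domain Users' (well-known RID 513) in a domain with range 100000-199999:
assert rid_to_posix_id("S-1-5-21-1-2-3-513", 100000, 199999) == 100513
```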

For configurations where users and groups are maintained in the system-level databases, Samba provides the idmap_nss module. The module queries the system-level databases when it is known whether a SID maps to a user or to a group. When that is unknown, the module asks a primary domain controller of the domain to convert the SID to a name. A primary domain controller should know all users and groups of the domain, so it should be able to answer what the SID maps to, or fail the request. In the latter case idmap_nss will also fail the request and Samba will consider the SID unmapped.

PASSDB layer

Users and groups need to be known to Samba before they can be used. The very same users and groups must be known to the operating system, because Samba processes change identity when performing operations as a particular user. The second layer Samba uses for identity mapping also allows managing users and groups: creating new ones, deleting existing ones, modifying information about them and, in general, performing many of the actions Windows expects from the SAM interface.

A PASSDB module is an abstraction over the system-level user database. It allows retrieving user information from an LDAP server or some other storage scheme. The reason for this is, again, a lack of needed information in the system-level database format. Samba needs to know a lot more about a user than the POSIX interfaces provide, and some of this information is unique to the SMB protocol. For example, for each user to be able to authenticate with a password, Samba needs to know the corresponding password hashes for NTLM negotiation; NT and LM hashes are not used by POSIX-compatible operating systems. Also, the interface to retrieve user information does not give access to actual passwords. In fact, in many environments applications have no access even to password hashes, let alone passwords.

The default PASSDB module is tdbsam. Similar to idmap_tdb, it stores the additional information Samba needs to know about users in its own ‘trivial database’ (TDB). tdbsam expects that if user information is stored in its database, the very same user exists in the system-level databases.

One can also force the IDMAP subsystem to look up SID to POSIX ID mappings in a PASSDB backend. For this, the idmap_passdb IDMAP module can be used. As a result, Samba will look up SIDs and POSIX IDs in the PASSDB module defined in smb.conf.

Group mapping

Groups are not stored in Samba databases. Instead, Samba allows mapping an existing POSIX group to a group in the domain. Because groups in the Windows world can have different scopes, Samba provides a mechanism to specify which POSIX group is mapped to which Windows group and what scope it should have. The mapping is managed with the help of Samba’s net utility: the net groupmap family includes commands to add, modify, and remove group mappings. It also allows associating (aliasing) certain SIDs with existing groups and listing members of the groups.

For distributed environments it is convenient to store POSIX and SMB information about users and groups in the same place. For example, an LDAP server can be used to store and retrieve such information with the ldapsam PASSDB module and the idmap_ldap IDMAP module. However, group mapping would still be maintained locally with the net groupmap set of commands.

Practical considerations

Let’s put all of the above into practice. Consider a single Samba server which serves as the primary domain controller of its own domain. The server does not use LDAP or any other distributed storage for its POSIX and SMB information about users and groups.

A minimal smb.conf configuration file for a primary domain controller looks like the following:

# Global parameters
[global]
    workgroup = SAMBA
    domain logons = Yes
    security = USER
    winbind offline logon = Yes
    winbind use default domain = Yes
    idmap config * : range = 1000-1000000
    idmap config * : backend = passdb
    passdb backend = tdbsam
    template homedir = /home/%U
    template shell = /bin/bash

[homes]
    comment = Home Directories
    browseable = No
    inherit acls = Yes
    read only = No
    valid users = %S %D%w%S

This configuration defines a single-domain SMB server with an IDMAP configuration that looks up SID to POSIX ID mappings in a PASSDB module. The PASSDB module is set to tdbsam, which is the default.

As a result of this configuration, all non-POSIX attributes of users need to be stored in the PASSDB module. To modify them one can use the pdbedit tool. But before that, we need to create the users and groups at the system level.

SMB domains have a few ‘well-known’ groups: ‘Domain Users’, ‘Domain Admins’, and ‘Domain Guests’. For ‘Domain Users’ and ‘Domain Guests’ we can reuse the POSIX groups ‘users’ and ‘nobody’; for ‘Domain Admins’ it is better to create a separate group, for example ‘admins’.

On Fedora 24 there are existing POSIX groups ‘users’ and ‘nobody’:

# getent group users nobody

We can create ‘admins’ group using groupadd utility:

# groupadd admins

Once the groups are ready, we can associate them with the well-known domain groups using the net groupmap commands:

# net groupmap add ntgroup="Domain Admins" unixgroup=admins rid=512 type=d
Successfully added group Domain Admins to the mapping db as a domain group
# net groupmap add ntgroup="Domain Users"  unixgroup=users rid=513 
Successfully added group Domain Users to the mapping db as a domain group
# net groupmap add ntgroup="Domain Guests"  unixgroup=nobody rid=514
Successfully added group Domain Guests to the mapping db as a domain group

Finally, add the users. A user’s primary group should be one of the groups mapped into the domain, because Samba needs a SID to POSIX ID mapping for the primary group in order to recognize the user. Let’s pretend that all our users are members of the ‘users’ group:

# useradd -m -g users -G admins administrator
# pdbedit -a -u administrator
new password:
retype new password:
Unix username:        administrator
NT username:          
Account Flags:        [U          ]
User SID:             S-1-5-21-1345368309-3761995768-4153620981-1008
Primary Group SID:    S-1-5-21-1345368309-3761995768-4153620981-513
Full Name:            
Home Directory:       \\smb\administrator
HomeDir Drive:        
Logon Script:         
Profile Path:         \\smb\administrator\profile
Domain:               SAMBA
Account desc:         
Munged dial:          
Logon time:           0
Logoff time:          Wed, 06 Feb 2036 17:06:39 EET
Kickoff time:         Wed, 06 Feb 2036 17:06:39 EET
Password last set:    Mon, 19 Sep 2016 12:43:45 EEST
Password can change:  Mon, 19 Sep 2016 12:43:45 EEST
Password must change: never
Last bad password   : 0
Bad password count  : 0

In the output above, the ‘Primary Group SID’ was automatically inferred from the group mapping.

We can now ask winbindd to resolve user information based on the IDMAP and PASSDB databases:

# wbinfo -i administrator
# wbinfo -n administrator
S-1-5-21-1345368309-3761995768-4153620981-1008 SID_USER (1)
# wbinfo -s S-1-5-21-1345368309-3761995768-4153620981-1008
SAMBA\administrator 1

September 19, 2016 09:52 AM

September 16, 2016

Rich Megginson

How to print field name with dash ("-") in a golang template

For example, let's say your OpenShift secret has been created like this:
$ oc secrets new logging-elasticsearch \
        key=$dir/keystore.jks truststore=$dir/truststore.jks \
        searchguard.key=$dir/searchguard_node_key \
        searchguard.truststore=$dir/searchguard_node_truststore \
        admin-key=$dir/${admin_user}.key admin-cert=$dir/${admin_user}.crt \
        admin-ca=$dir/ca.crt \

Now you want to extract the CA cert:
$ oc get secret logging-elasticsearch --template='{{.data.admin-ca}}'
error: error parsing template {{.data.admin-ca}}, template: output:1: bad character U+002D '-'

It doesn't like the - character in the field name. You can work around this using the index function, like so:
$ oc get secret logging-elasticsearch --template='{{index .data "admin-ca"}}' |base64 -d > ca
$ openssl x509 -in ca -text|more
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=logging-signer-20160915173520
        Validity
            Not Before: Sep 15 17:35:19 2016 GMT
            Not After : Sep 14 17:35:20 2021 GMT
        Subject: CN=logging-signer-20160915173520
        Subject Public Key Info:
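
If you'd rather sidestep the Go template syntax entirely, the secret can also be fetched as JSON and decoded in a few lines of Python. A sketch, with the JSON inlined for illustration (in practice it would come from `oc get secret logging-elasticsearch -o json`):

```python
import base64
import json

# Hypothetical stand-in for the output of:
#   oc get secret logging-elasticsearch -o json
secret_json = json.dumps(
    {"data": {"admin-ca": base64.b64encode(b"PEM data here").decode()}})

# In plain JSON the dash in the key needs no special handling.
ca = base64.b64decode(json.loads(secret_json)["data"]["admin-ca"])
```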

September 16, 2016 01:57 AM

September 08, 2016

Red Hat Blog

PCI Series: Requirement 3 – Protect Stored Cardholder Data

Welcome to another post dedicated to the use of Identity Management (IdM) and related technologies in addressing the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement three (i.e. the requirement to protect stored cardholder data). In case you’re new to the series – the outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section three of the PCI DSS standard talks about storing cardholder data in a secure way. One of the technologies that can be used for secure storage of cardholder data is disk encryption with LUKS. But LUKS keys also need to be managed (as mentioned in requirement 3.6.3). One potential solution is IdM’s Vault – a secret store that can be used to escrow disk encryption passwords and implement policies and conditions for the recovery of such passwords (or keys). While in a Vault, the keys and passwords do not need to be related in any way to the keys and passwords used by the users that access the cardholder services; requirement 3.4.1 is thus fully met by this solution.

Requirement 3.5.3 creates a challenge demanding separation of keys. This usually leads to the need to involve a user to unlock their key to start a process. For example, a system volume can be encrypted but in case of a reboot an administrator has to come over and enter a password to continue the boot process. A new technology called Network Bound Disk Encryption addresses this problem by placing a special server on the network. While this technology is not currently included with Red Hat Enterprise Linux – here is a pointer to a demo.

Questions about how Identity Management relates to requirement three?  Reach out using the comments section (below).

by Dmitri Pal at September 08, 2016 07:47 PM

September 06, 2016

Red Hat Blog

PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters

This article is third in a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post covers the PCI DSS requirement related to not using vendor-supplied defaults for system passwords and other security parameters. The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

The second section of the PCI-DSS standard applies to defaults – especially passwords and other security parameters. The standard calls for the reset of passwords (etc.) for any new system before placing it on the network. IdM can help here. Leveraging IdM for centralized accounts and policy information allows for a simple automated provisioning of new systems with tightened configurations. In addition, Red Hat Satellite 6 and IdM play well together – allowing for automatic enrollment of Linux systems into an IdM managed identity fabric.

Requirements 2.2.3 and 2.3 (also covered in Appendix A2) call for the use of security features like SSH or TLS. Both SSH and TLS require a solution that provisions and manages the associated keys. IdM comes to the rescue in both cases. For SSH, IdM can manage and deliver user and host public keys to the systems joined to the IdM domain. For TLS, both the client and the server need proper certificates and private keys. Where do they come from? How are they tracked and renewed? IdM, together with a client-side component called certmonger (integrated with the Linux operating system), allows for provisioning, tracking, and rotation of certificates. These key management aspects of the environment are usually left to IT professionals to figure out. With IdM and certmonger, certificate management can become an automated process, making the environment more secure and less susceptible to human error or misconfiguration.

TLS is used in many places for many purposes, and while automation is great… it’s not enough. If certificates are issued by a single certificate authority for multiple environments and use cases, there is a chance that a certificate issued for one purpose will be misused to authenticate a different connection. This can be mitigated with fine-grained access control rules implemented inside each of the services that accept TLS-based authentication, but this is error prone. Having a certificate authority (CA) per domain of use would be preferable. Unfortunately, creating such CAs usually involves hassle and cost. This is why the IdM team is working on a solution called subCAs. With a single command, an administrator will be able to create a subCA dedicated to a particular domain of use. All certificates issued by this subCA would then be usable only within the context of that specific domain.

Finally, requirement 2.2.4 calls for configuring system security parameters. Once again IdM, with central management of host-based access control rules, privilege escalation (sudo), and SELinux user mapping, provides relief and helps with such configuration.

Questions about how Identity Management relates to requirement two?  Reach out using the comments section (below).

by Dmitri Pal at September 06, 2016 06:40 PM

September 02, 2016

Florence Blanc-Renaud

Using a Dogtag instance as external CA for FreeIPA installation

A FreeIPA user recently had issues installing FreeIPA with an external CA. He was using Dogtag certificate system as external CA and FreeIPA installation was failing, complaining about the certificate provided by Dogtag.

So I decided to try the same deployment and share my findings in this post.

A little background…

A FreeIPA server can be configured to act as a Certificate Authority inside the FreeIPA IdM domain. It will then be able to create the certificates used by the LDAP server, by the Apache server behind the Web GUI, or by users and hosts.

This CA can be set-up in different ways:

  • The CA is a root CA, meaning that its certificate is self-signed
  • or the CA is subordinate to an external, 3rd-party CA, meaning that its certificate is signed by the 3rd party CA.

There is a wide range of products that can be used as 3rd-party CAs, among them the Dogtag certificate system. In this blog post, I will explain how Dogtag can provide the certificate for the IPA CA.


The following instructions apply to Fedora 24. They will:

  1. run the 1st step of ipa-server-install to generate a CSR
  2. submit the CSR to Dogtag and have Dogtag issue a certificate for FreeIPA server
  3. run the 2nd step of ipa-server-install with the certificate obtained in step 2.

For instructions to setup the Dogtag server, you can refer to this post: Dogtag installation.


FreeIPA server installation – step 1

In order to install FreeIPA with an externally-signed CA, we must use the --external-ca option of ipa-server-install. The installation is then a multi-step install, where:

  • ipa-server-install produces a CSR
  • we need to submit this CSR to the external CA, which will in return provide a certificate and certificate chain
  • we need to run ipa-server-install a 2nd time, with different options, providing the certificates obtained in the previous step.

So let’s run the first step of ipa-server-install:

root@ipaserver$ ipa-server-install --setup-dns \
 --auto-forwarders \
 --auto-reverse \
 -n \
 -p Secret123 -a Secret123 \
 --external-ca \
Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes 30 seconds
 [1/8]: creating certificate server user
 [2/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as:
/sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate


Generation of the certificate using Dogtag

We then need to copy this CSR to the Dogtag instance, submit it, approve it, and export the certificate.

The submission is an important step, as it allows us to specify a profile. Basically, if we pick the caCACert profile, we signal our intent to use the produced certificate as a Certificate Authority in our FreeIPA deployment, and the resulting certificate will contain the required extensions:

root@dogtag$ pki ca-cert-request-submit --profile caCACert --request-type pkcs10 --csr-file ipa.csr
Submitted certificate request
 Request ID: 7
 Type: enrollment
 Request Status: pending
 Operation Result: success

Note the Request ID as we will need it in order to approve the submission:

root@dogtag$ pki -c Secret123 -d /root/.dogtag/nssdb/ -n "PKI Administrator for" cert-request-review 7 --action approve
Approved certificate request 7
 Request ID: 7
 Type: enrollment
 Request Status: complete
 Operation Result: success
 Certificate ID: 0x7

Note the Certificate ID as we will need it to export the certificate into a file ipa.cert:

root@dogtag$ pki -c Secret123 -d /root/.dogtag/nssdb/ -n "PKI Administrator for" cert-show 7 --encoded --output ipa.cert

We will also need the dogtagca certificate chain:

root@dogtag$ pki ca-cert-show 1 --encoded --output dogtagca.cert

At this point, we have a new certificate and chain (ipa.cert and dogtagca.cert), which we need to copy to the FreeIPA server. We can then resume the FreeIPA installation.

FreeIPA server installation – step 2

In order to resume FreeIPA installation, we will follow the instructions provided in step 1:

root@ipaserver$ /sbin/ipa-server-install --external-cert-file=ipa.cert --external-cert-file=dogtagca.cert


The installation will resume and use ipa.cert as the IPA Certificate Authority certificate. That’s it!

by floblanc at September 02, 2016 12:29 PM

September 01, 2016

Ben Lipton

Thinking about templating, part 2: Handling missing data



This post is a follow-up to Thinking about templating for automatic CSR generation. In it, we will look at a requirement of the templating system that was not discussed in that post and see how it is handled by the implementation.

Sometimes you might want to generate a certificate for a principal that doesn’t have all the fields referenced in the profile. This could be due to an error (e.g. using the “user” profile for a “service” principal) or just the way the data is (e.g. the principal has no email address, or the requesting user has no access to that field). We want to handle this cleanly by omitting the sections of config that have missing data.

Simple approach: data rules only

We can pretty simply update our data rules to do this partly right, like in this example:

{% if subject.fqdn.0 %}DNS = {{subject.fqdn.0}}{% endif %}

This adds some extra work for administrators creating new rules, and is another step that someone could forget, but it could be manageable.

However, if none of the data rules for a field have any data, we need to avoid rendering the syntax rule for that field as well; otherwise we get weird empty sections that openssl doesn’t like. Modifying the rule templates can’t solve this problem, because a syntax rule intentionally doesn’t know what data it may depend on for different profiles; that all depends on the data rules.
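
The desired behavior can be modeled in a few lines of plain Python (a conceptual sketch only; the actual implementation uses jinja2 templates and macros): render a data rule only if its field is present, and render the syntax rule only if at least one of its data rules produced output.

```python
def render_data_rule(template, field, subject):
    # A data rule renders to None when its field is missing from the subject.
    value = subject.get(field)
    return template.format(value) if value else None

def render_syntax_rule(header, data_rules, subject):
    rendered = [render_data_rule(t, f, subject) for t, f in data_rules]
    rendered = [r for r in rendered if r is not None]
    if not rendered:
        return ""  # suppress the whole section, avoiding empty output
    return header + "\n" + "\n".join(rendered)

rules = [("DNS = {}", "fqdn"), ("email = {}", "mail")]
with_data = render_syntax_rule("subjectAltName = @alt_names", rules,
                               {"fqdn": "host.example.com"})
without_data = render_syntax_rule("subjectAltName = @alt_names", rules, {})
```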

Current solution: See if something renders

One way to make this work is to build syntax rules so they use jinja2 control tags to compute the output of any data rules first, then render their own text only if some data rule rendered successfully. In its raw form, this gets ugly (see [1] for explanation):

{% raw %}{% set contents %}{% endraw %}{{ datarules|join('\n') }}
{% raw %}{% endset %}{% if contents %}{% endraw %}
subjectAltName = @{% call openssl.section() %}{% raw %}{{ contents }}
{% endraw %}{% endcall %}{% raw %}{% endif %}{% endraw %}

For comparison, that rule used to look like this:

subjectAltName = @{% call openssl.section() %}
{{ datarules|join('\n') }}{% endcall %}

I think this might be a heavy burden for administrators who want to write new syntax rules.

However, we can introduce some macros to make this better. One macro, syntaxrule, computes the result of rendering the data rules it contains, but does not output these results unless a flag is set to true. That flag is controlled by another macro, datarule, which updates the flag to true when the enclosed data rule renders successfully. We can apply a similar technique to the fields in the data rules, rendering the rule only if all fields are present.

Now, the framework can automatically wrap all syntax rules in {% call ipa.syntaxrule() %}...{% endcall %} and all data rules in {% call ipa.datarule() %}...{% endcall %}. Writers of data rules must wrap all field references in ipa.datafield() to mark values that could be missing, such as {{ ipa.datafield(subject.mail.0) }}, but no other modifications to the rules are necessary.

This is the way rule suppression is currently implemented.


This system seems to be working fairly well, but it has a few drawbacks.

First, the macros to do this are a little arcane, as can be seen in [2], and can’t be commented very well because any whitespace becomes part of the macro output. They rely on global variables within the template, but this should be ok as long as we always nest datafields within datarules within syntaxrules, and never nest more than once.

Second, syntax rules with multiple assigned data rules present a problem. Generally we will want the results of those rules to be presented in the output with some character in between, e.g. {{datarules|join(',')}} for certutil. However, when we finally render this template with data, what if one of our datarules renders while another does not due to lack of data? The above rule segment would produce a template like:

{% call ipa.datarule() %}email:{{ipa.datafield(subject.mail.0)|quote}}{% endcall %},{% call ipa.datarule() %}uri:{{ipa.datafield(subject.inetuserhttpurl.0)|quote}}{% endcall %}

If this subject has no inetuserhttpurl field, the second ipa.datarule will be suppressed, leaving an empty string. But the comma between the two rules will still be there, leaving a dangling comma after the email entry in the rendered output.

Fortunately, certutil seems not to mind these extra commas, and openssl is also ok with the extra blank lines that arise the same way, so this isn’t breaking anything right now. But, it’s worrying not to be able to do much to improve this formatting.

Third, there is an unfortunate interaction between the macros created for this technique, the above issue, and the macro that produces openssl sections. That macro [3] also relies on side effects to do its job - the contents of the section are appended to a global list of sections, while only the section name is returned at the point where the macro is called. Since the technique discussed in this section evaluates each data rule to see if it produces any data, if the rule includes an openssl section, a section is stored on rule evaluation even if it has no data. Again, openssl is ok with the extra sections as long as they are not referenced within the config file, but the result is ugly.

Alternative: Declare data dependencies

Another approach to suppressing syntax rules when none of their data rules are going to render is to take the “simple approach” of listing the required data items in an {% if %} statement one step further. We could amend the schema for data rules to include a record of the included data item, so that each rule would know its dependencies. Data rules could then be automatically wrapped so they wouldn’t be rendered if this item was unavailable. Syntax rules could be treated similarly; by querying the dependencies of all the data rules it was configured to include, the whole syntax rule could be suppressed if none of those items were available.

In this scheme, the template produced would look like (linebreaks and indentation added):

{% if subject.mail.0 or subject.inethttpurl.0 %}--extSAN
  {% if subject.mail.0 %}email:{{subject.mail.0|quote}}{% endif %},
  {% if subject.inethttpurl.0 %}uri:{{subject.inethttpurl.0|quote}}{% endif %}
{% endif %}

This takes care of the third problem of the previous solution, because data rules with missing data will never be evaluated, meaning that superfluous openssl sections will not be added. However, the second problem still persists, because the commas and newlines are part of the syntax rule (which is rendered) not the data rules (some of which aren’t rendered).

Suppressing excess commas and newlines

The challenge with preventing these extra commas and newlines is that the decision must be made during the final render, when the subject data is available, not when the syntax rules are evaluated to build the final template. Using the join filter in the syntax rule is insufficient, because it is evaluated before that data is available. What we really want is to pass the output of all the data rules to the join filter, at final render time.
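
The effect we want from such a late-running filter can be shown with plain Python string handling (illustrative values): drop the empty data rule results before joining, so no dangling separators remain.

```python
# Second data rule produced no output because its field was missing.
rendered = ["email:user@example.test", ""]

naive = ",".join(rendered)                            # keeps a trailing comma
filtered = ",".join(part for part in rendered if part)  # drops empty parts

assert naive == "email:user@example.test,"
assert filtered == "email:user@example.test"
```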

This is not a polished solution, but a sketch of what this could look like is for the syntax rule to be:

--extSAN {{datarules|filternonempty("join(',')")}}

Which would create a final template like:

{% filternonempty join(',') %}
<data rule 1>
{% filterpart %}
<data rule 2>
{% endfilternonempty %}

And the filternonempty tag would be implemented so the effect of this would be approximately:

{% set parts = [] %}
{% set part %}
<data rule 1>
{% endset %}
{% if part %}{% do parts.append(part) %}{% endif %}
{% set part %}
<data rule 2>
{% endset %}
{% if part %}{% do parts.append(part) %}{% endif %}
{{ parts|join(',') }}

I think this is doable, but I don’t have a prototype yet.


The current implementation is working ok, but the “Declaring data dependencies” solution is also appealing. Recording in data rules what data they depend on is only slightly more involved than wrapping that reference in ipa.datafield(), and could also be useful for other purposes. Plus, it would get rid of the empty sections in openssl configs, as well as some of the complex macros.

The extra templating and new tags required to get rid of extra commas and newlines don’t seem worth it to me, unless we discover a version of openssl or certutil that can’t consume the current output.

Finally, I think the number of hoops that need to be jumped through to fine-tune the output format hints at this “template interpolation” approach being less successful than originally expected. While it was expected that inserting data rule templates into syntax rule templates and rendering the whole thing would produce similar results to rendering data rules first and inserting the output into syntax rules, that is not turning out to be the case. It might be wise to reconsider the simpler option - it may be easier to implement reliable jinja2 template markup escaping than to build templates smart enough to handle any combination of data that’s available.


[1] In case you’re having trouble parsing this mess, when rendered to insert data rules, and with whitespace added for readability, it turns into this:

{% set contents %}
    {% if subject.mail.0 %}email = {{subject.mail.0}}{% endif %} <-- this is the data rule
{% endset %}
{% if contents %}
    subjectAltName = @{% call openssl.section() %}{{ contents }}{% endcall %}
{% endif %}


{% set rendersyntax = {} %}

{% set renderdata = {} %}

{# Wrapper for syntax rules. We render the contents of the rule into a
variable, so that if we find that none of the contained data rules rendered we
can suppress the whole syntax rule. That is, a syntax rule is rendered either
if no data rules are specified (unusual) or if at least one of the data rules
rendered successfully. #}
{% macro syntaxrule() -%}
{% do rendersyntax.update(none=true, any=false) -%}
{% set contents -%}
{{ caller() -}}
{% endset -%}
{% if rendersyntax['none'] or rendersyntax['any'] -%}
{{ contents -}}
{% endif -%}
{% endmacro %}

{# Wrapper for data rules. A data rule is rendered only when all of the data
fields it contains have data available. #}
{% macro datarule() -%}
{% do rendersyntax.update(none=false) -%}
{% do renderdata.update(all=true) -%}
{% set contents -%}
{{ caller() -}}
{% endset -%}
{% if renderdata['all'] -%}
{% do rendersyntax.update(any=true) -%}
{{ contents -}}
{% endif -%}
{% endmacro %}

{# Wrapper for fields in data rules. If any value wrapped by this macro
produces an empty string, the entire data rule will be suppressed. #}
{% macro datafield(value) -%}
{% if value -%}
{{ value -}}
{% else -%}
{% do renderdata.update(all=false) -%}
{% endif -%}
{% endmacro %}


{# List containing rendered sections to be included at end #}
{% set openssl_sections = [] %}

{# List containing one entry for each section name allocated. Because of
scoping rules, we need to use a list so that it can be a "per-render global"
that gets updated in place. Real globals are shared by all templates with the
same environment, and variables defined in the macro don't persist after the
macro invocation ends. #}
{% set openssl_section_num = [] %}

{% macro section() -%}
{% set name -%}
sec{{ openssl_section_num|length -}}
{% endset -%}
{% do openssl_section_num.append('') -%}
{% set contents %}{{ caller() }}{% endset -%}
{% if contents -%}
{% set sectiondata = formatsection(name, contents) -%}
{% do openssl_sections.append(sectiondata) -%}
{% endif -%}
{{ name -}}
{% endmacro %}

{% macro formatsection(name, contents) -%}
[ {{ name }} ]
{{ contents -}}
{% endmacro %}
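For illustration, the behaviour of the section and formatsection macros can be emulated in plain Python (helper names and sample contents are mine, for exposition only):

```python
def allocate_name(section_num):
    """Mimic the section macro's name allocation: "sec" plus the number
    of names handed out so far, tracked by appending to a mutable list
    (the template's "per-render global")."""
    name = "sec%d" % len(section_num)
    section_num.append("")
    return name

def format_section(name, contents):
    """Mimic formatsection: an openssl-style section header followed by
    the section contents."""
    return "[ %s ]\n%s" % (name, contents)

counter = []
allocate_name(counter)  # -> "sec0"
allocate_name(counter)  # -> "sec1"
format_section("sec0", "email = alice@example.com")
# -> "[ sec0 ]\nemail = alice@example.com"
```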

September 01, 2016 12:00 AM

August 31, 2016

Red Hat Blog

PCI Series: Requirement 1 – Install and Maintain a Firewall Configuration to Protect Cardholder Data

This article is one of the blog posts dedicated to use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement one – install and maintain a firewall configuration to protect cardholder data. The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

The first requirement of the PCI standard talks about firewalls and networking. While Red Hat’s Identity Management solution is not directly related to setting up networks and firewall rules, there are several aspects of IdM that need to be mentioned in this context. The first is that IdM servers can be deployed inside or outside a firewall. In either case, IdM servers need to communicate with clients and with each other using the LDAP and Kerberos protocols.

IdM servers that are deployed inside the firewall create challenges for authenticating clients that are located outside the firewall, on a separate network or in a DMZ. The IdM solution leverages Kerberos heavily. The main reason for this is that the Kerberos protocol ensures that end user passwords are not sent “over the wire”, thereby reducing the risk of password interception or leakage. However, the use of Kerberos creates a challenge for administrators, who traditionally had to open a Kerberos port in the firewall to allow the authentication to go through. This, in many cases, is a non-starter. The IdM version that comes with Red Hat Enterprise Linux 7.2 includes a feature called KDC proxy. Several years ago Microsoft authored a standard that allows for proxying the Kerberos protocol over HTTPS; KDC proxy is the open source implementation of this standard. This solution avoids the need to open a Kerberos port in the firewall and leads to a tighter firewall configuration that is in the spirit of the PCI DSS standard.
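On the client side, using a KDC proxy boils down to pointing krb5.conf at an HTTPS URL instead of a plain KDC address. A sketch of what that looks like (realm and hostname are placeholders; MIT krb5 1.13 or later is needed for MS-KKDCP support, and FreeIPA serves the proxy at /KdcProxy):

```
[realms]
  EXAMPLE.COM = {
    kdc = https://ipa.example.com/KdcProxy
    kpasswd_server = https://ipa.example.com/KdcProxy
  }
```

With this in place, only the HTTPS port needs to be reachable from the client network.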

The solution still requires opening an LDAP port so that clients can download identity information. For purposes of identity lookup the IdM server in the DMZ can act as a proxy between clients in the DMZ and Active Directory (AD) servers behind the firewall. The firewall rule in this case can be set to allow connection only from the IdM server host in the DMZ to AD inside the firewall thus significantly limiting the attack surface. Placing an IdM server in the DMZ to serve clients there enables a more secure integration of those systems into an AD fabric.

The other aspect that is worth mentioning is IPSec VPNs. The IPSec VPN specification has been extended to allow for Kerberos authentication, and an implementation of this extension in libreswan is underway. This enhancement, combined with placing IdM outside the firewall, will allow a VPN user to first authenticate against an IdM server (using, for example, OTP authentication over Kerberos), then acquire proof of authentication (a ticket), and finally connect to the VPN server without being prompted. Such an approach, when integrated with desktop login, would allow for signing into the network and logging into the system at the same time – eliminating multiple steps and prompts.

Questions about how Identity Management relates to requirement one?  Reach out using the comments section (below).

by Dmitri Pal at August 31, 2016 08:31 PM

August 30, 2016

Red Hat Blog

Identity Management and Related Technologies and their Applicability to PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is not new. It has existed for several years and provides security guidelines and best practices for the storage and processing of personal cardholder data. This article takes a look at PCI DSS 3.2 (published in April of 2016) and shows how Identity Management in Red Hat Enterprise Linux (IdM) and related technologies can help customers to address PCI DSS requirements and to achieve and stay compliant with the standard. If you need a copy of the PCI DSS document, it can be acquired from the PCI Security Standards Council document library.

In October of 2015 Red Hat published a paper that gives an overview of the PCI DSS standard and shows how Red Hat Satellite and other parts of the Red Hat portfolio can help customers to address their PCI compliance challenges. In this post I would like to expand on this paper and drill down into more detail about the Identity Management solution Red Hat provides and how it can be leveraged to achieve PCI DSS compliance in conjunction with other technologies as covered in the paper.

Note that this post assumes familiarity with the Red Hat IdM solution. If you’re not “up-to-speed” – please review our Identity Management documentation. Also, my previous blog posts provide a good foundation for the problem space and understanding of the solution. Identity Management in Red Hat Enterprise Linux is an open source solution based on the FreeIPA community project. There is a public instance of the FreeIPA server running in the cloud that you can connect to and explore.

Since the standard is quite big I will break this article into a series of individual posts – addressing one section at a time. The following table will help in terms of mapping each section of the PCI document to each follow-up post.


Requirement Number Requirement Description Link to Blog Post / Reference
1 Install and maintain a firewall configuration to protect cardholder data. PCI Series: Requirement 1 – Install and Maintain a Firewall Configuration to Protect Cardholder Data
2 Do not use vendor-supplied defaults for system passwords and other security parameters. PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters
3 Protect stored cardholder data. PCI Series: Requirement 3 – Protect Stored Cardholder Data
4 Encrypt transmission of cardholder data across open, public networks. The same approach as discussed for requirement number two (2) can be employed to meet requirements in this part of the PCI DSS standard.
5 Protect all systems against malware and regularly update anti-virus software or programs. Red Hat Identity Management is not directly related to this section. Reference / review section five (5) of the PCI DSS standard.
6 Develop and maintain secure systems and applications. PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications
7 Restrict access to cardholder data by business need to know. PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know
8 Identify and authenticate access to system components. PCI Series: Requirement 8 – Identify and Authenticate Access to System Components
9 Restrict physical access to cardholder data. Red Hat Identity Management is not directly related to this section. Reference / review section nine (9) of the PCI DSS standard.
10 Track and monitor all access to network resources and cardholder data. PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data
11 Regularly test security systems and processes.
12 Maintain a policy that addresses information security for all personnel.
Requirements 11 and 12 talk about testing of the security controls. This includes scanning and monitoring, as well as best practices around the security policy itself that organizations should create and maintain. Red Hat Identity Management is not directly related to these sections.


It’s worth mentioning that while this series is focused on IdM and its ecosystem, there are other parts of the Red Hat portfolio that can help address some of the PCI DSS requirements we did not drill down into here. For example, the OpenSCAP scanner that’s integrated into Red Hat Satellite 6 allows for the regular detection of unaddressed CVEs and misconfigurations according to a defined policy. To get more information about these technologies and how they help to address PCI DSS requirements, please see the Achieving and Maintaining PCI DSS Compliance with Red Hat paper on the Red Hat site.

In closing – stay tuned for my future posts on PCI DSS.  If they’re already live – you’ll see active links in the table (above).  General questions about PCI DSS and IdM?  Feel free to reach out using the comments section (below).

by Dmitri Pal at August 30, 2016 07:17 PM

Alexander Bokovoy

Creating permissions in FreeIPA

FreeIPA has a quite flexible system for defining access rights to any resource in the LDAP store. The system consists of three different parts:

  • a permission object
  • a privilege object, and
  • a role object.

A permission object specifies the target of the access grant: which attributes of which objects in LDAP will be subject to the checks.

A privilege combines several permissions into a logical task. A role defines who can have access to privileges.

The example below is a somewhat complex use of the permission system that allows a group of administrators to manage specific hosts. We want administrators in the group ‘my-admins’ to manage hosts in ‘my-hostgroup’ but otherwise have no other privileges.

Let’s start with a host group ‘my-hostgroup’:

# ipa hostgroup-add my-hostgroup
Added hostgroup "my-hostgroup"
  Host-group: my-hostgroup

And with a group ‘my-admins’:

# ipa group-add my-admins
Added group "my-admins"
  Group name: my-admins
  GID: 903200040

A member of ‘my-admins’ should be able to edit all attributes of the hosts in the host group ‘my-hostgroup’.

To manage permissions, use the ipa permission family of commands. Start by creating a basic permission that applies to hosts:

# ipa permission-add manage-my-hostgroup --right=all --bindtype=permission --type=host
Added permission "manage-my-hostgroup"
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Type: host
  Permission flags: V2, SYSTEM

A permission automatically generates an access control instruction (ACI) in LDAP. To check all the low-level details of the permission, use the --all and --raw options:

# ipa permission-show --all --raw manage-my-hostgroup
  dn: cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test
  cn: manage-my-hostgroup
  ipapermright: all
  ipapermbindruletype: permission
  ipapermlocation: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  ipapermtargetfilter: (objectclass=ipahost)
  ipapermissiontype: V2
  ipapermissiontype: SYSTEM
  aci: (targetfilter = "(objectclass=ipahost)")
       (version 3.0; acl "permission:manage-my-hostgroup";
                     allow (all)
                     groupdn = "ldap:///cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test";)
  objectclass: ipapermission
  objectclass: top
  objectclass: groupofnames
  objectclass: ipapermissionv2

As you can see, it applies to hosts in the cn=computers,cn=accounts,$SUFFIX subtree, and the target filter is set to (objectclass=ipahost), so it would apply to any host. To further limit the permission, you have to add more target filters.

To define the raw target filter, we need to know the DN of the host group that will be our target limit:

# ipa hostgroup-show --raw --all my-hostgroup
  dn: cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test
  cn: my-hostgroup
  ipaUniqueID: 6d8c72f2-6e6d-11e6-b9e4-525400bf08fe
  mepManagedEntry: cn=my-hostgroup,cn=ng,cn=alt,dc=ipa,dc=ad,dc=test
  objectClass: ipahostgroup
  objectClass: ipaobject
  objectClass: nestedGroup
  objectClass: groupOfNames
  objectClass: top
  objectClass: mepOriginEntry

Using the DN of my-hostgroup, we can now add a filter to the permission:

# ipa permission-mod manage-my-hostgroup --filter '(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)'
Modified permission "manage-my-hostgroup"
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM

Take a look at the permission in detail:

# ipa permission-show --all --raw manage-my-hostgroup
  dn: cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test
  cn: manage-my-hostgroup
  ipapermright: all
  ipapermbindruletype: permission
  ipapermlocation: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  ipapermtargetfilter: (objectclass=ipahost)
  ipapermtargetfilter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  ipapermissiontype: V2
  ipapermissiontype: SYSTEM
  aci: (targetfilter = "(&(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)(objectclass=ipahost))")
       (version 3.0;acl "permission:manage-my-hostgroup";
        allow (all) groupdn = "ldap:///cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test";)
  objectclass: ipapermission
  objectclass: top
  objectclass: groupofnames
  objectclass: ipapermissionv2

Our ACI says: “Allow any changes to all objects of objectclass ipahost that belong to the host group my-hostgroup, for members of the permission group manage-my-hostgroup.”
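As the permission-show output illustrates, when a permission carries several ipapermtargetfilter values, FreeIPA ANDs them into a single targetfilter in the generated ACI. A small Python sketch of that combination (the helper name is hypothetical, not FreeIPA code):

```python
def combine_target_filters(filters):
    """AND multiple permission target filters into one LDAP filter,
    as seen in the generated ACI's targetfilter clause."""
    if len(filters) == 1:
        return filters[0]
    return "(&" + "".join(filters) + ")"

combine_target_filters([
    "(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)",
    "(objectclass=ipahost)",
])
# -> "(&(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)(objectclass=ipahost))"
```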

Now you can add the manage-my-hostgroup permission to a new privilege, add that privilege to a role, and then assign users of the group my-admins to that role. Those users will be able to manage hosts targeted by the permission.

Start with a privilege:

# ipa privilege-add 'manage-hostgroup-my-hostgroup'
Added privilege "manage-hostgroup-my-hostgroup"
  Privilege name: manage-hostgroup-my-hostgroup

# ipa privilege-add-permission 'manage-hostgroup-my-hostgroup'
[permission]: manage-my-hostgroup
  Privilege name: manage-hostgroup-my-hostgroup
  Permissions: manage-my-hostgroup
Number of permissions added 1

Finally, create a role, add the privilege to it, and then add the members that can use the privilege:

# ipa role-add role-manage-hostgroup-my-hostgroup
Added role "role-manage-hostgroup-my-hostgroup"
  Role name: role-manage-hostgroup-my-hostgroup

# ipa role-add-privilege role-manage-hostgroup-my-hostgroup
[privilege]: manage-hostgroup-my-hostgroup
  Role name: role-manage-hostgroup-my-hostgroup
  Privileges: manage-hostgroup-my-hostgroup
Number of privileges added 1

# ipa role-add-member role-manage-hostgroup-my-hostgroup --groups=my-admins
  Role name: role-manage-hostgroup-my-hostgroup
  Member groups: my-admins
  Privileges: manage-hostgroup-my-hostgroup
Number of members added 1

If we look at the original permission, we can see it is now an indirect member of a role:

# ipa permission-show manage-my-hostgroup
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM
  Granted to Privilege: manage-hostgroup-my-hostgroup
  Indirect Member of roles: role-manage-hostgroup-my-hostgroup

When a user is added to the my-admins group, they automatically assume a role that allows them to manage the host group:

# ipa user-add hadmin
First name: Joe
Last name: Doe
Added user "hadmin"
  User login: hadmin
  First name: Joe
  Last name: Doe
  Full name: Joe Doe
  Display name: Joe Doe
  Initials: JD
  Home directory: /home/hadmin
  GECOS: Joe Doe
  Login shell: /bin/sh
  Principal name: hadmin@IPA.AD.TEST
  Principal alias: hadmin@IPA.AD.TEST
  Email address:
  UID: 903200041
  GID: 903200041
  Password: False
  Member of groups: ipausers
  Kerberos keys available: False

# ipa group-add-member my-admins --users=hadmin
  Group name: my-admins
  GID: 903200040
  Member users: hadmin
  Roles: role-manage-hostgroup-my-hostgroup
Number of members added 1

In a real-life scenario we would probably want to tune our permission a bit more. For example, we definitely don’t want to allow full access to all attributes of the host – if users can write to the objectClass attribute, they can turn that host into anything else in LDAP. But before tuning it, we need to see whether our permission actually works:

# kinit hadmin
Password for hadmin@IPA.AD.TEST:

# ipa host-mod my-host --random
ipa: ERROR: Insufficient access: Insufficient 'write' privilege to the 'userPassword' 
            attribute of entry ',cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test'.

Oops, it does not work – we cannot write to the userPassword attribute of the host. What is wrong? To answer this question we need to look at the documentation of the LDAP server FreeIPA builds upon: 389-ds. The Red Hat Directory Server Administration Guide says the following in the section “Targeting Entries or Attributes Using LDAP Filters”:

Note: Although using LDAP filters can be useful when you are targeting entries and attributes that are spread across the directory, the results are sometimes unpredictable because filters do not directly name the object for which you are managing access. The set of entries targeted by a filtered ACI is likely to change as attributes are added or deleted. Therefore, if you use LDAP filters in ACIs, you should verify that they target the correct entries and attributes by using the same filter in an ldapsearch operation.

The documentation doesn’t say this explicitly, but when targetattr is missing, the default set of target attributes for modifications matched via a target filter is none, not *. This is done to deny modrdn (entry renames) by default.
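For comparison, an ACI that names its target attributes explicitly looks approximately like this (attribute list, ACL name, and group DN are illustrative only, following the 389-ds ACI syntax shown earlier):

```
(targetattr = "description || l || macAddress")
(targetfilter = "(objectclass=ipahost)")
(version 3.0; acl "example-with-targetattr";
 allow (write)
 groupdn = "ldap:///cn=some-group,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test";)
```

With an explicit targetattr list, write operations are matched only against exactly those attributes.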

To allow modification of the host entries, we need to list the attributes which can be modified by our host group admins. The list below is an example only: it allows setting metadata about the host, changing the one-time enrollment password and the assigned ID view, and adding certificates and SSH public keys. One needs to carefully review which attributes should be allowed to be modified.

# kinit admin
Password for admin@IPA.AD.TEST: 

# ipa permission-mod manage-my-hostgroup --attrs={'userPassword','description','l',\
'nshardwareplatform','nsosversion','macaddress','userclass','ipakrbauthzdata',\
'usercertificate','ipasshpubkey','ipaassignedidview'}
Modified permission "manage-my-hostgroup"
  Permission name: manage-my-hostgroup
  Granted rights: all
  Effective attributes: description, ipaassignedidview, ipakrbauthzdata, ipasshpubkey,
                        l, macaddress, nshardwareplatform, nsosversion, userPassword,
                        usercertificate, userclass
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM
  Granted to Privilege: manage-hostgroup-my-hostgroup
  Indirect Member of roles: role-manage-hostgroup-my-hostgroup

With these changes, our admin can now set a random one-time password:

# kinit hadmin
Password for hadmin@IPA.AD.TEST:

# ipa host-mod my-host --random
Modified host "my-host"
  Host name:
  Random password: 5Krkbj_eW7UR@SUxj0lx22
  Principal name: host/
  Principal alias: host/
  Password: True
  Member of host-groups: my-hostgroup
  Keytab: False
  Managed by:

However, this is not all. The permission we created above doesn’t answer a very important question: how does the host my-host get into the host group in the first place? We surely want to be able to add and remove hosts from the host group. But if we create a permission that allows per-hostgroup admins to add and remove members of the host group at will, they could take over any host – simply by adding it to the host group they manage.

The easiest way to solve this problem is, no surprise, organizational: do not give host group admins the rights to add hosts to the host group or remove them; only allow them to manage what’s already in the host group.

A separation of rights requires creating a separate permission with ‘add’/‘del’ rights on the ‘member’ attribute that would allow including and removing hosts. That’s easy, but it would not allow us to limit which hosts could be added to or removed from the host group.

Unfortunately, to make that possible, permission-add/permission-mod would have to be extended to allow specifying target attribute values, as described in the RHDS Administration Guide.

Even then, to define something like this, we would need specific naming of hosts to be able to specify a pattern as a ‘member’ attribute value.

An alternative is to use automembership rules, defined with the ipa automember family of commands. That might work with predictable host names, but would probably be hard to implement when host names come from an existing cloud provider where you don’t have control over the undercloud.

This is why I’m saying it is an organizational issue, not really a technical one.

August 30, 2016 05:00 AM

August 12, 2016

Fraser Tweedale

Smart card login with YubiKey NEO

In this post I give an overview of smart cards and their potential advantages, and share my adventures in using a Yubico YubiKey NEO device for smart card authentication with FreeIPA and SSSD.

Smart card overview

Smart cards with cryptographic processors and secure key storage (private key generated on-device and cannot be extracted) are an increasingly popular technology for secure system and service login, as well as for signing and encryption applications (e.g. code signing, OpenPGP). They may offer a security advantage over traditional passwords because private key operations typically require the user to enter a PIN. Therefore the smart card is two factors in one: both something I have and something I know.

The inability to extract the private key from a smart card also provides an advantage over software HOTP/TOTP tokens which, in the absence of other security measures such as an encrypted filesystem on the mobile device, allow an attacker to extract the OTP seed. And because public key cryptography is used, there is no OTP seed or password hash sitting on a server, waiting to be exfiltrated and subjected to offline attacks.

For authentication applications, a smart card carries an X.509 certificate alongside a private key. A login application would read the certificate from the card and validate it against trusted CAs (e.g. a company’s CA for issuing smart cards). Typically an OCSP or CRL check would also be performed. The login application then challenges the card to sign a nonce, and validates the signature with the public key from the certificate. A valid signature attests that the bearer of the smart card is indeed the subject of the certificate. Finally, the certificate is mapped to a user, either by looking for an exact certificate match or by extracting information about the user from the certificate.

Test environment

In my smart card investigations I had a FreeIPA server with a single Fedora 24 desktop host enrolled. alice was the user I tested with. To begin with, she had no certificates and used her password to log in.

I was doing all of my testing on virtual machines, so I had to enable USB passthrough for the YubiKey device. This is straightforward but you have to ensure the IOMMU is enabled in both BIOS and kernel (for Intel CPUs add intel_iommu=on to the kernel command line in GRUB).

In virt-manager, after you have created the VM (it doesn’t need to be running) you can Add Hardware in the Details view, then choose the YubiKey NEO device. There are no doubt virsh incantations or other ways to establish the passthrough.

Finally, on the host I stopped the pcscd smart card daemon to prevent it from interfering with passthrough:

# systemctl stop pcscd.service pcscd.socket

Provisioning the YubiKey

For general smart card provisioning steps, I recommend Nathan Kinder’s post on the topic. But the YubiKey NEO is special with its own steps to follow! First install the ykpers and yubico-piv-tool packages:

sudo dnf install -y ykpers yubico-piv-tool

If we run yubico-piv-tool to find out the version of the PIV applet, we run into a problem because a new YubiKey comes configured in OTP mode:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Failed to connect to reader.

The YubiKey NEO supports a variety of operation modes, including hybrid modes:

0    OTP device only.
1    CCID device only.
2    OTP/CCID composite device.
3    U2F device only.
4    OTP/U2F composite device.
5    U2F/CCID composite device.
6    OTP/U2F/CCID composite device.

(You can also add 80 to any of the modes to configure touch to eject, or touch to switch modes for hybrid modes).
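Put differently, the -m argument is a base mode with an optional eject/switch flag added on top. A hypothetical Python illustration of how the mode 86 mentioned below decomposes (following the "add 80" rule quoted above):

```python
EJECT_FLAG = 80  # "add 80 to any of the modes" for touch-to-eject/switch

MODES = {
    0: "OTP device only",
    1: "CCID device only",
    2: "OTP/CCID composite device",
    3: "U2F device only",
    4: "OTP/U2F composite device",
    5: "U2F/CCID composite device",
    6: "OTP/U2F/CCID composite device",
}

def mode_arg(base, touch_eject=False):
    """Compose the numeric argument passed to ykpersonalize -m."""
    if base not in MODES:
        raise ValueError("unknown YubiKey NEO mode: %r" % base)
    return base + EJECT_FLAG if touch_eject else base

mode_arg(6, touch_eject=True)  # -> 86, the hybrid mode tried first
mode_arg(1)                    # -> 1, plain CCID, the mode that worked
```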

We need to put the YubiKey into CCID (Chip Card Interface Device, a standard USB protocol for smart cards) mode. I originally configured the YubiKey in mode 86 but could not get the card to work properly with USB passthrough to the virtual machine. Whether this was caused by the eject behaviour or the fact that it was a hybrid mode I do not know, but reconfiguring it to mode 1 (CCID only) allowed me to use the card on the guest.

[dhcp-40-8:~] ftweedal% ykpersonalize -m 1
Firmware version 3.4.6 Touch level 1541 Program sequence 1

The USB mode will be set to: 0x1

Commit? (y/n) [n]: y

Now yubico-piv-tool can see the card:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Application version 1.0.4 found.

Now we can initialise the YubiKey by setting a new management key, PIN and PIN Unblocking Key (PUK). As you can probably guess, the management key protects actions like generating keys and importing certificates, the PIN protects private key operations in regular use, and the PUK is kind of in between, allowing the PIN to be reset if the maximum number of attempts is exceeded. The current (default) PIN and PUK need to be given in order to reset them.

% KEY=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
% echo $KEY
% yubico-piv-tool -a set-mgm-key -n $KEY
Successfully set new management key.

% PIN=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
% echo $PIN
% yubico-piv-tool -a change-pin -P 123456 -N $PIN
Successfully changed the pin code.

% PUK=`dd if=/dev/random bs=1 count=8 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
% echo $PUK
% yubico-piv-tool -a change-puk -P 12345678 -N $PUK
Successfully changed the puk code.
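The dd/hexdump pipelines above are one way to produce random digits; for completeness, a stdlib-Python equivalent for generating a PIV PIN or PUK (the helper is mine, not part of any Yubico tooling):

```python
import secrets

def random_digits(length):
    """Return a random numeric secret of the given length, e.g. a
    6-digit PIV PIN or an 8-digit PUK."""
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

pin = random_digits(6)
puk = random_digits(8)
```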

Next we must generate a private/public keypair on the smart card. Various slots are available for different purposes, with different PIN-checking behaviour. The Certificate slots page on the Yubico wiki gives the full details. We will use slot 9e which is for Card Authentication (PIN is not needed for private key operations). It is necessary to provide the management key on the command line, but the program also prompts for it (I’m not sure why this is the case).

% yubico-piv-tool -k $KEY -a generate -s 9e
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
-----END PUBLIC KEY-----
Successfully generated a new private key.

We then use this key to create a certificate signing request (CSR) via yubico-piv-tool. Although slot 9e does not require the PIN, other slots do require it, so I’ve included the verify-pin action for completeness:

% yubico-piv-tool -a verify-pin \
    -a request-certificate -s 9e -S "/CN=alice/"
Enter PIN: 167246
Successfully verified PIN.
Please paste the public key...
-----END PUBLIC KEY-----

yubico-piv-tool -a request-certificate is not very flexible; for example, it cannot create a CSR with request extensions such as including the user’s email address or Kerberos principal name in the Subject Alternative Name extension. For such non-trivial use cases, openssl req or other programs can be used instead, with a PKCS #11 module providing access to the smart card’s signing capability. Nathan Kinder’s post provides full details.

With CSR in hand, alice can now request a certificate from the IPA CA. I have covered this procedure in previous articles so I’ll skip it here, except to add that it is necessary to use a profile that saves the newly issued certificate to the subject’s userCertificate LDAP attribute. This is how SSSD matches certificates in smart cards with users.

Once we have the certificate (in file alice.pem) we can import it onto the card:

% yubico-piv-tool -k $KEY -a import-certificate -s 9e -i alice.pem
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
Successfully imported a new certificate.

Configuring smart card login

OpenSC provides a PKCS #11 module for interfacing with PIV smart cards, among other things:

# dnf install -y opensc

Enable smart card authentication in /etc/sssd/sssd.conf (pam_cert_auth belongs in the [pam] section):

pam_cert_auth = True

Then restart SSSD:

# systemctl restart sssd

Next, enable the OpenSC PKCS #11 module in the system NSS database:

# modutil -dbdir /etc/pki/nssdb \
    -add "OpenSC" -libfile

We also need to add the IPA CA cert to the system NSSDB. This will allow SSSD to validate certificates from smart cards. If smart card certificates are issued by a sub-CA or an external CA, import that CA’s certificate instead.

# certutil -d /etc/ipa/nssdb -L -n 'IPA.LOCAL IPA CA' -a \
  | certutil -d /etc/pki/nssdb -A -n 'IPA.LOCAL IPA CA' -t 'CT,C,C'

One hiccup I had was that SSSD could not talk to the OCSP server indicated in the Authority Information Access extension on the certificate (due to my DNS not being set up correctly). I had to tell SSSD not to perform OCSP checks. The sssd.conf snippet follows. Do not do this in a production environment.

certificate_verification = no_ocsp

That’s pretty much all there is to it. After this, I was able to log in as alice using the YubiKey NEO. When logging in with the card inserted, instead of being prompted for a password, GDM prompts for the PIN. Enter the PIN, and it lets you in!

Screenshot of login PIN prompt


I mentioned (or didn’t mention) a few standards related to smart card authentication. A quick review of them is warranted:

  • CCID is a USB smart card interface standard.
  • PIV (Personal Identity Verification) is a smart card standard from NIST. It defines the slots, PIN behaviour, etc.
  • PKCS #15 is a token information format. OpenSC provides a PKCS #15 emulation layer for PIV cards.
  • PKCS #11 is a software interface to cryptographic tokens. Token and HSM vendors provide PKCS #11 modules for their devices. OpenSC provides a PKCS #11 interface to PKCS #15 tokens (including emulated PIV tokens).

It is appropriate to mention pam_pkcs11, which is also part of the OpenSC project, as an alternative to SSSD. More configuration is involved, but if you don’t have (or don’t want) an external identity management system it looks like a good approach.

You might remember that I was using slot 9e which doesn’t require a PIN, yet I was still prompted for a PIN when logging in. There are a couple of issues to tease apart here. The first issue is that although PIV cards do not require the PIN for private key operations on slot 9e, the PKCS #11 module does not correctly report this. As an alternative to OpenSC, Yubico provide their own PKCS #11 module called YKCS11 as part of yubico-piv-tool but modutil did not like it. Nevertheless, a peek at its source code leads me to believe that it too declares that the PIN is required regardless of the slot in use. I could not find much discussion of this discrepancy so I will raise some tickets and hopefully it can be addressed.

The second issue is that SSSD requires the PIN and uses it to log into the token, even if the token says that a PIN is not required. Again, I will start a discussion to see if this is really the intended behaviour (perhaps it is).

The YubiKey NEO features a wireless (NFC) interface. I haven’t played with it yet, but all the smart card features are available over that interface. This lends weight to fixing the issues preventing PIN-less usage.

A final thought I have about the user experience is that it would be nice if user information could be derived or looked up based on the certificate(s) in the smart card, and a user automatically selected, instead of having to first specify "I am alice" or whoever. The information is there on the card after all, and it is one less step for users to perform. If PIN-less usage can be addressed, it would mean that a user can just approach a machine, plug in their smart card and hi ho, off to work they go. There are some indications that this does work with GDM and pam_pkcs11, so if you know how to get it going with SSSD I would love to know!

by ftweedal at August 12, 2016 02:55 AM

August 11, 2016

Adam Young

Tripleo HA Federation Proof-of-Concept

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes.

A federation deployment requires changes to the network topology, Keystone, the HTTPD service, and Horizon. The various OpenStack deployment tools will have their own ways of applying these changes. While this proof-of-concept can’t be called production-ready, it does demonstrate that TripleO can support Federation using SAML. From this proof-of-concept, we should be able to deduce the steps needed for a production deployment.


  • Single physical node – Large enough to run multiple virtual machines.  I only ended up using 3, but scaled up to 6 at one point and ran out of resources.  Tested with 8 CPUs and 32 GB RAM.
  • Centos 7.2 – Running as the base operating system.
  • FreeIPA – Particularly, the CentOS repackage of Red Hat Identity Management. Running on the base OS.
  • Keycloak – Actually an alpha build of Red Hat SSO, running on the base OS. This was fronted by Apache HTTPD, and proxied through ajp://localhost:8109. This gave me HTTPS support using the CA Certificate from the IPA server.  This will be important later when the controller nodes need to talk to the identity provider to set up metadata.
  • Tripleo Quickstart – deployed in HA mode, using an undercloud.
    • ./ --config config/general_config/ha.yml ayoung-dell-t1700.test

In addition, I did some sanity checking of the cluster by deploying the overcloud using the quickstart helper script, and tearing it down using heat stack-delete overcloud.

Reproducing Results

When doing development testing, you can expect to rebuild and tear down your cloud on a regular basis.  When you redeploy, you want to make sure that the changes are just the delta from what you tried last time.  As the number of artifacts grew, I found I needed to maintain a repository of files that included the environment passed to openstack overcloud deploy.  To manage these, I created a git repository in /home/stack/deployment. Inside that directory, I copied the deploy script and the deploy_env.yml file generated by the overcloud deployment, and modified them accordingly.

In my version of the deploy script, I wanted to remove the deploy_env.yml generation, to avoid confusion during later deployments.  I also wanted to preserve the environment file across deployments (and did not want it in /tmp). This file has three parts: the Keystone configuration values, HTTPS/Network setup, and configuration for a single-node deployment. This last part was essential for development, as chasing down fixes across three HA nodes was time-consuming and error-prone. The DNS server value I used is particular to my deployment, and reflects the IPA server running on the base host.

For reference, I’ve included those files at the end of this post.

Identity Provider Registration and Metadata

While it would have been possible to run the registration of the identity provider on one of the nodes, the Heat-managed deployment process does not provide a clean way to gather those files and package them for deployment to other nodes.  Although I eventually deployed on a single node for development, it took me a while to realize I could do that; by then I had already worked out an approach that calls the registration from the undercloud node and produces a tarball.

As a result, I created a script, again to allow for reproducing this in the future:


basedir=$(dirname $0)
ipa_domain=`hostname -d`

keycloak-httpd-client-install \
   --client-originate-method registration \
   --force \
   --mellon-https-port 5000 \
   --mellon-hostname openstack.$ipa_domain  \
   --mellon-root '/v3' \
   --keycloak-server-url https://identity.$ipa_domain  \
   --keycloak-auth-role root-admin \
   --keycloak-admin-password  $rhsso_master_admin_password \
   --app-name v3 \
   --keycloak-realm openstack \
   --mellon-https-port 5000 \
   --log-file $basedir/rhsso.log \
   --httpd-dir $basedir/rhsso/etc/httpd \
   -l "/v3/auth/OS-FEDERATION/websso/saml2" \
   -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/websso" \
   -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/auth"

This does not quite generate the right paths, as it turns out that the $basename is not quite what we want, so I had to post-edit the generated file: rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf

Specifically, the path:

has to be changed to:

While I created a tarball that I then manually deployed, the preferred approach would be to use tripleo-heat-templates/puppet/deploy-artifacts.yaml to deploy them. The problem I faced is that the generated files include Apache module directives from mod_auth_mellon.  If mod_auth_mellon has not been installed into the controller, the Apache server won’t start, and the deployment will fail.

Federation Operations

The Federation setup requires a few calls. I documented them in Rippowam, and attempted to reproduce them locally using Ansible and the Rippowam code. I was not a purist though, as A) I needed to get this done and B) the end solution is not going to use Ansible anyway. The general steps I performed:

  • yum install mod_auth_mellon
  • Copy over the metadata tarball, expand it, and tweak the configuration (could be done prior to building the tarball).
  • Run the following commands.
openstack identity provider create --remote-id https://identity.{{ ipa_domain }}/auth/realms/openstack
openstack mapping create --rules ./mapping_rhsso_saml2.json rhsso_mapping
openstack federation protocol create --identity-provider rhsso --mapping rhsso_mapping saml2

The mapping file is the one from Rippowam.

The keystone service calls only need to be performed once, as they are stored in the database. The expansion of the tarball needs to be performed on every node.


As in previous Federation setups, I needed to modify the values used for WebSSO. The values I ended up setting in /etc/openstack-dashboard/local_settings resembled this:

OPENSTACK_KEYSTONE_URL = "https://openstack.ayoung-dell-t1700.test:5000/v3"
    ("saml2", _("Rhsso")),
    ("credentials", _("Keystone Credentials")),

Important: Make sure that the auth URL is using a FQDN name that matches the value in the signed certificate.

Redirect Support for SAML

Several differences between how HTTPD and HAProxy operate require certain configuration modifications.  Keystone runs internally over HTTP, not HTTPS.  However, the SAML Identity Providers are public, transmit cryptographic data, and need to be protected using HTTPS.  As a result, HAProxy needs to expose an HTTPS-based endpoint for the Keystone public service.  In addition, the redirects that come from mod_auth_mellon need to reflect the public protocol, hostname, and port.

The solution I ended up with involved changes on both sides:

In haproxy.cfg, I modified the keystone public stanza so it looks like this:

listen keystone_public
bind transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind transparent
redirect scheme https code 301 if { hdr(host) -i } !{ ssl_fc }
rsprep ^Location:\ http://(.*) Location:\ https://\1
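The rsprep line rewrites any Location header that would otherwise leak the internal http:// scheme in a redirect. Its effect is equivalent to this regular-expression substitution (a sketch; the hostname is illustrative):

```python
import re

def rewrite_location(header_line: str) -> str:
    # Mirrors: rsprep ^Location:\ http://(.*) Location:\ https://\1
    return re.sub(r"^Location: http://(.*)", r"Location: https://\1", header_line)

print(rewrite_location("Location: http://openstack.example.test:5000/v3"))
# Location: https://openstack.example.test:5000/v3
```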

While this was necessary, it also proved to be insufficient. When the signed assertion from the Identity Provider is posted to the Keystone server, mod_auth_mellon checks that the destination value matches what it expects the hostname should be. Consequently, in order to get this to match in the file:


I had to set the following:

ServerName https://openstack.ayoung-dell-t1700.test

Note that the protocol is set to https even though the Keystone server is handling HTTP. This might break elsewhere. If it does, then the Keystone configuration in Apache may have to be duplicated.

Federation Mapping

For the WebSSO login to successfully complete, the user needs to have a role on at least one project. The Rippowam mapping file maps the user to the Member role in the demo group, so the most straightforward steps to complete are to add a demo group, add a demo project, and assign the Member role on the demo project to the demo group. All this should be done with a v3 token:

openstack group create demo
openstack role create Member
openstack project create demo
openstack role add --group demo --project demo Member
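Conceptually, a Keystone federation mapping substitutes attributes from the SAML assertion into local user and group templates. The toy evaluator below illustrates that substitution; it is not Keystone's actual mapping engine, and the simplified rule is illustrative rather than the real Rippowam file:

```python
import json

def apply_rule(rule, assertion):
    """Substitute assertion values into a rule's local templates.
    Toy model only: the real engine also handles matching conditions,
    whitelists/blacklists, and multiple rules."""
    values = [assertion[r["type"]] for r in rule["remote"]]
    def subst(obj):
        if isinstance(obj, dict):
            return {k: subst(v) for k, v in obj.items()}
        if isinstance(obj, str):
            return obj.format(*values)
        return obj
    return [subst(local) for local in rule["local"]]

# Simplified rule in the spirit of the Rippowam mapping (illustrative).
rule = json.loads('''{
  "remote": [{"type": "MELLON_NAME_ID"}],
  "local": [{"user": {"name": "{0}"}}, {"group": {"name": "demo"}}]
}''')
print(apply_rule(rule, {"MELLON_NAME_ID": "alice"}))
# [{'user': {'name': 'alice'}}, {'group': {'name': 'demo'}}]
```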

Complete helper files

Below are the complete files that were too long to put inline.

# Simple overcloud deploy script

set -eux

# Source in undercloud credentials.
source /home/stack/stackrc

# Wait until there are hypervisors available.
while true; do
    count=$(openstack hypervisor stats show -c count -f value)
    if [ $count -gt 0 ]; then
        break
    fi
    sleep 30  # polling interval is an assumption; the original value was elided
done


# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/deployment/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server -e $HOME/deployment/deploy_env.yaml   --force-postconfig "$@"    || deploy_status=1

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
    deploy_status=1
    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do heat deployment-show $failed > failed_deployment_$failed.log
    done
fi

exit $deploy_status


    keystone::using_domain_config: true
        value: true
        value: external,password,token,oauth1,saml2
        value: http://openstack.ayoung-dell-t1700.test/dashboard/auth/websso/
        value: /etc/keystone/sso_callback_template.html
        value: MELLON_IDP

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1
  CloudName: openstack.ayoung-dell-t1700.test
  CloudDomain: ayoung-dell-t1700.test

  #TLS Setup from enable-tls.yaml
  PublicVirtualFixedIPs: [{'ip_address':''}]
  SSLCertificate: |
    #certificate removed for space
    -----END CERTIFICATE-----

    The contents of your certificate go here
  SSLIntermediateCertificate: ''
  SSLKey: |
    #key removed for space
    -----END RSA PRIVATE KEY-----

    AodhAdmin: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhInternal: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhPublic: {protocol: 'https', port: '13042', host: 'CLOUDNAME'}
    CeilometerAdmin: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerInternal: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerPublic: {protocol: 'https', port: '13777', host: 'CLOUDNAME'}
    CinderAdmin: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderInternal: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderPublic: {protocol: 'https', port: '13776', host: 'CLOUDNAME'}
    GlanceAdmin: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlanceInternal: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlancePublic: {protocol: 'https', port: '13292', host: 'CLOUDNAME'}
    GnocchiAdmin: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiInternal: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiPublic: {protocol: 'https', port: '13041', host: 'CLOUDNAME'}
    HeatAdmin: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatInternal: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatPublic: {protocol: 'https', port: '13004', host: 'CLOUDNAME'}
    HorizonPublic: {protocol: 'https', port: '443', host: 'CLOUDNAME'}
    KeystoneAdmin: {protocol: 'http', port: '35357', host: 'IP_ADDRESS'}
    KeystoneInternal: {protocol: 'http', port: '5000', host: 'IP_ADDRESS'}
    KeystonePublic: {protocol: 'https', port: '13000', host: 'CLOUDNAME'}
    NeutronAdmin: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronInternal: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronPublic: {protocol: 'https', port: '13696', host: 'CLOUDNAME'}
    NovaAdmin: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaInternal: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}
    NovaEC2Admin: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Internal: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Public: {protocol: 'https', port: '13773', host: 'CLOUDNAME'}
    NovaVNCProxyAdmin: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}
    SaharaAdmin: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaInternal: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaPublic: {protocol: 'https', port: '13386', host: 'CLOUDNAME'}
    SwiftAdmin: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftInternal: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftPublic: {protocol: 'https', port: '13808', host: 'CLOUDNAME'}

  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml

   ControllerCount: 1 

by Adam Young at August 11, 2016 05:53 PM

August 10, 2016

Rich Megginson

How to do python dict setdefault with ruby hashes

setdefault is a very useful Python dict method.
Python 2.7.11 (default, Jul  8 2016, 19:45:00) 
[GCC 5.3.1 20160406 (Red Hat 5.3.1-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dd = {}
>>> dd.setdefault('a', {}).setdefault('b', {})['c'] = 'd'
>>> dd
{'a': {'b': {'c': 'd'}}}
>>> dd.setdefault('a', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}}
>>> dd.setdefault('g', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}, 'g': {'b': {'e': 'f'}}}

You can do the same thing in Ruby with a little hackery.
irb(main):001:0> dd = {}
=> {}
irb(main):002:0> ((dd['a'] ||= {})['b'] ||= {})['c'] = 'd'
=> "d"
irb(main):003:0> dd
=> {"a"=>{"b"=>{"c"=>"d"}}}
irb(main):004:0> ((dd['a'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):005:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}}
irb(main):006:0> ((dd['g'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):007:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}, "g"=>{"b"=>{"e"=>"f"}}}

August 10, 2016 04:38 PM

August 03, 2016

James Shubin

Seen in downtown Montreal…

The Technical Blog of James was seen on an outdoor electronic display in downtown Montreal! Thanks to one of my readers for sending this in.

I guess the smart phone revolution is over, and people are taking to reading my articles on bigger screens!

The “poutine” is decent proof that this is probably Montreal.

If you’ve got access to a large electronic display, put up the blog, snap a photo, and send it my way! I’ll post it here and send you some random stickers!

Happy Hacking,


PS: If you have some comments about this blog, please don’t be shy, send them my way.

by purpleidea at August 03, 2016 05:59 AM

July 26, 2016

Fraser Tweedale

FreeIPA Lightweight CA internals

In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.

Full details of the design of the feature can be found on the design page. This post does not cover everything from the design page, but we will look at the aspects that are covered from the perspective of the system administrator, i.e. "what is happening on my systems?"

Dogtag lightweight CA creation

The PKI system used by FreeIPA is called Dogtag. It is a separate project with its own interfaces; most FreeIPA certificate management features are simply reflecting a subset of the corresponding Dogtag interface, often integrating some additional access controls or identity management concepts. This is certainly the case for FreeIPA sub-CAs. The Dogtag lightweight CAs feature was implemented initially to support the FreeIPA use case, yet not all aspects of the Dogtag feature are used in FreeIPA as of v4.4, and other consumers of the Dogtag feature are likely to emerge (in particular: OpenStack).

The Dogtag lightweight CAs feature has its own design page which documents the feature in detail, but it is worth mentioning some important aspects of the Dogtag feature and their impact on how FreeIPA uses the feature.

  • Dogtag lightweight CAs are managed via a REST API. The FreeIPA framework uses this API to create and manage lightweight CAs, using the privileged RA Agent certificate to authenticate. In a future release we hope to remove the RA Agent and authenticate as the FreeIPA user using GSS-API proxy credentials.
  • Each CA in a Dogtag instance, including the "main" CA, has an LDAP entry with object class authority. The schema includes fields such as subject and issuer DN, certificate serial number, and a UUID primary key, which is randomly generated for each CA. When FreeIPA creates a CA, it stores this UUID so that it can map the FreeIPA CA’s common name (CN) to the Dogtag authority ID in certificate requests or other management operations (e.g. CA deletion).
  • The "nickname" of the lightweight CA signing key and certificate in Dogtag’s NSSDB is the nickname of the "main" CA signing key, with the lightweight CA’s UUID appended. In general operation FreeIPA does not need to know this, but the ipa-certupdate program has been enhanced to set up Certmonger tracking requests for FreeIPA-managed lightweight CAs and therefore it needs to know the nicknames.
  • Dogtag lightweight CAs may be nested, but FreeIPA as of v4.4 does not make use of this capability.
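The nickname convention from the list above is easy to state in code. A sketch, assuming the conventional main CA nickname caSigningCert cert-pki-ca:

```python
MAIN_CA_NICKNAME = "caSigningCert cert-pki-ca"

def lightweight_ca_nickname(authority_id: str) -> str:
    """NSSDB nickname of a lightweight CA's signing key and certificate:
    the main CA's nickname with the authority UUID appended."""
    return f"{MAIN_CA_NICKNAME} {authority_id}"

print(lightweight_ca_nickname("660ad30b-7be4-4909-aa2c-2c7d874c84fd"))
# caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
```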

So, let’s see what actually happens on a FreeIPA server when we add a lightweight CA. We will use the sc example from the previous post. The command executed to add the CA, with its output, was:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
Created CA "sc"
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The LDAP entry added to the Dogtag database was:

dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 63
objectClass: authority
objectClass: top
cn: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
authorityKeyHost: f24b-0.ipa.local:443
authorityEnabled: TRUE
authorityDN: CN=Smart Card CA,O=IPA.LOCAL
authorityParentDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
authorityParentID: d3e62e89-df27-4a89-bce4-e721042be730

We see the authority UUID in the authorityID attribute as well as cn and the DN. authorityKeyNickname records the nickname of the signing key in Dogtag’s NSSDB. authorityKeyHost records which hosts possess the signing key – currently just the host on which the CA was created. authoritySerial records the serial number of the certificate (more on that later). The meaning of the rest of the fields should be clear.

If we have a peek into Dogtag’s NSSDB, we can see the new CA’s certificate:

# certutil -d /etc/pki/pki-tomcat/alias -L

Certificate Nickname              Trust Attributes

caSigningCert cert-pki-ca         CTu,Cu,Cu
auditSigningCert cert-pki-ca      u,u,Pu
Server-Cert cert-pki-ca           u,u,u
caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd u,u,u
ocspSigningCert cert-pki-ca       u,u,u
subsystemCert cert-pki-ca         u,u,u

There it is, alongside the main CA signing certificate and other certificates used by Dogtag. The trust flags u,u,u indicate that the private key is also present in the NSSDB. If we pretty print the certificate we will see a few interesting things:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd'
        Version: 3 (0x2)
        Serial Number: 63 (0x3f)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201606201330"
            Not Before: Fri Jul 15 05:46:00 2016
            Not After : Tue Jul 15 05:46:00 2036
        Subject: "CN=Smart Card CA,O=IPA.LOCAL"
        Signed Extensions:
            Name: Certificate Basic Constraints
            Critical: True
            Data: Is a CA with no maximum path length.

Observe that:

  • The certificate is indeed a CA.
  • The serial number (63) agrees with the CA’s LDAP entry.
  • The validity period is 20 years, the default for CAs in Dogtag. This cannot be overridden on a per-CA basis right now, but addressing this is a priority.

Finally, let’s look at the raw entry for the CA in the FreeIPA database:

dn: cn=sc,cn=cas,cn=ca,dc=ipa,dc=local
cn: sc
ipaCaIssuerDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
objectClass: ipaca
objectClass: top
ipaCaSubjectDN: CN=Smart Card CA,O=IPA.LOCAL
ipaCaId: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
description: Smart Card CA

We can see that this entry also contains the subject and issuer DNs, and the ipaCaId attribute holds the Dogtag authority ID, which allows the FreeIPA framework to dereference the local ID (sc) to the Dogtag ID as needed. We also see that the description attribute is local to FreeIPA; Dogtag also has a description attribute for lightweight CAs but FreeIPA uses its own.

Lightweight CA replication

FreeIPA servers replicate objects in the FreeIPA directory among themselves, as do Dogtag replicas (note: in Dogtag, the term clone is often used). All Dogtag instances in a replicated environment need to observe changes to lightweight CAs (creation, modification, deletion) that were performed on another replica and update their own view so that they can respond to requests consistently. This is accomplished via an LDAP persistent search which is run in a monitor thread. Care was needed to avoid race conditions. Fortunately, the solution for LDAP-based profile storage provided a fine starting point for the authority monitor; although lightweight CAs are more complex, many of the same race conditions can occur and these were already addressed in the LDAP profile monitor implementation.

But unlike LDAP-based profiles, a lightweight CA consists of more than just an LDAP object; there is also the signing key. The signing key lives in Dogtag’s NSSDB and for security reasons cannot be transported through LDAP. This means that when a Dogtag clone observes the addition of a lightweight CA, an out-of-band mechanism to transport the signing key must also be triggered.

This mechanism is covered in the design pages but the summarised process is:

  1. A Dogtag clone observes the creation of a CA on another server and starts a KeyRetriever thread. The KeyRetriever is implemented as part of Dogtag, but it is configured to run the /usr/libexec/ipa/ipa-pki-retrieve-key program, which is part of FreeIPA. The program is invoked with arguments of the server to request the key from (this was stored in the authorityKeyHost attribute mentioned earlier), and the nickname of the key to request.
  2. ipa-pki-retrieve-key requests the key from the Custodia daemon on the source server. It authenticates as the dogtag/<requestor-hostname>@REALM service principal. If authenticated and authorised, the Custodia daemon exports the signing key from Dogtag’s NSSDB wrapped by the main CA’s private key, and delivers it to the requesting server. ipa-pki-retrieve-key outputs the wrapped key then exits.
  3. The KeyRetriever reads the wrapped key and imports (unwraps) it into the Dogtag clone’s NSSDB. It then initialises the Dogtag CA’s Signing Unit allowing the CA to service signing requests on that clone, and adds its own hostname to the CA’s authorityKeyHost attribute.

Some excerpts of the CA debug log on the clone (not the server on which the sub-CA was first created) shows this process in action. The CA debug log is found at /var/log/pki/pki-tomcat/ca/debug. Some irrelevant messages have been omitted.

[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: ADD
[25/Jul/2016:15:45:56][authorityMonitor]: readAuthority: new entryUSN = 109
[25/Jul/2016:15:45:56][authorityMonitor]: CertificateAuthority init 
[25/Jul/2016:15:45:56][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:45:56][authorityMonitor]: SigningUnit init: debug Certificate object not found
[25/Jul/2016:15:45:56][authorityMonitor]: CA signing key and cert not (yet) present in NSSDB
[25/Jul/2016:15:45:56][authorityMonitor]: Starting KeyRetrieverRunner thread

Above we see the authorityMonitor thread observe the addition of a CA. It adds the CA to its internal map and attempts to initialise it, which fails because the key and certificate are not available, so it starts a KeyRetrieverRunner in a new thread.

[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Running ExternalProcessKeyRetriever
[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: About to execute command: [/usr/libexec/ipa/ipa-pki-retrieve-key, caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd, f24b-0.ipa.local]

The KeyRetrieverRunner thread invokes ipa-pki-retrieve-key with the nickname of the key it wants, and a host from which it can retrieve it. If a CA has multiple sources, the KeyRetrieverRunner will try these in order with multiple invocations of the helper, until one succeeds. If none succeed, the thread goes to sleep and retries when it wakes up, initially after 10 seconds and backing off exponentially thereafter.
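The retry schedule (10 seconds initially, then exponential backoff) can be sketched as a generator. The doubling factor and the 10-minute cap here are assumptions for illustration, since only the initial delay is stated; they are not necessarily Dogtag's actual values:

```python
import itertools

def retry_delays(initial=10, factor=2, maximum=600):
    """Yield successive retry intervals: initial delay first, then
    exponential backoff capped at maximum (values are assumptions)."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, maximum)

print(list(itertools.islice(retry_delays(), 8)))
# [10, 20, 40, 80, 160, 320, 600, 600]
```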

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Importing key and cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Reinitialising SigningUnit
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got token Internal Key Storage Token by name
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got private key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got public key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

The key retriever successfully returned the key data and import succeeded. The signing unit then gets initialised.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Adding self to authorityKeyHosts attribute
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: In LdapBoundConnFactory::getConn()
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: new entryUSN = 361
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: nsUniqueId = 4dd42782-4a4f11e6-b003b01c-c8916432
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: MODIFY
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: new entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: known entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: data is current

Finally, the Dogtag clone adds itself to the CA’s authorityKeyHosts attribute. The authorityMonitor observes this change but ignores it because its view is current.

Certificate renewal

CA signing certificates will eventually expire, and therefore require renewal. Because the FreeIPA framework operates with low privileges, it cannot add a Certmonger tracking request for sub-CAs when it creates them. Furthermore, although the renewal (i.e. the actual signing of a new certificate for the CA) should only happen on one server, the certificate must be updated in the NSSDB of all Dogtag clones.

As mentioned earlier, the ipa-certupdate command has been enhanced to add Certmonger tracking requests for FreeIPA-managed lightweight CAs. The actual renewal will only be performed on whichever server is the renewal master when Certmonger decides it is time to renew the certificate (assuming that the tracking request has been added on that server).

Let’s run ipa-certupdate on the renewal master to add the tracking request for the new CA. First observe that the tracking request does not exist yet:

# getcert list -d /etc/pki/pki-tomcat/alias |grep subject
        subject: CN=CA Audit,O=IPA.LOCAL 201606201330
        subject: CN=OCSP Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=CA Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=f24b-0.ipa.local,O=IPA.LOCAL 201606201330

As expected, we do not see our sub-CA certificate above. After running ipa-certupdate the following tracking request appears:

Request ID '20160725222909':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB',pin set
        certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB'
        CA: dogtag-ipa-ca-renew-agent
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=Smart Card CA,O=IPA.LOCAL
        expires: 2036-07-15 05:46:00 UTC
        key usage: digitalSignature,nonRepudiation,keyCertSign,cRLSign
        pre-save command: /usr/libexec/ipa/certmonger/stop_pkicad
        post-save command: /usr/libexec/ipa/certmonger/renew_ca_cert "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd"
        track: yes
        auto-renew: yes

As for updating the certificate in each clone’s NSSDB, Dogtag itself takes care of that. All that is required is for the renewal master to update the CA’s authoritySerial attribute in the Dogtag database. The renew_ca_cert Certmonger post-renewal hook script performs this step. Each Dogtag clone observes the update (in the monitor thread), looks up the certificate with the indicated serial number in its certificate repository (a new entry that will also have been recently replicated to the clone), and adds that certificate to its NSSDB. Again, let’s observe this process by forcing a certificate renewal:

# getcert resubmit -i 20160725222909
Resubmitting "20160725222909" to "dogtag-ipa-ca-renew-agent".

After about 30 seconds the renewal process is complete. When we examine the certificate in the NSSDB we see, as expected, a new serial number:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd" \
    | grep -i serial
        Serial Number: 74 (0x4a)

We also see that the renew_ca_cert script has updated the serial in Dogtag’s database:

# ldapsearch -D cn="Directory Manager" -w4me2Test -b o=ipaca \
    '(cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd)' authoritySerial
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 74

Finally, if we look at the CA debug log on the clone, we’ll see that the authority monitor observes the serial number change and updates the certificate in its own NSSDB (again, some irrelevant or low-information messages have been omitted):

[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: Processed change controls.
[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: MODIFY
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: new entryUSN = 1832
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: known entryUSN = 361
[26/Jul/2016:10:43:28][authorityMonitor]: CertificateAuthority init 
[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL
[26/Jul/2016:10:43:28][authorityMonitor]: Updating certificate in NSSDB; new serial number: 74

When the authority monitor processes the change, it reinitialises the CA including its signing unit. Then it observes that the serial number of the certificate in its NSSDB differs from the serial number from LDAP. It pulls the certificate with the new serial number from its certificate repository, imports it into NSSDB, then reinitialises the signing unit once more and sees the correct serial number:
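The monitor's decision procedure can be summarised in a short sketch (simplified, with hypothetical callbacks; not Dogtag's actual implementation):

```python
def on_modify(known_usn, new_usn, nssdb_serial, ldap_serial, reinit, import_cert):
    """What an authority monitor does when a MODIFY change control arrives."""
    if new_usn <= known_usn:
        return 'data is current'        # change already applied; ignore it
    reinit()                            # reinitialise the CA and its signing unit
    if nssdb_serial != ldap_serial:
        import_cert(ldap_serial)        # pull renewed cert from repository into NSSDB
        reinit()                        # reinit once more; serials now agree
    return 'updated'
```

With the entryUSN and serial values from the logs above (361 vs 1832, serial 63 vs 74), this sketch takes the reinit-import-reinit path.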

[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 74
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

Currently this update mechanism is only used for lightweight CAs, but it would work just as well for the main CA too, and we plan to switch at some stage so that the process is consistent for all CAs.

Wrapping up

I hope you have enjoyed this tour of some of the lightweight CA internals, and in particular seeing how the design actually plays out on your systems in the real world.

FreeIPA lightweight CAs has been the most complex and challenging project I have ever undertaken. It took the best part of a year from early design and proof of concept, to implementing the Dogtag lightweight CAs feature, then FreeIPA integration, and numerous bug fixes, refinements or outright redesigns along the way. Although there are still some rough edges, some important missing features and, I expect, many an RFE to come, I am pleased with what has been delivered and the overall design.

Thanks are due to all of my colleagues who contributed to the design and review of the feature; each bit of input from all of you has been valuable. I especially thank Ade Lee and Endi Dewata from the Dogtag team for their help with API design and many code reviews over a long period of time, and from the FreeIPA team Jan Cholasta and Martin Babinsky for their invaluable input into the design, and much code review and testing. I could not have delivered this feature without your help; thank you for your collaboration!

by ftweedal at July 26, 2016 02:01 AM

July 25, 2016

Fraser Tweedale

Lightweight Sub-CAs in FreeIPA 4.4

Last year FreeIPA 4.2 brought us some great new certificate management features, including custom certificate profiles and user certificates. The upcoming FreeIPA 4.4 release builds upon this groundwork and introduces lightweight sub-CAs, a feature that lets admins mint new CAs under the main FreeIPA CA and allows certificates for different purposes to be issued in different certificate domains. In this post I will review the use cases and demonstrate the process of creating, managing and issuing certificates from sub-CAs. (A follow-up post will detail some of the mechanisms that operate behind the scenes to make the feature work.)

Use cases

Currently, all certificates issued by FreeIPA are issued by a single CA. Say you want to issue certificates for various purposes: regular server certificates, user certificates for VPN authentication, and user certificates for authentication to a particular web service. Assuming a certificate bears the appropriate Key Usage and Extended Key Usage extensions (with the default profile, it does), a certificate issued for one of these purposes could be used for all of the others.

Issuing certificates for particular purposes (especially client authentication scenarios) from a sub-CA allows an administrator to configure the endpoint that authenticates the clients to use the immediate issuer certificate for validating client certificates. If you had one sub-CA for issuing VPN authentication certificates and another for issuing certificates for authenticating to the web service, you could configure these services to accept certificates issued by the relevant CA only. Thus, where previously the scope of usability may have been unacceptably broad, administrators now have more fine-grained control over how certificates can be used.

Finally, another important consideration is that while revoking the main IPA CA is usually out of the question, it is now possible to revoke an intermediate CA certificate. If you create a CA for a particular organisational unit (e.g. some department or working group) or service, then when that unit or service ceases to operate or exist, the related CA certificate can be revoked, rendering certificates issued by that CA useless, as long as relying endpoints perform CRL or OCSP checks.

Creating and managing sub-CAs

In this scenario, we will add a sub-CA that will be used to issue certificates for users’ smart cards. We assume that a profile for this purpose already exists, called userSmartCard.

To begin with, we are authenticated as admin or another user that has CA management privileges. Let’s see what CAs FreeIPA already knows about:

% ipa ca-find
1 CA matched
  Name: ipa
  Description: IPA CA
  Authority ID: d3e62e89-df27-4a89-bce4-e721042be730
  Subject DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
Number of entries returned 1

We can see that FreeIPA knows about the ipa CA. This is the "main" CA in the FreeIPA infrastructure. Depending on how FreeIPA was installed, it could be a root CA or it could be chained to an external CA. The ipa CA entry is added automatically when installing or upgrading to FreeIPA 4.4.

Now, let’s add a new sub-CA called sc:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
Created CA "sc"
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The --subject option gives the full Subject Distinguished Name for the new CA; it is mandatory, and must be unique among CAs managed by FreeIPA. An optional description can be given with --desc. In the output we see that the Issuer DN is that of the IPA CA.

Having created the new CA, we must add it to one or more CA ACLs to allow it to be used. CA ACLs were added in FreeIPA 4.2 for defining policies about which profiles could be used for issuing certificates to which subject principals (note: the subject principal is not necessarily the principal performing the certificate request). In FreeIPA 4.4 the CA ACL concept has been extended to also include which CA is being asked to issue the certificate.

We will add a CA ACL called user-sc-userSmartCard and associate it with all users, with the userSmartCard profile, and with the sc CA:

% ipa caacl-add user-sc-userSmartCard --usercat=all
Added CA ACL "user-sc-userSmartCard"
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all

% ipa caacl-add-profile user-sc-userSmartCard --certprofile userSmartCard
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
  Profiles: userSmartCard
Number of members added 1

% ipa caacl-add-ca user-sc-userSmartCard --ca sc
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
Number of members added 1

A CA ACL can reference multiple CAs individually, or, as we saw with users above, we can associate a CA ACL with all CAs by setting --cacat=all when we create the CA ACL, or via the ipa ca-mod command.

A special behaviour of CA ACLs with respect to CAs must be mentioned: if a CA ACL is associated with no CAs (either individually or by category), then it allows access to the ipa CA (and only that CA). This behaviour, though inconsistent with other aspects of CA ACLs, is for compatibility with pre-sub-CAs CA ACLs. An alternative approach is being discussed and could be implemented before the final release.

Requesting certificates from sub-CAs

The ipa cert-request command has learned the --ca argument for directing the certificate request to a particular sub-CA. If it is not given, it defaults to ipa.

alice already has a CSR for the key in her smart card, so now she can request a certificate from the sc CA:

% ipa cert-request --principal alice \
    --profile userSmartCard --ca sc /path/to/csr.req
  Certificate: MIIDmDCCAoCgAwIBAgIBQDANBgkqhkiG9w0BA...
  Subject: CN=alice,O=IPA.LOCAL
  Issuer: CN=Smart Card CA,O=IPA.LOCAL
  Not Before: Fri Jul 15 05:57:04 2016 UTC
  Not After: Mon Jul 16 05:57:04 2018 UTC
  Fingerprint (MD5): 6f:67:ab:4e:0c:3d:37:7e:e6:02:fc:bb:5d:fe:aa:88
  Fingerprint (SHA1): 0d:52:a7:c4:e1:b9:33:56:0e:94:8e:24:8b:2d:85:6e:9d:26:e6:aa
  Serial number: 64
  Serial number (hex): 0x40

Certmonger has also learned the -X/--issuer option for specifying that the request be directed to the named issuer. There is a clash of terminology here; the "CA" terminology in Certmonger is already used to refer to a particular CA "endpoint". Various kinds of CAs and multiple instances thereof are supported. But now, with Dogtag and FreeIPA, a single CA may actually host many CAs. Conceptually this is similar to HTTP virtual hosts, with the -X option corresponding to the Host: header for disambiguating the CA to be used.

If the -X option was given when creating the tracking request, the Certmonger FreeIPA submit helper uses its value in the --ca option to ipa cert-request. These requests are subject to CA ACLs.


Limitations

It is worth mentioning a few of the limitations of the sub-CAs feature, as it will be delivered in FreeIPA 4.4.

All sub-CAs are signed by the ipa CA; there is no support for "nesting" CAs. This limitation is imposed by FreeIPA – the lightweight CAs feature in Dogtag does not have it. It could easily be lifted in a future release, if there is demand for it.

There is no support for introducing unrelated CAs into the infrastructure, either by creating a new root CA or by importing an unrelated external CA. Dogtag does not have support for this yet, either, but the lightweight CAs feature was designed so that this would be possible to implement. This is also why all the commands and argument names mention "CA" instead of "Sub-CA". I expect that there will be demand for this feature at some stage in the future.

Currently, the key type and size are fixed at RSA 2048. The same is true in Dogtag, and addressing it is a fairly high priority. Similarly, the validity period is fixed, and we will need to address this also, probably by allowing custom CA profiles to be used.


Conclusion

The sub-CAs feature will round out FreeIPA’s certificate management capabilities, making FreeIPA a more attractive solution for organisations with sophisticated certificate requirements. Multiple security domains can be created for issuing certificates with different purposes or scopes. Administrators have a simple interface for creating and managing CAs, and rules for how those CAs can be used.

There are some limitations which may be addressed in a future release; the ability to control key type/size and CA validity period will be the highest priority among them.

This post examined the use cases and high-level user/administrator experience of sub-CAs. In the next post, I will detail some of the machinery that makes the sub-CAs feature work.

by ftweedal at July 25, 2016 02:32 AM

July 23, 2016

Rich Megginson

How to find build-time vs. run-time dependencies of a gem

Using ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
gem2rpm 0.11.3
gem 2.4.8

I'm trying to convert gems to rpms. Unfortunately, gem2rpm -d does not separate/classify the dependencies. What I really need is a separate list of run-time dependencies. I can get this with gem spec --ruby. For example:
$ gem spec --ruby systemd-journal-1.2.2.gem
# -*- encoding: utf-8 -*-
# stub: systemd-journal 1.2.2 ruby lib

Gem::Specification.new do |s|
  s.name = "systemd-journal"
  s.version = "1.2.2"
  if s.respond_to? :specification_version then
    s.specification_version = 4

    if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
      s.add_runtime_dependency(%q<ffi>, ["~> 1.9.0"])
      s.add_development_dependency(%q<rspec>, ["~> 3.1"])
      s.add_development_dependency(%q<simplecov>, ["~> 0.9"])
      s.add_development_dependency(%q<rubocop>, ["~> 0.26"])
      s.add_development_dependency(%q<rake>, ["~> 10.3"])
      s.add_development_dependency(%q<yard>, ["~> 0.8.7"])
      s.add_development_dependency(%q<pry>, ["~> 0.10"])

So I need to add Requires: rubygem(ffi) to the spec.
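Since gem2rpm will not do the classification, a small script can scrape the `gem spec --ruby` output for runtime dependencies and emit the corresponding Requires lines (a quick sketch, not part of gem2rpm):

```python
import re

def runtime_requires(gemspec_ruby):
    """Extract gem names from add_runtime_dependency lines of
    `gem spec --ruby` output and format them as RPM Requires tags."""
    names = re.findall(r'add_runtime_dependency\(%q<([^>]+)>', gemspec_ruby)
    return ['Requires: rubygem(%s)' % name for name in names]
```

Development dependencies are deliberately ignored; they would become BuildRequires, if anything.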

July 23, 2016 02:17 AM

July 21, 2016

Rob Crittenden

novajoin microservice integration

novajoin is a project for OpenStack and IPA integration. It is a service that will allow instances created in nova to be added to IPA, with a host OTP generated automatically. This OTP will then be passed into the instance to be used for enrollment during the cloud-init stage.

The end result is that a new instance will seamlessly be enrolled as an IPA client upon first boot.

Additionally, a class can be associated with an instance using Glance metadata so that IPA automember rules will automatically assign this new host to the appropriate hostgroups. Once that is done you can set up HBAC and sudo rules to grant the appropriate permissions/capabilities for all hosts in that group.

In short it can simplify administration significantly.

In the current iteration, novajoin consists of two pieces: a REST microservice and an AMQP notification listener.

The REST microservice is used to return dynamically generated metadata that will be added to the information that describes a given nova instance. This metadata is available at first boot and this is how novajoin injects the OTP into the instance for use with ipa-client-install. The framework for this change is being implemented upstream in nova.

The REST server just handles the metadata; cloud-init does the rest. A cloud-init script is provided which glues the two together. It installs the needed packages, retrieves the metadata, then calls ipa-client-install with the requisite options.

The other server is an AMQP listener that identifies when an IPA-enrolled instance is deleted and removes the host from IPA. It may eventually handle floating IP changes as well, automatically updating IPA DNS entries. The issue here is knowing what hostname to assign to the floating IP.

Glance images can have metadata as well which describes the image, such as OS distribution and version. If these have been set then novajoin will include this in the IPA entry it creates.

The basic flow looks something like this:

  1. Boot instance in nova. Add IPA metadata, specifying ipa_enroll True and optionally ipa_hostclass
  2. Instance boots. During cloud-init it will retrieve metadata
  3. During metadata retrieval, ipa host-add is executed, adding the host to IPA (along with any available image metadata) and generating an OTP.
  4. OTP and FQDN is returned in the metadata
  5. Our cloud-init script is called to install the IPA client packages and retrieve the OTP and FQDN
  6. Call ipa-client-install --hostname FQDN --password OTP

This leaves us with an IPA-enrolled client which can have permissions granted via HBAC and sudo rules (like who is allowed to log into this instance, what sudo commands are allowed, etc).

by rcritten at July 21, 2016 06:09 PM

Red Hat Blog

Thinking Through an Identity Management Deployment

As the number of production deployments of Identity Management (IdM) grows, and as many more pilots and proofs of concept come into being, it becomes more and more important to talk about best practices. Every production deployment needs to deal with things like failover, scalability, and performance. In turn, there are a few practical questions that need to be answered, namely:

  • How many replicas do I need?
  • How should these replicas be distributed between my datacenters?
  • How should these replicas be connected to each other?

The answer to these questions depends on the specifics of your environment. But before we dive into determining the answers, it is important to realise that two replicas, say N and M, can have one replication agreement to replicate the main identity data and another replication agreement to replicate certificate information. These two replication channels are completely independent. The reason for this is that the Certificate Authority (CA) component of IdM is optional. If you do not use it then you do not have any certificates to replicate, and thus you can skip configuration of the replication topology for your CAs.

IdM is built with a general assumption that the CA component, if used, will be installed on some machines and not on others. However, practice shows that having different images or deployment scripts for different replicas is more overhead than having a single full image, and thus having CAs installed on every replica. If you prefer a CA on every replica then you can use the same topology for the main and CA-related replication agreements. Unfortunately, until recently there was no tool that would allow you to visualize the layout of your deployment and manage replication agreements in an intuitive fashion. To address this problem the FreeIPA project added a topology management tool that provides a nice graphical view. Take a look at the following demo that was shown at the Identity Management booth at Red Hat Summit (2016).

Another important challenge to consider is that not all replicas are the same – even if they each have the same components installed. The first server that you install becomes the tracker for certificates and keys and is responsible for CRL generation. Only one system in the whole deployment can bear this responsibility. This means that one should:

  • Know which server was deployed first.
  • If something happens to that server – transition its tracking and CRL generation responsibility to some other server.
  • Make sure you know which server is now responsible for these special functions.

In the future we expect the topology user interface to help with this task – but this capability is not yet implemented.

Having covered some of the “groundwork” in terms of replication – we can now jump into a simple list of questions that will help you to determine the best parameters for your deployment.

How many datacenters do you have?

Let’s, for example, imagine that you have three datacenters in different geographies: Datacenter A, Datacenter B, and Datacenter C.

How many clients do you have in each datacenter and what operating systems (and versions) do they run?

Let’s use the data in the following table for reference:

  Datacenter   Total # of Clients   RHEL 5   RHEL 6   RHEL 7   UNIX   Application(s)
  A            10K                  2K       6K       1K       1K     50
  B            6K                   1K       3K       2K       -      -
  C            7K                   3K       3K       1K       -      30

Clients can also be divided into several buckets by type:

  • Caching clients – clients that use SSSD and cache a lot of information so that they do not need to query the server all the time.
  • Moderate clients – clients that do not use SSSD or some other caching mechanism and query servers on every authentication (but don’t query more information than they actually need).
  • Chatty clients – these are the clients that do a lot of queries and don’t necessarily cache information or care if they request more information than is needed.

Moderate and chatty clients may have a significant impact on your environment but, until you determine that you have such a client, you can assume that you do not have any. If you determine that some clients or applications are chatty – it might make sense to budget an extra replica or two for your datacenter(s).

The recommended clients to server ratio is about 2-3K clients per server, assuming that users authenticate multiple times over the course of the day but not every minute.

  Datacenter   Total # of Clients   Caching   Moderate   Chatty   Replicas
  A            10K                  9K        1K         10       5
  B            6K                   5K        1K         0        2
  C            7K                   6K        1K         5        3

For Datacenter A we have about 9K clients that do caching well. That amounts to about 3-4 replicas. Three might be insufficient if many users are logging in, so we will employ four replicas. One extra replica should be able to serve the rest of the clients and a number of chatty applications, so five looks like a good number.

For Datacenter B two replicas should be enough. If you see issues with that amount you can add another replica later.

In Datacenter C one would need a couple of replicas for caching clients and at least one for the remaining moderate and chatty clients – a total of three seems like a good number.

The whole deployment amounts to 10 replicas. As of Red Hat Enterprise Linux 7.2 topologies with up to 20 replicas are supported.
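The sizing arithmetic above can be folded into a rough heuristic: divide caching clients by a per-replica capacity (here 2,500, at the conservative end of the 2-3K guideline) and add one replica where moderate or chatty clients are present. This is an illustration only, not an official formula:

```python
import math

def estimate_replicas(caching_clients, other_clients=0, per_replica=2500):
    """Rough replica count for one datacenter: caching clients divided by
    per-replica capacity, plus one extra replica to absorb other load."""
    base = math.ceil(caching_clients / per_replica)
    return base + (1 if other_clients else 0)
```

For Datacenter A this gives ceil(9000/2500) + 1 = 5, matching the estimate above; treat such numbers only as a starting point and adjust after observing real load.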

So far we have managed to answer the first two questions. The last one – about the topology – can be solved by adhering to the following rules:

  1. Connect a replica to at least two other replicas.
  2. Do not connect a replica to more than four other replicas.

Note that these first two recommendations are not hard requirements. Under some conditions it might make sense to have a single replication agreement, or to have five. The maximum of four replication agreements was established to prevent replication overhead from causing performance issues on a node and degrading its ability to serve clients.

  3. Connect datacenters with each other so that a datacenter is connected to at least a couple of other datacenters.
  4. Connect datacenters with at least a pair of replication agreements.
  5. Have at least two servers per datacenter.

In following these rules it is quite easy to create a topology that resembles the following:


As one can see the topology meets all of the above listed guidelines.
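The per-replica connection guidelines can also be checked mechanically. A minimal sketch over a hypothetical set of replication agreements:

```python
from collections import Counter

def check_agreement_counts(agreements):
    """agreements: (replica, replica) pairs. Return, per replica, whether it
    has between two and four replication agreements (rules 1 and 2)."""
    degree = Counter()
    for a, b in agreements:
        degree[a] += 1
        degree[b] += 1
    return {replica: 2 <= n <= 4 for replica, n in degree.items()}
```

A four-replica ring passes this check; a pair of replicas joined by a single agreement does not.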

In general, if one has datacenters of a similar size, the topology per datacenter can be the same. In fact, it might be easier to start with the following diagram and add or remove replicas on an as-needed basis.


As always – your comments, experiences, and feedback are welcome.

by Dmitri Pal at July 21, 2016 03:25 PM

July 19, 2016

Ben Lipton

Thinking about templating for automatic CSR generation



I am working on a project (ticket, design) to simplify creating certificates in FreeIPA. Currently administrators must generate a Certificate Signing Request (CSR) matching the type of certificate they wish to issue. They submit this CSR to FreeIPA using the ipa cert-request command, and if all the specified fields match the data FreeIPA has about the certificate subject, a cert will be issued. This seems a bit silly; if FreeIPA has this information already, can’t it automatically generate a CSR with the correct data?

However, different certificate applications require different data, so the Certificate Profile (a concept from the Dogtag CA that specifies the fields in the cert, constraints on their values, and how the final values should be constructed) needs to contain information on how to transform the data in FreeIPA into the fields of the certificate. Further, different administrators may want to use different tools to manage their private keys, so we must be able to communicate these certificate field values back in formats understood by different utilities such as openssl and certutil. Those tools will be responsible for generating the actual CSR from the provided configuration.

As suggested in the Mapping Rules design, the first implementation of this system used python to implement the low-level formatting rules, such as return the user’s email address, prefixed by the string ‘email:’. However, it is a goal of this project to allow new rules to be added at runtime, so these rules must be text-based rather than part of the code. This post will try to imagine what the rules would look like if implemented using the Jinja2 templating language.


We must at a minimum be able to generate two different types of configuration, the openssl config file:

[ req ]
prompt = no
encrypt_key = no

distinguished_name = dn
req_extensions = exts

[ dn ]

[ exts ]

[ SAN ]

[ SANdn ]

and the certutil command line:

certutil -R -a -s "CN=user,O=DOMAIN.EXAMPLE.COM" --extSAN ",dn:UID=user;CN=users;DC=example;DC=com"

Some interesting things to note about these formats:

  • The contents of an extension can be constructed from multiple sources, such as an email address and a distinguished name.
  • The openssl format is hierarchical. Some parameters, such as req_extensions and dirName always refer to the name of a new config section. Others can optionally refer to a config section using an @.
  • In openssl, the certificate subject is created under the [req] section, while extensions are created under their own section.
  • Openssl has a quirky way of denoting distinguished names. They are ordered from least to most specific (opposite how LDAP presents them). And if two AVAs have the same attribute type, they must be prefixed with different strings ending in . (or : or ,) as the config file format will otherwise discard all but one.
  • Certutil is also a bit quirky about distinguished names in the Subject Alt Name extension. Because the argument to the extSAN flag is comma-delimited, the components of the DN must be separated using a different delimiter, like a semicolon.
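The openssl DN quirk can be illustrated with a helper that takes LDAP-ordered AVAs and emits [dn]-section lines: the order is reversed, and each line gets a unique numeric prefix ending in a dot so that repeated attribute types survive (an illustrative sketch, not FreeIPA's code):

```python
def openssl_dn_section(avas):
    """avas: (attr, value) pairs in LDAP order (most specific first).
    Render them as openssl config lines, least specific first, each
    prefixed uniquely so duplicate attribute types are not discarded."""
    return '\n'.join(
        '%d.%s = %s' % (i, attr, value)
        for i, (attr, value) in enumerate(reversed(avas)))
```

Any prefix scheme works as long as each prefix is unique and ends in the separator character; numbers are just the obvious choice.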


Two-pass data interpolation

((user data -> data rules) -> syntax rules) -> output

One way we can approach constructing one extension from multiple sources is to use two sets of rules - one rule for each data item that provides a value for the extension, and one rule specifying the name and syntax of the extension as a whole. We evaluate the data rules first, then feed the values produced into the associated syntax rules to get the final output for that extension. Finally, the extension output is passed to the formatter, to produce the final output. We wish to express the data and syntax rules using the templating language, but the formatters (one for each CSR generation tool) will be implemented as python classes.
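As a concrete (if simplified) illustration of the two passes, here is the scheme using Python's string.Template as a stand-in for jinja2; the rule strings and attribute names are made up for the example:

```python
from string import Template

def render_extension(data_rules, syntax_rule, user_data):
    """Pass 1: evaluate each data rule against the user's data.
    Pass 2: feed the collected values into the syntax rule."""
    values = [Template(rule).substitute(user_data) for rule in data_rules]
    return Template(syntax_rule).substitute(values=','.join(values))
```

For instance, render_extension(['email:${mail}'], '--extSAN ${values}', {'mail': 'alice@example.com'}) yields the certutil fragment --extSAN email:alice@example.com.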

So how do we represent openssl sections in this scheme? The formatter needs to accept input in a (very limited) markup language, which defines where the sections are, what goes into them, and perhaps whether a given line should be placed under [req] or [exts]. Even with the features of the formatter markup very limited, it would still be possible for a user to accidentally or intentionally inject some markup that would make it impossible to generate a certificate for them. So, some kind of escaping is also needed, but it would be jinja2 template markup escaping, not the HTML escaping that jinja2 already supports.
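A sketch of what that escaping could look like, assuming we neutralize jinja2 delimiters found in user data by wrapping each one in a literal-string expression (the helper name and approach are illustrative assumptions, not part of the design):

```python
import re

def escape_jinja(value):
    # Wrap each jinja2 delimiter ({{, }}, {%, %}, {#, #}) in a
    # literal-string expression in a single pass over the input, so the
    # inserted markup is never itself re-escaped.
    return re.sub(r"\{[{%#]|[}%#]\}",
                  lambda m: "{{'" + m.group(0) + "'}}", value)

# A malicious or unlucky value containing template markup is defused:
print(escape_jinja("CN={% endsection %}"))
```

When jinja2 later renders the escaped string, each expression such as {{'{%'}} evaluates to the literal text it wraps, so the user's value comes out unchanged but inert.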

Example data rules:


Example syntax rules:

--extSAN {{values|join(',')}}
subjectAltName=@{{'{% section %}'}}{{values|join('\n')}}{{'{% endsection %}'}}

That’s a lot of braces! We have to escape the section and endsection tag sequences so they will appear verbatim in the final template, producing something like:

subjectAltName=@{% section %}email={{}}
URI={{subject.inetuserhttpurl}}{% endsection %}

If we used a different type of markup for the user data interpolation and for denoting sections, the escaping would not be necessary; however, we would still need to preprocess the values to escape any jinja2 markup that comes from the user data, and we would still have two types of markup being used in parallel.

Note, too, that the section tag does not exist yet in jinja2; it would need to be implemented as an extension.

Two-pass template interpolation

(user data -> (data rules -> syntax rules)) -> output

Alternatively, we can do the substitution on the templates themselves before interpolating user data, building up one big template that we then render with the data from the database. This is safer because the user-specified data never gets interpreted as a template, so we don’t have to worry about escaping the user data or limiting the features of the template language. On the other hand, this may be challenging for the rule writer, because one must keep in mind the number of times a given rule will be run through the templating engine to get the escaping correct. Data rules will be used as templates only once (consuming user data) but syntax rules will be used as templates once to incorporate the data rules into the templates, and then again when the user data is included. Thus, any template tags relating to the processing of user data (such as, I imagine, ones for specifying openssl sections) need to be escaped.
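Sketched with a trivial regex-based stand-in for jinja2 (rule contents and attribute names are hypothetical), the key difference is that pass 1 composes templates and only pass 2 ever sees user data:

```python
import re

def render(template, context):
    # Substitute {{name}} with context[name]; re.sub does not rescan the
    # replacement text, so markup inside spliced-in values survives intact
    # until the next pass.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(context[m.group(1)]), template)

# Pass 1: splice the data-rule templates into the syntax rule. No user
# data is involved yet, so nothing user-controlled is treated as a template.
data_rules = ["email:{{mail}}", "uri:{{inetuserhttpurl}}"]
syntax_rule = "--extSAN {{values}}"
big_template = render(syntax_rule, {"values": ",".join(data_rules)})
# big_template is now "--extSAN email:{{mail}},uri:{{inetuserhttpurl}}"

# Pass 2: render the composed template with user data, as plain values.
user_data = {"mail": "user@example.com", "inetuserhttpurl": "http://example.com/~user"}
final = render(big_template, user_data)
print(final)
```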

Surprisingly, this hardly changes the way the rules are written! All of the example rules given above would still be valid, but the values would be the data rules themselves rather than data rules with interpolated user data. And of course, the values would not be escaped beforehand.

Template-based hierarchical rules

user data -> collected rules -> output

One way to get away from escaping and multiple evaluations is to redesign the template so that the order of its elements no longer matters. That is, the hierarchical relationships between data items, certificate extensions, and the CSR as a whole could be encoded using jinja2 tags. It’s probably easiest to explain this idea with an example:

{% group req %}
{% entry req %}extensions={% group exts %}{% endentry %}
{% entry req %}distinguished_name={% group subjectDN %}{% endentry %}
{% entry subjectDN %}O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}{% endentry %}
{% entry exts %}subjectAltName=@{% group SAN %}{% endentry %}
{% entry SAN %}email={{}}{% endentry %}
{% entry SAN %}URI={{subject.inetuserhttpurl}}{% endentry %}

The config for certutil would be quite similar:

certutil -R -a {% group opts %}
{% entry opts %}-s {% group subjectDN %}{% endentry %}
{% entry opts %}--extSAN {% group SAN %}{% endentry %}
{% entry subjectDN %}CN={{subject.username}},O={{config.ipacertificatesubjectbase}}{% endentry %}
{% entry SAN %}email:{{}}{% endentry %}
{% entry SAN %}uri:{{subject.inetuserhttpurl}}{% endentry %}

Each CSR generation helper would have its own notion of “groups,” which would be implemented as jinja2 extensions. The entries of a group would be collected and inserted into the group in whatever way was appropriate for that helper. Each line of these templates would be either a cert mapping rule referenced in the cert profile, or something built into the formatter for the CSR generation helper. There would be no distinction between data rules and syntax rules, and the order that rules appeared in the template would be irrelevant because the tags specified the hierarchy.
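A minimal sketch of how such a helper might collect entries into groups. The tag parsing is simplified, user data is inlined rather than interpolated, and every group is joined with a single space, which glosses over the separator question; a real implementation would do this with jinja2 extensions:

```python
import re

# Hypothetical certutil-style template lines, with values inlined.
template_lines = [
    "certutil -R -a {% group opts %}",
    "{% entry opts %}-s {% group subjectDN %}{% endentry %}",
    "{% entry opts %}--extSAN {% group SAN %}{% endentry %}",
    "{% entry subjectDN %}CN=user,O=EXAMPLE.COM{% endentry %}",
    "{% entry SAN %}email:user@example.com{% endentry %}",
]

entry_re = re.compile(r"\{% entry (\w+) %\}(.*)\{% endentry %\}")
groups = {}   # group name -> list of entry bodies
top = []      # non-entry lines form the top level of the output
for line in template_lines:
    m = entry_re.fullmatch(line)
    if m:
        groups.setdefault(m.group(1), []).append(m.group(2))
    else:
        top.append(line)

def expand(text):
    # Recursively replace each {% group NAME %} with its joined entries.
    return re.sub(r"\{% group (\w+) %\}",
                  lambda m: " ".join(expand(e) for e in groups.get(m.group(1), [])),
                  text)

print(expand("\n".join(top)))
```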

This approach has some downsides, too:

  1. Section names are now specified in the rules, which means there could be conflicts between different rules, and that a rule can only ever be included in a particular section. If two sections need the same data, two different rules are needed.
  2. Some types of groups are formatted differently from others (e.g. in certutil, opts is space-separated, while SAN is comma-separated). It’s not entirely clear where this should be encoded, and how.

Concern #1 is probably an acceptable tradeoff, as it’s not clear how broadly reusable rules will be anyway. However, #2 would need to be addressed in any actual implementation.

Formatter-based hierarchical rules

user data -> low-level rule -> formatting code -> group objects
group objects -> higher-level rule -> formatting code -> group objects
group objects -> top-level rule -> output

Instead of linking rules together into a hierarchy using tags, leaving it to the templating engine to interpret that structure, we could encode the structure in the rule entities themselves and use multiple evaluations to handle the hierarchy in the formatter, before the data even gets to the templating engine. Each rule would be stored with the name of the group within which it should be rendered, as well as the names of any groups that the rule includes. For example, to adapt the rule {% entry exts %}subjectAltName=@{% group SAN %}{% endentry %} to this schema, we would say that it is an element of the “exts” group, and provides the “SAN” group. By linking up group elements to group providers, we construct a tree of rules.

The formatter would evaluate these rules beginning at the leaves and passing the results of child nodes into variables in the parent node templates. The formatter is responsible for determining what exactly gets passed into the parent node, such as an object representing an openssl config section, or just a list of formatted strings. Parent nodes decide how to present the passed objects, such as by comma-separating the strings or referencing the name of the section. This addresses concern #2 from the previous approach, because the tools of the jinja2 language are now available for expressing how to format the results of groups of rules.
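A minimal sketch of this leaf-to-root evaluation. The rule fields mirror the examples that follow, but the data structures, the str.format placeholders, and the comma join are simplifications of what the jinja2-based formatter would do (where the parent template itself would express the join, e.g. SAN|join(',')):

```python
# Hypothetical rules, with user data inlined for brevity.
rules = [
    {"group": "SAN", "provides": None, "template": "email:user@example.com"},
    {"group": "SAN", "provides": None, "template": "uri:http://example.com/~user"},
    {"group": "opts", "provides": "SAN", "template": "--extSAN {SAN}"},
]

def evaluate(group):
    # Render every rule belonging to this group, recursing into any child
    # group a rule provides and splicing the joined results into its template.
    results = []
    for rule in (r for r in rules if r["group"] == group):
        text = rule["template"]
        if rule["provides"]:
            child = ",".join(evaluate(rule["provides"]))
            text = text.format(**{rule["provides"]: child})
        results.append(text)
    return results

print(evaluate("opts"))
```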

Example leaf rules:

group: SAN
template: email={{}}
group: subjectDN
template: O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}

Example parent rules:

group: opts
groupProvided: SAN
template: --extSAN {{ SAN|join(',') }}
group: exts
groupProvided: SAN
template: subjectAltName=@{{ SAN.section_name }}

This has several advantages over the two-pass interpolation approaches:

  1. Profiles are simpler to configure, because they just contain a list of references to rules rather than a structured list of groups of rules.
  2. Profiles are also simpler to implement, with no sub-objects in the database.
  3. It’s no longer necessary to pay attention to escaping when writing rules. Each rule is used as a template exactly once, and complex structures are handled by the formatter code rather than template tags so tags don’t need to be passed along.
  4. User data is never used as a template, which reduces the attack surface.

However, there are also some potential concerns:

  1. Whether the openssl and certutil hierarchies for rules are compatible (i.e. can the parent group be listed in the mapping rule, or must it be in the transformation rule?)
  2. Are there any instances where something needs to be a group but can’t be its own openssl section? How would we convey this to the openssl formatter?
  3. Conversely, are there cases where we would want to be able to create a section without creating a new rule? For example, a DN in a subject alternative name needs to be its own section. Do we then need rules just for filling out parts of that DN?


Although hierarchical rules seem like an interesting solution to avoid escaping and simplify the configuration in the cert profile itself, I think the interpolation approaches are easier to understand and explain, which is valuable for this already unexpectedly complex feature.

Even though it is a little counter-intuitive, I lean towards the template interpolation solution rather than the straightforward data interpolation one because it doesn’t incorporate user data until the last step. This would make it incompatible with the existing python-based rules, but those are going to be replaced anyway, and in fact they may be vulnerable to injection attacks as well. Escaping of tags that are to be interpreted by the formatter will still be inconvenient, but we may be able to provide extensions to the template language to make that easier.

If you are interested in discussing any of these options, feel free to email me directly at the address below, or share your thoughts with the freeipa-devel mailing list. Thanks!

July 19, 2016 12:00 AM
