FreeIPA Identity Management planet - technical blogs

August 12, 2016

Fraser Tweedale

Smart card login with YubiKey NEO

In this post I give an overview of smart cards and their potential advantages, and share my adventures in using a Yubico YubiKey NEO device for smart card authentication with FreeIPA and SSSD.

Smart card overview

Smart cards with cryptographic processors and secure key storage (private key generated on-device and cannot be extracted) are an increasingly popular technology for secure system and service login, as well as for signing and encryption applications (e.g. code signing, OpenPGP). They may offer a security advantage over traditional passwords because private key operations typically require the user to enter a PIN. Therefore the smart card is two factors in one: both something I have and something I know.

The inability to extract the private key from a smart card also provides an advantage over software HOTP/TOTP tokens which, in the absence of other security measures such as an encrypted filesystem on the mobile device, allow an attacker to extract the OTP seed. And because public key cryptography is used, there is no OTP seed or password hash sitting on a server, waiting to be exfiltrated and subjected to offline attacks.

For authentication applications, a smart card carries an X.509 certificate alongside a private key. A login application would read the certificate from the card and validate it against trusted CAs (e.g. a company’s CA for issuing smart cards). Typically an OCSP or CRL check would also be performed. The login application then challenges the card to sign a nonce, and validates the signature with the public key from the certificate. A valid signature attests that the bearer of the smart card is indeed the subject of the certificate. Finally, the certificate is mapped to a user either by looking for an exact certificate match or by extracting information about the user from the certificate.
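
To make that flow concrete, here is a rough sketch of the verification steps using plain OpenSSL commands (the file names are hypothetical, and a real login application performs these steps through a PKCS #11 module rather than by shelling out):

% openssl verify -CAfile company-ca.pem alice-card.pem        # 1. validate the certificate chain
% openssl x509 -in alice-card.pem -pubkey -noout > card-pub.pem  # 2. extract the public key
% openssl dgst -sha256 -verify card-pub.pem \
    -signature nonce.sig nonce.bin                            # 3. verify the card's signature over the nonce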

Test environment

In my smart card investigations I had a FreeIPA server with a single Fedora 24 desktop host enrolled. alice was the user I tested with. To begin with, she had no certificates and used her password to log in.

I was doing all of my testing on virtual machines, so I had to enable USB passthrough for the YubiKey device. This is straightforward but you have to ensure the IOMMU is enabled in both BIOS and kernel (for Intel CPUs add intel_iommu=on to the kernel command line in GRUB).

In virt-manager, after you have created the VM (it doesn’t need to be running) you can Add Hardware in the Details view, then choose the YubiKey NEO device. There are no doubt virsh incantations or other ways to establish the passthrough.
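
For the record, one such virsh incantation looks roughly like this (the domain name is whatever your VM is called, and the product ID depends on the YubiKey’s current mode, so check lsusb for the real values):

% lsusb | grep -i yubico          # note the vendor:product pair, e.g. 1050:0111
% cat > yubikey.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1050'/>
    <product id='0x0111'/>
  </source>
</hostdev>
EOF
# virsh attach-device f24-desktop yubikey.xml --persistent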

Finally, on the host I stopped the pcscd smart card daemon to prevent it from interfering with passthrough:

# systemctl stop pcscd.service pcscd.socket

Provisioning the YubiKey

For general smart card provisioning steps, I recommend Nathan Kinder’s post on the topic. But the YubiKey NEO is special with its own steps to follow! First install the ykpers and yubico-piv-tool packages:

sudo dnf install -y ykpers yubico-piv-tool

If we run yubico-piv-tool to find out the version of the PIV applet, we run into a problem because a new YubiKey comes configured in OTP mode:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Failed to connect to reader.

The YubiKey NEO supports a variety of operation modes, including hybrid modes:

0    OTP device only.
1    CCID device only.
2    OTP/CCID composite device.
3    U2F device only.
4    OTP/U2F composite device.
5    U2F/CCID composite device.
6    OTP/U2F/CCID composite device.

(You can also add 80 to any of the modes to configure touch to eject, or touch to switch modes for hybrid modes).

We need to put the YubiKey into CCID (Chip Card Interface Device, a standard USB protocol for smart cards) mode. I originally configured the YubiKey in mode 86 but could not get the card to work properly with USB passthrough to the virtual machine. Whether this was caused by the eject behaviour or the fact that it was a hybrid mode I do not know, but reconfiguring it to mode 1 (CCID only) allowed me to use the card on the guest.

[dhcp-40-8:~] ftweedal% ykpersonalize -m 1
Firmware version 3.4.6 Touch level 1541 Program sequence 1

The USB mode will be set to: 0x1

Commit? (y/n) [n]: y

Now yubico-piv-tool can see the card:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Application version 1.0.4 found.

Now we can initialise the YubiKey by setting a new management key, PIN and PIN Unblocking Key (PUK). As you can probably guess, the management key protects actions like generating keys and importing certificates, the PIN protects private key operations in regular use, and the PUK sits somewhere in between, allowing the PIN to be reset if the maximum number of attempts is exceeded. The current (default) PIN and PUK need to be given in order to reset them.

% KEY=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
% echo $KEY
CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
% yubico-piv-tool -a set-mgm-key -n $KEY
Successfully set new management key.

% PIN=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
% echo $PIN
167246
% yubico-piv-tool -a change-pin -P 123456 -N $PIN
Successfully changed the pin code.

% PUK=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
% echo $PUK
24985117
% yubico-piv-tool -a change-puk -P 12345678 -N $PUK
Successfully changed the puk code.

Next we must generate a private/public keypair on the smart card. Various slots are available for different purposes, with different PIN-checking behaviour. The Certificate slots page on the Yubico wiki gives the full details. We will use slot 9e which is for Card Authentication (PIN is not needed for private key operations). It is necessary to provide the management key on the command line, but the program also prompts for it (I’m not sure why this is the case).

% yubico-piv-tool -k $KEY -a generate -s 9e
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApT5tb99jr7qA8zN66Dbl
fu/Jh+F0nZvp7FXZRJQH12KgEeX4Lzu1S10b1HQ0lpHZWcqPQh2wbHaC8U7uYSLW
LqsjmFeJrskAerVAAH8v+tzy6DKlJKaLjAt8qWEJ1UWf5stJO3r9RD6Z80rOYPXT
MsKxmsb22v5lbvZTa0mILQeP2e6m4rwPKluQrODYkQkQcYIfedQggmYwo7Cxl5Lu
smtes1/FeUlJ+DG3mga3TrZd1Fb+wDJqQU3ghLul9qLNdPYyxdwDKSWkIOt5UusZ
2A8qECKZ8Wzv0IGI0bReSZYHKjhdm4aMMNubtKDuem/nUwBebRHFGU8zXTSFXeAd
gQIDAQAB
-----END PUBLIC KEY-----
Successfully generated a new private key.

We then use this key to create a certificate signing request (CSR) via yubico-piv-tool. Although slot 9e does not require the PIN, other slots do require it, so I’ve included the verify-pin action for completeness:

% yubico-piv-tool -a verify-pin \
    -a request-certificate -s 9e -S "/CN=alice/"
Enter PIN: 167246
Successfully verified PIN.
Please paste the public key...
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApT5tb99jr7qA8zN66Dbl
fu/Jh+F0nZvp7FXZRJQH12KgEeX4Lzu1S10b1HQ0lpHZWcqPQh2wbHaC8U7uYSLW
LqsjmFeJrskAerVAAH8v+tzy6DKlJKaLjAt8qWEJ1UWf5stJO3r9RD6Z80rOYPXT
MsKxmsb22v5lbvZTa0mILQeP2e6m4rwPKluQrODYkQkQcYIfedQggmYwo7Cxl5Lu
smtes1/FeUlJ+DG3mga3TrZd1Fb+wDJqQU3ghLul9qLNdPYyxdwDKSWkIOt5UusZ
2A8qECKZ8Wzv0IGI0bReSZYHKjhdm4aMMNubtKDuem/nUwBebRHFGU8zXTSFXeAd
gQIDAQAB
-----END PUBLIC KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICUzCCAT0CAQAwEDEOMAwGA1UEAwwFYWxpY2UwggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQClPm1v32OvuoDzM3roNuV+78mH4XSdm+nsVdlElAfXYqAR
5fgvO7VLXRvUdDSWkdlZyo9CHbBsdoLxTu5hItYuqyOYV4muyQB6tUAAfy/63PLo
MqUkpouMC3ypYQnVRZ/my0k7ev1EPpnzSs5g9dMywrGaxvba/mVu9lNrSYgtB4/Z
7qbivA8qW5Cs4NiRCRBxgh951CCCZjCjsLGXku6ya16zX8V5SUn4MbeaBrdOtl3U
Vv7AMmpBTeCEu6X2os109jLF3AMpJaQg63lS6xnYDyoQIpnxbO/QgYjRtF5Jlgcq
OF2bhoww25u0oO56b+dTAF5tEcUZTzNdNIVd4B2BAgMBAAGgADALBgkqhkiG9w0B
AQsDggEBADvyL13ayXRDWmRJ1dSi4lE9l128fy3Lt/1XoAC1D+000hWkXOPA+K8j
gR/Yg99K9v3U2wm6wtk2taEeogc4TebVawXezjw/hu4wq2sta3zVVJC9+yRrUeai
P+Gvj0KNesXK5MyHGpeiPb3SA/2GYYK04suM6a1vpA+sBvrca39klpgBrYY0N/9s
VE4gBBNhQa9jN8E9VMQXEPxYVH1tDrp7bRxg6V5spJb2oit6H+7Pe7xSC95ByCXw
Msprhk+B2nkrVaco5R/ZOG0jZdMOMOJXCuTbWKOaCDEN5hsLNdua6uBpiDCJ5v1I
l0Xmf53DC7jceF/XgZ0LUzbGzTDcr3o=
-----END CERTIFICATE REQUEST-----

yubico-piv-tool -a request-certificate is not very flexible; for example, it cannot create a CSR with request extensions such as including the user’s email address or Kerberos principal name in the Subject Alternative Name extension. For such non-trivial use cases, openssl req or other programs can be used instead, with a PKCS #11 module providing access to the smart card’s signing capability. Nathan Kinder’s post provides full details.
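
As a rough sketch of the openssl req route: the following assumes the engine_pkcs11 engine from libp11 is installed and can find the OpenSC module (e.g. via p11-kit), and the object label in the key URI is a guess, so list the objects with pkcs11-tool to find the real one on your card:

% cat > csr.cnf <<'EOF'
[ req ]
distinguished_name = dn
[ dn ]
[ exts ]
subjectAltName = email:alice@ipa.local
EOF
% openssl req -new -sha256 -config csr.cnf -reqexts exts \
    -engine pkcs11 -keyform engine \
    -key "pkcs11:object=CARD%20AUTH%20key;type=private" \
    -subj "/CN=alice/" -out alice.csr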

With CSR in hand, alice can now request a certificate from the IPA CA. I have covered this procedure in previous articles so I’ll skip it here, except to add that it is necessary to use a profile that saves the newly issued certificate to the subject’s userCertificate LDAP attribute. This is how SSSD matches certificates in smart cards with users.
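
For the sake of completeness, the request and retrieval would look something like this (the profile name is hypothetical, and the serial number is whatever cert-request reports for the newly issued certificate):

% ipa cert-request alice.csr --principal alice --profile-id userSmartCardAuth
% ipa cert-show 60 --out alice.pem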

Once we have the certificate (in file alice.pem) we can import it onto the card:

% yubico-piv-tool -k $KEY -a import-certificate -s 9e -i alice.pem
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
Successfully imported a new certificate.

Configuring smart card login

OpenSC provides a PKCS #11 module for interfacing with PIV smart cards, among other things:

# dnf install -y opensc

Enable smart card authentication in /etc/sssd/sssd.conf:

[pam]
pam_cert_auth = True

Then restart SSSD:

# systemctl restart sssd

Next, enable the OpenSC PKCS #11 module in the system NSS database:

# modutil -dbdir /etc/pki/nssdb \
    -add "OpenSC" -libfile opensc-pkcs11.so

We also need to add the IPA CA cert to the system NSSDB. This will allow SSSD to validate certificates from smart cards. If smart card certificates are issued by a sub-CA or an external CA, import that CA’s certificate instead.

# certutil -d /etc/ipa/nssdb -L -n 'IPA.LOCAL IPA CA' -a \
  | certutil -d /etc/pki/nssdb -A -n 'IPA.LOCAL IPA CA' -t 'CT,C,C'

One hiccup I had was that SSSD could not talk to the OCSP server indicated in the Authority Information Access extension on the certificate (due to my DNS not being set up correctly). I had to tell SSSD not to perform OCSP checks. The sssd.conf snippet follows. Do not do this in a production environment.

[sssd]
...
certificate_verification = no_ocsp

That’s pretty much all there is to it. After this, I was able to log in as alice using the YubiKey NEO. When logging in with the card inserted, instead of being prompted for a password, GDM prompts for the PIN. Enter the PIN, and it lets you in!

Screenshot of login PIN prompt

Conclusion

I mentioned (or didn’t mention) a few standards related to smart card authentication. A quick review of them is warranted:

  • CCID is a USB smart card interface standard.
  • PIV (Personal Identity Verification) is a smart card standard from NIST. It defines the slots, PIN behaviour, etc.
  • PKCS #15 is a token information format. OpenSC provides a PKCS #15 emulation layer for PIV cards.
  • PKCS #11 is a software interface to cryptographic tokens. Token and HSM vendors provide PKCS #11 modules for their devices. OpenSC provides a PKCS #11 interface to PKCS #15 tokens (including emulated PIV tokens).

It is appropriate to mention pam_pkcs11, which is also part of the OpenSC project, as an alternative to SSSD. More configuration is involved, but if you don’t have (or don’t want) an external identity management system it looks like a good approach.

You might remember that I was using slot 9e which doesn’t require a PIN, yet I was still prompted for a PIN when logging in. There are a couple of issues to tease apart here. The first issue is that although PIV cards do not require the PIN for private key operations on slot 9e, the opensc-pkcs11.so PKCS #11 module does not correctly report this. As an alternative to OpenSC, Yubico provide their own PKCS #11 module called YKCS11 as part of yubico-piv-tool but modutil did not like it. Nevertheless, a peek at its source code leads me to believe that it too declares that the PIN is required regardless of the slot in use. I could not find much discussion of this discrepancy so I will raise some tickets and hopefully it can be addressed.
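
If you want to see for yourself what the module reports, pkcs11-tool can dump the token flags and object attributes; something like the following (the module path may differ on your distribution):

% pkcs11-tool --module /usr/lib64/pkcs11/opensc-pkcs11.so --list-token-slots
% pkcs11-tool --module /usr/lib64/pkcs11/opensc-pkcs11.so --login --list-objects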

The second issue is that SSSD requires the PIN and uses it to log into the token, even if the token says that a PIN is not required. Again, I will start a discussion to see if this is really the intended behaviour (perhaps it is).

The YubiKey NEO features a wireless (NFC) interface. I haven’t played with it yet, but all the smart card features are available over that interface. This lends weight to fixing the issues preventing PIN-less usage.

A final thought I have about the user experience is that it would be nice if user information could be derived or looked up based on the certificate(s) in the smart card, and a user automatically selected, instead of having to first specify "I am alice" or whoever. The information is there on the card after all, and it is one less step for users to perform. If PIN-less usage can be addressed, it would mean that a user can just approach a machine, plug in their smart card and hi ho, off to work they go. There are some indications that this does work with GDM and pam_pkcs11, so if you know how to get it going with SSSD I would love to know!

by ftweedal at August 12, 2016 02:55 AM

August 11, 2016

Adam Young

Tripleo HA Federation Proof-of-Concept

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes.

A federation deployment requires changes to the network topology, Keystone, the HTTPD service, and Horizon. The various OpenStack deployment tools will have their own ways of applying these changes. While this proof-of-concept can’t be called production-ready, it does demonstrate that TripleO can support Federation using SAML. From this proof-of-concept, we should be able to deduce the steps needed for a production deployment.

Prerequisites

  • Single physical node – Large enough to run multiple virtual machines.  I only ended up using 3, but scaled up to 6 at one point and ran out of resources.  Tested with 8 CPUs and 32 GB RAM.
  • Centos 7.2 – Running as the base operating system.
  • FreeIPA – Particularly, the CentOS repackage of Red Hat Identity Management. Running on the base OS.
  • Keycloak – Actually an alpha build of Red Hat SSO, running on the base OS. This was fronted by Apache HTTPD, and proxied through ajp://localhost:8109. This gave me HTTPS support using the CA Certificate from the IPA server.  This will be important later when the controller nodes need to talk to the identity provider to set up metadata.
  • Tripleo Quickstart – deployed in HA mode, using an undercloud.
    • ./quickstart.sh --config config/general_config/ha.yml ayoung-dell-t1700.test

In addition, I did some sanity checking of the cluster by deploying the overcloud using the quickstart helper script, then tearing it down with heat stack-delete overcloud.

Reproducing Results

When doing development testing, you can expect to rebuild and tear down your cloud on a regular basis.  When you redeploy, you want to make sure that the changes are just the delta from what you tried last time.  As the number of artifacts grew, I found I needed to maintain a repository of files that included the environment passed to openstack overcloud deploy.  To manage these, I created a git repository in /home/stack/deployment. Inside that directory, I copied the overcloud-deploy.sh and deploy_env.yml files generated by the overcloud deployment, and modified them accordingly.
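
Setting that repository up was nothing fancy; roughly the following (the copied file names reflect my quickstart run and may differ in yours):

mkdir /home/stack/deployment
cd /home/stack/deployment
cp ~/overcloud-deploy.sh ~/deploy_env.yml ~/network-environment.yaml .
git init
git add -A
git commit -m "federation PoC deployment environment"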

In my version of overcloud-deploy.sh, I wanted to remove the deploy_env.yml generation, to avoid confusion during later deployments.  I also wanted to preserve the environment file across deployments (and did not want it in /tmp). This file has three parts: the Keystone configuration values, HTTPS/Network setup, and configuration for a single node deployment. This last part was essential for development, as chasing down fixes across three HA nodes was time-consuming and error prone. The DNS server value I used is particular to my deployment, and reflects the IPA server running on the base host.

For reference, I’ve included those files at the end of this post.

Identity Provider Registration and Metadata

While it would have been possible to run the registration of the identity provider on one of the nodes, the Heat-managed deployment process does not provide a clean way to gather those files and package them for deployment to other nodes.  Although I ended up deploying on a single node for development, it took me a while to realize that I could do that, and by then I had already worked out an approach that calls the registration from the undercloud node and produces a tarball.

As a result, I created a script, again to allow for reproducing this in the future:

register_sp_rhsso.sh

#!/bin/sh 

basedir=$(dirname $0)
ipa_domain=`hostname -d`
rhsso_master_admin_password=FreeIPA4All

keycloak-httpd-client-install \
   --client-originate-method registration \
   --force \
   --mellon-https-port 5000 \
   --mellon-hostname openstack.$ipa_domain  \
   --mellon-root '/v3' \
   --keycloak-server-url https://identity.$ipa_domain  \
   --keycloak-auth-role root-admin \
   --keycloak-admin-password  $rhsso_master_admin_password \
   --app-name v3 \
   --keycloak-realm openstack \
   --mellon-https-port 5000 \
   --log-file $basedir/rhsso.log \
   --httpd-dir $basedir/rhsso/etc/httpd \
   -l "/v3/auth/OS-FEDERATION/websso/saml2" \
   -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/websso" \
   -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/auth"

This does not quite generate the right paths, as it turns out that $basedir is not quite what we want, so I had to post-edit the generated file: rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf

Specifically, the path:
./rhsso/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml

has to be changed to:
/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml

While I created a tarball that I then manually deployed, the preferred approach would be to use tripleo-heat-templates/puppet/deploy-artifacts.yaml to deploy them. The problem I faced is that the generated files include Apache module directives from mod_auth_mellon.  If mod_auth_mellon has not been installed into the controller, the Apache server won’t start, and the deployment will fail.
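
For reference, the manual tarball route amounted to roughly the following; the controller address and the heat-admin user are placeholders from a typical quickstart deployment:

# on the undercloud, after running register_sp_rhsso.sh and fixing the metadata path
tar -C rhsso -czf rhsso-sp.tar.gz etc/httpd
scp rhsso-sp.tar.gz heat-admin@192.0.2.10:

# then on each controller
sudo yum -y install mod_auth_mellon
sudo tar -C / -xzf rhsso-sp.tar.gz
sudo systemctl restart httpd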

Federation Operations

The Federation setup requires a few calls. I documented them in Rippowam, and attempted to reproduce them locally using Ansible and the Rippowam code. I was not a purist though, as A) I needed to get this done and B) the end solution is not going to use Ansible anyway. The general steps I performed:

  • yum install mod_auth_mellon
  • Copy over the metadata tarball, expand it, and tweak the configuration (could be done prior to building the tarball).
  • Run the following commands.
openstack identity provider create --remote-id https://identity.{{ ipa_domain }}/auth/realms/openstack
openstack mapping create --rules ./mapping_rhsso_saml2.json rhsso_mapping
openstack federation protocol create --identity-provider rhsso --mapping rhsso_mapping saml2

The mapping file is the one from Rippowam.

The keystone service calls only need to be performed once, as they are stored in the database. The expansion of the tarball needs to be performed on every node.

Dashboard

As in previous Federation setups, I needed to modify the values used for WebSSO. The values I ended up setting in /etc/openstack-dashboard/local_settings resembled this:

OPENSTACK_KEYSTONE_URL = "https://openstack.ayoung-dell-t1700.test:5000/v3"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "saml2"
WEBSSO_CHOICES = (
    ("saml2", _("Rhsso")),
    ("credentials", _("Keystone Credentials")),
)

Important: Make sure that the auth URL is using a FQDN name that matches the value in the signed certificate.

Redirect Support for SAML

Several differences between how HTTPD and HA Proxy operate require some configuration modifications.  Keystone runs internally over HTTP, not HTTPS.  However, the SAML Identity Providers are public and transmit cryptographic data, so they need to be protected using HTTPS.  As a result, HA Proxy needs to expose an HTTPS-based endpoint for the Keystone public service.  In addition, the redirects that come from mod_auth_mellon need to reflect the public protocol, hostname, and port.

The solution I ended up with involved changes on both sides:

In haproxy.cfg, I modified the keystone public stanza so it looks like this:

listen keystone_public
bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 10.0.0.4:5000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.16.2.4:5000 transparent
redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
rsprep ^Location:\ http://(.*) Location:\ https://\1

While this was necessary, it also proved to be insufficient. When the signed assertion from the Identity Provider is posted to the Keystone server, mod_auth_mellon checks that the destination value matches what it expects the hostname should be. Consequently, in order to get this to match in the file:

/etc/httpd/conf.d/10-keystone_wsgi_main.conf

I had to set the following:

<VirtualHost 172.16.2.6:5000>
ServerName https://openstack.ayoung-dell-t1700.test

Note that the protocol is set to https even though the Keystone server is handling HTTP. This might break elsewhere. If it does, then the Keystone configuration in Apache may have to be duplicated.

Federation Mapping

For the WebSSO login to successfully complete, the user needs to have a role on at least one project. The Rippowam mapping file maps the user to the Member role in the demo group, so the most straightforward steps to complete are to add a demo group, add a demo project, and assign the Member role on the demo project to the demo group. All this should be done with a v3 token:

openstack group create demo
openstack role create Member
openstack project create demo
openstack role add --group demo --project demo Member

Complete helper files

Below are the complete files that were too long to put inline.

overcloud-deploy.sh

#!/bin/bash
# Simple overcloud deploy script

set -eux

# Source in undercloud credentials.
source /home/stack/stackrc

# Wait until there are hypervisors available.
while true; do
    count=$(openstack hypervisor stats show -c count -f value)
    if [ $count -gt 0 ]; then
        break
    fi
done

deploy_status=0

# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/deployment/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org -e $HOME/deployment/deploy_env.yaml   --force-postconfig "$@"    || deploy_status=1

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
    deploy_status=1

    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do heat deployment-show $failed > failed_deployment_$failed.log
    done
fi

exit $deploy_status

deploy-env.yml

parameter_defaults:
  controllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true
      auth/methods:
        value: external,password,token,oauth1,saml2
      federation/trusted_dashboard:
        value: http://openstack.ayoung-dell-t1700.test/dashboard/auth/websso/
      federation/sso_callback_template:
        value: /etc/keystone/sso_callback_template.html
      federation/remote_id_attribute:
        value: MELLON_IDP

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
    heat::config::heat_config:
      DEFAULT/num_engine_workers:
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1
  CloudName: openstack.ayoung-dell-t1700.test
  CloudDomain: ayoung-dell-t1700.test
  DnsServers: 10.18.57.26


  #TLS Setup from enable-tls.yaml
  PublicVirtualFixedIPs: [{'ip_address':'10.0.0.4'}]
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    #certificate removed for space
    -----END CERTIFICATE-----

  SSLIntermediateCertificate: ''
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    #key removed for space
    -----END RSA PRIVATE KEY-----

  EndpointMap:
    AodhAdmin: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhInternal: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhPublic: {protocol: 'https', port: '13042', host: 'CLOUDNAME'}
    CeilometerAdmin: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerInternal: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerPublic: {protocol: 'https', port: '13777', host: 'CLOUDNAME'}
    CinderAdmin: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderInternal: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderPublic: {protocol: 'https', port: '13776', host: 'CLOUDNAME'}
    GlanceAdmin: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlanceInternal: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlancePublic: {protocol: 'https', port: '13292', host: 'CLOUDNAME'}
    GnocchiAdmin: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiInternal: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiPublic: {protocol: 'https', port: '13041', host: 'CLOUDNAME'}
    HeatAdmin: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatInternal: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatPublic: {protocol: 'https', port: '13004', host: 'CLOUDNAME'}
    HorizonPublic: {protocol: 'https', port: '443', host: 'CLOUDNAME'}
    KeystoneAdmin: {protocol: 'http', port: '35357', host: 'IP_ADDRESS'}
    KeystoneInternal: {protocol: 'http', port: '5000', host: 'IP_ADDRESS'}
    KeystonePublic: {protocol: 'https', port: '13000', host: 'CLOUDNAME'}
    NeutronAdmin: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronInternal: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronPublic: {protocol: 'https', port: '13696', host: 'CLOUDNAME'}
    NovaAdmin: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaInternal: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}
    NovaEC2Admin: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Internal: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Public: {protocol: 'https', port: '13773', host: 'CLOUDNAME'}
    NovaVNCProxyAdmin: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}
    SaharaAdmin: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaInternal: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaPublic: {protocol: 'https', port: '13386', host: 'CLOUDNAME'}
    SwiftAdmin: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftInternal: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftPublic: {protocol: 'https', port: '13808', host: 'CLOUDNAME'}

resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml

parameters:
   ControllerCount: 1 

by Adam Young at August 11, 2016 05:53 PM

August 10, 2016

Rich Megginson

How to do python dict setdefault with ruby hashes

setdefault is a very useful Python Dict method.
>python
Python 2.7.11 (default, Jul  8 2016, 19:45:00) 
[GCC 5.3.1 20160406 (Red Hat 5.3.1-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dd = {}
>>> dd.setdefault('a', {}).setdefault('b', {})['c'] = 'd'
>>> dd
{'a': {'b': {'c': 'd'}}}
>>> dd.setdefault('a', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}}
>>> dd.setdefault('g', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}, 'g': {'b': {'e': 'f'}}}

You can do the same thing in ruby with a little hackery.
>irb
irb(main):001:0> dd = {}
=> {}
irb(main):002:0> ((dd['a'] ||= {})['b'] ||= {})['c'] = 'd'
=> "d"
irb(main):003:0> dd
=> {"a"=>{"b"=>{"c"=>"d"}}}
irb(main):004:0> ((dd['a'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):005:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}}
irb(main):006:0> ((dd['g'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):007:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}, "g"=>{"b"=>{"e"=>"f"}}}

August 10, 2016 04:38 PM

August 03, 2016

James Shubin

Seen in downtown Montreal…

The Technical Blog of James was seen on an outdoor electronic display in downtown Montreal! Thanks to one of my readers for sending this in.

I guess the smart phone revolution is over, and people are taking to reading my articles on bigger screens! The “poutine” is decent proof that this is probably Montreal.

If you’ve got access to a large electronic display, put up the blog, snap a photo, and send it my way! I’ll post it here and send you some random stickers!

Happy Hacking,

James

PS: If you have some comments about this blog, please don’t be shy, send them my way.


by purpleidea at August 03, 2016 05:59 AM

July 26, 2016

Fraser Tweedale

FreeIPA Lightweight CA internals

In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.

Full details of the design of the feature can be found on the design page. This post does not cover everything from the design page, but we will look at the aspects that are covered from the perspective of the system administrator, i.e. "what is happening on my systems?"

Dogtag lightweight CA creation

The PKI system used by FreeIPA is called Dogtag. It is a separate project with its own interfaces; most FreeIPA certificate management features are simply reflecting a subset of the corresponding Dogtag interface, often integrating some additional access controls or identity management concepts. This is certainly the case for FreeIPA sub-CAs. The Dogtag lightweight CAs feature was implemented initially to support the FreeIPA use case, yet not all aspects of the Dogtag feature are used in FreeIPA as of v4.4, and other consumers of the Dogtag feature are likely to emerge (in particular: OpenStack).

The Dogtag lightweight CAs feature has its own design page which documents the feature in detail, but it is worth mentioning some important aspects of the Dogtag feature and their impact on how FreeIPA uses the feature.

  • Dogtag lightweight CAs are managed via a REST API. The FreeIPA framework uses this API to create and manage lightweight CAs, using the privileged RA Agent certificate to authenticate (a sketch of a raw call to this API appears just after this list). In a future release we hope to remove the RA Agent and authenticate as the FreeIPA user using GSS-API proxy credentials.
  • Each CA in a Dogtag instance, including the "main" CA, has an LDAP entry with object class authority. The schema includes fields such as subject and issuer DN, certificate serial number, and a UUID primary key, which is randomly generated for each CA. When FreeIPA creates a CA, it stores this UUID so that it can map the FreeIPA CA’s common name (CN) to the Dogtag authority ID in certificate requests or other management operations (e.g. CA deletion).
  • The "nickname" of the lightweight CA signing key and certificate in Dogtag’s NSSDB is the nickname of the "main" CA signing key, with the lightweight CA’s UUID appended. In general operation FreeIPA does not need to know this, but the ipa-certupdate program has been enhanced to set up Certmonger tracking requests for FreeIPA-managed lightweight CAs and therefore it needs to know the nicknames.
  • Dogtag lightweight CAs may be nested, but FreeIPA as of v4.4 does not make use of this capability.
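
To give a flavour of that REST API, the authorities can be listed with a plain GET. This is only a hedged sketch: it assumes the RA Agent certificate and key have been exported to PEM files (FreeIPA does not store them that way by default) and that Dogtag is listening on the usual port 8443.

# curl -s --cacert /etc/ipa/ca.crt \
    --cert ra-agent.pem --key ra-agent.key \
    "https://$(hostname):8443/ca/rest/authorities" | python -m json.tool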

So, let’s see what actually happens on a FreeIPA server when we add a lightweight CA. We will use the sc example from the previous post. The command executed to add the CA, with its output, was:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The LDAP entry added to the Dogtag database was:

dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 63
objectClass: authority
objectClass: top
cn: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
 4c84fd
authorityKeyHost: f24b-0.ipa.local:443
authorityEnabled: TRUE
authorityDN: CN=Smart Card CA,O=IPA.LOCAL
authorityParentDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
authorityParentID: d3e62e89-df27-4a89-bce4-e721042be730

We see the authority UUID in the authorityID attribute as well as cn and the DN. authorityKeyNickname records the nickname of the signing key in Dogtag’s NSSDB. authorityKeyHost records which hosts possess the signing key – currently just the host on which the CA was created. authoritySerial records the serial number of the certificate (more on that later). The meaning of the rest of the fields should be clear.

If we have a peek into Dogtag’s NSSDB, we can see the new CA’s certificate:

# certutil -d /etc/pki/pki-tomcat/alias -L

Certificate Nickname              Trust Attributes
                                  SSL,S/MIME,JAR/XPI

caSigningCert cert-pki-ca         CTu,Cu,Cu
auditSigningCert cert-pki-ca      u,u,Pu
Server-Cert cert-pki-ca           u,u,u
caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd u,u,u
ocspSigningCert cert-pki-ca       u,u,u
subsystemCert cert-pki-ca         u,u,u

There it is, alongside the main CA signing certificate and other certificates used by Dogtag. The trust flags u,u,u indicate that the private key is also present in the NSSDB. If we pretty print the certificate we will see a few interesting things:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 63 (0x3f)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201606201330"
        Validity:
            Not Before: Fri Jul 15 05:46:00 2016
            Not After : Tue Jul 15 05:46:00 2036
        Subject: "CN=Smart Card CA,O=IPA.LOCAL"
        ...
        Signed Extensions:
            ...
            Name: Certificate Basic Constraints
            Critical: True
            Data: Is a CA with no maximum path length.
            ...

Observe that:

  • The certificate is indeed a CA.
  • The serial number (63) agrees with the CA’s LDAP entry.
  • The validity period is 20 years, the default for CAs in Dogtag. This cannot be overridden on a per-CA basis right now, but addressing this is a priority.

Finally, let’s look at the raw entry for the CA in the FreeIPA database:

dn: cn=sc,cn=cas,cn=ca,dc=ipa,dc=local
cn: sc
ipaCaIssuerDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
objectClass: ipaca
objectClass: top
ipaCaSubjectDN: CN=Smart Card CA,O=IPA.LOCAL
ipaCaId: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
description: Smart Card CA

We can see that this entry also contains the subject and issuer DNs, and the ipaCaId attribute holds the Dogtag authority ID, which allows the FreeIPA framework to dereference the local ID (sc) to the Dogtag ID as needed. We also see that the description attribute is local to FreeIPA; Dogtag also has a description attribute for lightweight CAs but FreeIPA uses its own.

Lightweight CA replication

FreeIPA servers replicate objects in the FreeIPA directory among themselves, as do Dogtag replicas (note: in Dogtag, the term clone is often used). All Dogtag instances in a replicated environment need to observe changes to lightweight CAs (creation, modification, deletion) that were performed on another replica and update their own view so that they can respond to requests consistently. This is accomplished via an LDAP persistent search which is run in a monitor thread. Care was needed to avoid race conditions. Fortunately, the solution for LDAP-based profile storage provided a fine starting point for the authority monitor; although lightweight CAs are more complex, many of the same race conditions can occur and these were already addressed in the LDAP profile monitor implementation.
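
You can watch the same subtree from a shell with a persistent search, which is essentially what the monitor thread does. This sketch uses OpenLDAP’s ldapsearch persistent search extension; the bind credentials are whatever your deployment uses:

# ldapsearch -LLL -D "cn=Directory Manager" -W \
    -b ou=authorities,ou=ca,o=ipaca \
    -E ps=any/1/1 '(objectClass=authority)' cn authoritySerial authorityKeyHost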

But unlike LDAP-based profiles, a lightweight CA consists of more than just an LDAP object; there is also the signing key. The signing key lives in Dogtag’s NSSDB and for security reasons cannot be transported through LDAP. This means that when a Dogtag clone observes the addition of a lightweight CA, an out-of-band mechanism to transport the signing key must also be triggered.

This mechanism is covered in the design pages but the summarised process is:

  1. A Dogtag clone observes the creation of a CA on another server and starts a KeyRetriever thread. The KeyRetriever is implemented as part of Dogtag, but it is configured to run the /usr/libexec/ipa/ipa-pki-retrieve-key program, which is part of FreeIPA. The program is invoked with arguments of the server to request the key from (this was stored in the authorityKeyHost attribute mentioned earlier), and the nickname of the key to request.
  2. ipa-pki-retrieve-key requests the key from the Custodia daemon on the source server. It authenticates as the dogtag/<requestor-hostname>@REALM service principal. If authenticated and authorised, the Custodia daemon exports the signing key from Dogtag’s NSSDB wrapped by the main CA’s private key, and delivers it to the requesting server. ipa-pki-retrieve-key outputs the wrapped key then exits.
  3. The KeyRetriever reads the wrapped key and imports (unwraps) it into the Dogtag clone’s NSSDB. It then initialises the Dogtag CA’s Signing Unit allowing the CA to service signing requests on that clone, and adds its own hostname to the CA’s authorityKeyHost attribute.

Some excerpts of the CA debug log on the clone (not the server on which the sub-CA was first created) shows this process in action. The CA debug log is found at /var/log/pki/pki-tomcat/ca/debug. Some irrelevant messages have been omitted.

[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: ADD
[25/Jul/2016:15:45:56][authorityMonitor]: readAuthority: new entryUSN = 109
[25/Jul/2016:15:45:56][authorityMonitor]: CertificateAuthority init 
[25/Jul/2016:15:45:56][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:45:56][authorityMonitor]: SigningUnit init: debug Certificate object not found
[25/Jul/2016:15:45:56][authorityMonitor]: CA signing key and cert not (yet) present in NSSDB
[25/Jul/2016:15:45:56][authorityMonitor]: Starting KeyRetrieverRunner thread

Above we see the authorityMonitor thread observe the addition of a CA. It adds the CA to its internal map and attempts to initialise it, which fails because the key and certificate are not available, so it starts a KeyRetrieverRunner in a new thread.

[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Running ExternalProcessKeyRetriever
[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: About to execute command: [/usr/libexec/ipa/ipa-pki-retrieve-key, caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd, f24b-0.ipa.local]

The KeyRetrieverRunner thread invokes ipa-pki-retrieve-key with the nickname of the key it wants, and a host from which it can retrieve it. If a CA has multiple sources, the KeyRetrieverRunner will try these in order with multiple invocations of the helper, until one succeeds. If none succeed, the thread goes to sleep and retries when it wakes up, initially after 10 seconds, backing off exponentially.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Importing key and cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Reinitialising SigningUnit
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got token Internal Key Storage Token by name
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got private key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got public key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

The key retriever successfully returned the key data and import succeeded. The signing unit then gets initialised.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Adding self to authorityKeyHosts attribute
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: In LdapBoundConnFactory::getConn()
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: new entryUSN = 361
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: nsUniqueId = 4dd42782-4a4f11e6-b003b01c-c8916432
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: MODIFY
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: new entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: known entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: data is current

Finally, the Dogtag clone adds itself to the CA’s authorityKeyHosts attribute. The authorityMonitor observes this change but ignores it because its view is current.

Certificate renewal

CA signing certificates will eventually expire, and therefore require renewal. Because the FreeIPA framework operates with low privileges, it cannot add a Certmonger tracking request for sub-CAs when it creates them. Furthermore, although the renewal (i.e. the actual signing of a new certificate for the CA) should only happen on one server, the certificate must be updated in the NSSDB of all Dogtag clones.

As mentioned earlier, the ipa-certupdate command has been enhanced to add Certmonger tracking requests for FreeIPA-managed lightweight CAs. The actual renewal will only be performed on whichever server is the renewal master when Certmonger decides it is time to renew the certificate (assuming that the tracking request has been added on that server).

Let’s run ipa-certupdate on the renewal master to add the tracking request for the new CA. First observe that the tracking request does not exist yet:

# getcert list -d /etc/pki/pki-tomcat/alias |grep subject
        subject: CN=CA Audit,O=IPA.LOCAL 201606201330
        subject: CN=OCSP Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=CA Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=f24b-0.ipa.local,O=IPA.LOCAL 201606201330

As expected, we do not see our sub-CA certificate above. After running ipa-certupdate the following tracking request appears:

Request ID '20160725222909':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB',pin set
        certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB'
        CA: dogtag-ipa-ca-renew-agent
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=Smart Card CA,O=IPA.LOCAL
        expires: 2036-07-15 05:46:00 UTC
        key usage: digitalSignature,nonRepudiation,keyCertSign,cRLSign
        pre-save command: /usr/libexec/ipa/certmonger/stop_pkicad
        post-save command: /usr/libexec/ipa/certmonger/renew_ca_cert "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd"
        track: yes
        auto-renew: yes

As for updating the certificate in each clone’s NSSDB, Dogtag itself takes care of that. All that is required is for the renewal master to update the CA’s authoritySerial attribute in the Dogtag database. The renew_ca_cert Certmonger post-renewal hook script performs this step. Each Dogtag clone observes the update (in the monitor thread), looks up the certificate with the indicated serial number in its certificate repository (a new entry that will also have been recently replicated to the clone), and adds that certificate to its NSSDB. Again, let’s observe this process by forcing a certificate renewal:

# getcert resubmit -i 20160725222909
Resubmitting "20160725222909" to "dogtag-ipa-ca-renew-agent".

After about 30 seconds the renewal process is complete. When we examine the certificate in the NSSDB we see, as expected, a new serial number:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd" \
    | grep -i serial
        Serial Number: 74 (0x4a)

We also see that the renew_ca_cert script has updated the serial in Dogtag’s database:

# ldapsearch -D cn="Directory Manager" -w4me2Test -b o=ipaca \
    '(cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd)' authoritySerial
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 74

Finally, if we look at the CA debug log on the clone, we’ll see that the authority monitor observes the serial number change and updates the certificate in its own NSSDB (again, some irrelevant or low-information messages have been omitted):

[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: Processed change controls.
[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: MODIFY
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: new entryUSN = 1832
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: known entryUSN = 361
[26/Jul/2016:10:43:28][authorityMonitor]: CertificateAuthority init 
[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL
[26/Jul/2016:10:43:28][authorityMonitor]: Updating certificate in NSSDB; new serial number: 74

When the authority monitor processes the change, it reinitialises the CA including its signing unit. Then it observes that the serial number of the certificate in its NSSDB differs from the serial number from LDAP. It pulls the certificate with the new serial number from its certificate repository, imports it into NSSDB, then reinitialises the signing unit once more and sees the correct serial number:

[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 74
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

Currently this update mechanism is only used for lightweight CAs, but it would work just as well for the main CA too, and we plan to switch at some stage so that the process is consistent for all CAs.

Wrapping up

I hope you have enjoyed this tour of some of the lightweight CA internals, and in particular seeing how the design actually plays out on your systems in the real world.

FreeIPA lightweight CAs has been the most complex and challenging project I have ever undertaken. It took the best part of a year from early design and proof of concept, to implementing the Dogtag lightweight CAs feature, then FreeIPA integration, and numerous bug fixes, refinements or outright redesigns along the way. Although there are still some rough edges, some important missing features and, I expect, many an RFE to come, I am pleased with what has been delivered and the overall design.

Thanks are due to all of my colleagues who contributed to the design and review of the feature; each bit of input from all of you has been valuable. I especially thank Ade Lee and Endi Dewata from the Dogtag team for their help with API design and many code reviews over a long period of time, and from the FreeIPA team Jan Cholasta and Martin Babinsky for their invaluable input into the design, and much code review and testing. I could not have delivered this feature without your help; thank you for your collaboration!

by ftweedal at July 26, 2016 02:01 AM

July 25, 2016

Fraser Tweedale

Lightweight Sub-CAs in FreeIPA 4.4

Last year FreeIPA 4.2 brought us some great new certificate management features, including custom certificate profiles and user certificates. The upcoming FreeIPA 4.4 release builds upon this groundwork and introduces lightweight sub-CAs, a feature that lets admins mint new CAs under the main FreeIPA CA and allows certificates for different purposes to be issued in different certificate domains. In this post I will review the use cases and demonstrate the process of creating, managing and issuing certificates from sub-CAs. (A follow-up post will detail some of the mechanisms that operate behind the scenes to make the feature work.)

Use cases

Currently, all certificates issued by FreeIPA are issued by a single CA. Say you want to issue certificates for several purposes: regular server certificates, user certificates for VPN authentication, and user certificates for authenticating to a particular web service. Assuming the certificates bear the appropriate Key Usage and Extended Key Usage extensions (with the default profile, they do), a certificate issued for one of these purposes can be used for all of the other purposes.

Issuing certificates for particular purposes (especially client authentication scenarios) from a sub-CA allows an administrator to configure the endpoint that authenticates the clients to use the immediate issuer certificate for validating client certificates. Therefore, if you had a sub-CA for issuing VPN authentication certificates, and a different sub-CA for issuing certificates for authenticating to the web service, one could configure these services to accept certificates issued by the relevant CA only. Thus, where previously the scope of usability may have been unacceptably broad, administrators now have more fine-grained control over how certificates can be used.

Finally, another important consideration is that while revoking the main IPA CA is usually out of the question, it is now possible to revoke an intermediate CA certificate. If you create a CA for a particular organisational unit (e.g. some department or working group) or service, then when that unit or service ceases to operate or exist, the related CA certificate can be revoked, rendering certificates issued by that CA useless, as long as relying endpoints perform CRL or OCSP checks.
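
The revocation itself is a one-liner; a sketch, with a placeholder serial number for the sub-CA certificate (reason 5 is cessationOfOperation):

% ipa cert-revoke 63 --revocation-reason=5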

Creating and managing sub-CAs

In this scenario, we will add a sub-CA that will be used to issue certificates for users’ smart cards. We assume that a profile for this purpose already exists, called userSmartCard.

To begin with, we are authenticated as admin or another user that has CA management privileges. Let’s see what CAs FreeIPA already knows about:

% ipa ca-find
------------
1 CA matched
------------
  Name: ipa
  Description: IPA CA
  Authority ID: d3e62e89-df27-4a89-bce4-e721042be730
  Subject DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
----------------------------
Number of entries returned 1
----------------------------

We can see that FreeIPA knows about the ipa CA. This is the "main" CA in the FreeIPA infrastructure. Depending on how FreeIPA was installed, it could be a root CA or it could be chained to an external CA. The ipa CA entry is added automatically when installing or upgrading to FreeIPA 4.4.

Now, let’s add a new sub-CA called sc:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The --subject option gives the full Subject Distinguished Name for the new CA; it is mandatory, and must be unique among CAs managed by FreeIPA. An optional description can be given with --desc. In the output we see that the Issuer DN is that of the IPA CA.

Having created the new CA, we must add it to one or more CA ACLs to allow it to be used. CA ACLs were added in FreeIPA 4.2 for defining policies about which profiles could be used for issuing certificates to which subject principals (note: the subject principal is not necessarily the principal performing the certificate request). In FreeIPA 4.4 the CA ACL concept has been extended to also include which CA is being asked to issue the certificate.

We will add a CA ACL called user-sc-userSmartCard and associate it with all users, with the userSmartCard profile, and with the sc CA:

% ipa caacl-add user-sc-userSmartCard --usercat=all
------------------------------------
Added CA ACL "user-sc-userSmartCard"
------------------------------------
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all

% ipa caacl-add-profile user-sc-userSmartCard --certprofile userSmartCard
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
  Profiles: userSmartCard
-------------------------
Number of members added 1
-------------------------

% ipa caacl-add-ca user-sc-userSmartCard --ca sc
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
-------------------------
Number of members added 1
-------------------------

A CA ACL can reference multiple CAs individually, or, as we saw with users above, we can associate a CA ACL with all CAs by setting --cacat=all when we create the CA ACL, or later via the ipa caacl-mod command.
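For example, a sketch of an ACL that covers every CA from the start (the ACL name here is purely illustrative):

% ipa caacl-add user-any-ca --usercat=all --cacat=all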

A special behaviour of CA ACLs with respect to CAs must be mentioned: if a CA ACL is associated with no CAs (either individually or by category), then it allows access to the ipa CA (and only that CA). This behaviour, though inconsistent with other aspects of CA ACLs, is for compatibility with pre-sub-CAs CA ACLs. An alternative approach is being discussed and could be implemented before the final release.

Requesting certificates from sub-CAs

The ipa cert-request command has learned the --ca argument for directing the certificate request to a particular sub-CA. If it is not given, it defaults to ipa.

alice already has a CSR for the key in her smart card, so now she can request a certificate from the sc CA:

% ipa cert-request --principal alice \
    --profile userSmartCard --ca sc /path/to/csr.req
  Certificate: MIIDmDCCAoCgAwIBAgIBQDANBgkqhkiG9w0BA...
  Subject: CN=alice,O=IPA.LOCAL
  Issuer: CN=Smart Card CA,O=IPA.LOCAL
  Not Before: Fri Jul 15 05:57:04 2016 UTC
  Not After: Mon Jul 16 05:57:04 2018 UTC
  Fingerprint (MD5): 6f:67:ab:4e:0c:3d:37:7e:e6:02:fc:bb:5d:fe:aa:88
  Fingerprint (SHA1): 0d:52:a7:c4:e1:b9:33:56:0e:94:8e:24:8b:2d:85:6e:9d:26:e6:aa
  Serial number: 64
  Serial number (hex): 0x40

Certmonger has also learned the -X/--issuer option for specifying that the request be directed to the named issuer. There is a clash of terminology here: the "CA" terminology in Certmonger is already used to refer to a particular CA "endpoint"; various kinds of CAs and multiple instances thereof are supported. But now, with Dogtag and FreeIPA, a single CA endpoint may actually host many CAs. Conceptually this is similar to HTTP virtual hosts, with the -X option corresponding to the Host: header for disambiguating the CA to be used.

If the -X option was given when creating the tracking request, the Certmonger FreeIPA submit helper uses its value in the --ca option to ipa cert-request. These requests are subject to CA ACLs.
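To give a flavour of how this looks on the command line (a sketch only; the file paths and principal are illustrative, not from an actual deployment):

# getcert request -c IPA \
    -f /etc/pki/tls/certs/alice.pem -k /etc/pki/tls/private/alice.key \
    -K alice@IPA.LOCAL -T userSmartCard -X sc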

Limitations

It is worth mentioning a few of the limitations of the sub-CAs feature, as it will be delivered in FreeIPA 4.4.

All sub-CAs are signed by the ipa CA; there is no support for "nesting" CAs. This limitation is imposed by FreeIPA – the lightweight CAs feature in Dogtag does not have this limitation. It could be easily lifted in a future release, if there is a demand for it.

There is no support for introducing unrelated CAs into the infrastructure, either by creating a new root CA or by importing an unrelated external CA. Dogtag does not have support for this yet, either, but the lightweight CAs feature was designed so that this would be possible to implement. This is also why all the commands and argument names mention "CA" instead of "Sub-CA". I expect that there will be demand for this feature at some stage in the future.

Currently, the key type and size are fixed at RSA 2048 bits. The same is true in Dogtag, and addressing this is a fairly high priority. Similarly, the validity period is fixed; we will need to address this too, probably by allowing custom CA profiles to be used.

Conclusion

The Sub-CAs feature will round out FreeIPA’s certificate management capabilities, making FreeIPA a more attractive solution for organisations with sophisticated certificate requirements. Multiple security domains can be created for issuing certificates with different purposes or scopes. Administrators have a simple interface for creating and managing CAs, and rules for how those CAs can be used.

There are some limitations which may be addressed in a future release; the ability to control key type/size and CA validity period will be the highest priority among them.

This post examined the use cases and high-level user/administrator experience of sub-CAs. In the next post, I will detail some of the machinery that makes the sub-CAs feature work.

by ftweedal at July 25, 2016 02:32 AM

July 23, 2016

Rich Megginson

How to find build-time vs. run-time dependencies of a gem

Using ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
gem2rpm 0.11.3
gem 2.4.8

I'm trying to convert gems to rpms. Unfortunately, gem2rpm -d does not separate/classify the dependencies. What I really need is a separate list of run-time dependencies. I can get this with gem spec --ruby. For example:
$ gem spec --ruby systemd-journal-1.2.2.gem
# -*- encoding: utf-8 -*-
# stub: systemd-journal 1.2.2 ruby lib

Gem::Specification.new do |s|
  s.name = "systemd-journal"
  s.version = "1.2.2"
...
  if s.respond_to? :specification_version then
    s.specification_version = 4

    if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
      s.add_runtime_dependency(%q<ffi>, ["~> 1.9.0"])
      s.add_development_dependency(%q<rspec>, ["~> 3.1"])
      s.add_development_dependency(%q<simplecov>, ["~> 0.9"])
      s.add_development_dependency(%q<rubocop>, ["~> 0.26"])
      s.add_development_dependency(%q<rake>, ["~> 10.3"])
      s.add_development_dependency(%q<yard>, ["~> 0.8.7"])
      s.add_development_dependency(%q<pry>, ["~> 0.10"])
    else

So I need to add Requires: rubygem(ffi) to the spec.
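In the spec, that becomes something like the following (my sketch: the pessimistic "~> 1.9.0" constraint expands to a bounded range, and the development dependencies would only matter as BuildRequires if %check actually runs the test suite):

Requires:      rubygem(ffi) >= 1.9.0
Requires:      rubygem(ffi) < 1.10
BuildRequires: rubygem(rspec)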

July 23, 2016 02:17 AM

July 21, 2016

Rob Crittenden

novajoin microservice integration

novajoin is a project for OpenStack and IPA integration. It is a service that allows instances created in nova to be added to IPA, with a host OTP generated automatically. This OTP will then be passed into the instance to be used for enrollment during the cloud-init stage.

The end result is that a new instance will seamlessly be enrolled as an IPA client upon first boot.

Additionally, a class can be associated with an instance using Glance metadata so that IPA automember rules will automatically assign the new host to the appropriate hostgroups. Once that is done you can set up HBAC and sudo rules to grant the appropriate permissions/capabilities to all hosts in that group.

In short it can simplify administration significantly.

In the current iteration, novajoin consists of two pieces: a REST microservice and an AMQP notification listener.

The REST microservice is used to return dynamically generated metadata that will be added to the information that describes a given nova instance. This metadata is available at first boot and this is how novajoin injects the OTP into the instance for use with ipa-client-install. The framework for this change is being implemented in nova in this review: https://review.openstack.org/317739.

The REST server just handles the metadata; cloud-init does the rest. A cloud-init script is provided which glues the two together. It installs the needed packages, retrieves the metadata, then calls ipa-client-install with the requisite options.

The other server is an AMQP listener that identifies when an IPA-enrolled instance is deleted and removes the host from IPA. It may eventually handle floating IP changes as well, automatically updating IPA DNS entries. The issue here is knowing which hostname to assign to the floating IP.

Glance images can have metadata as well which describes the image, such as OS distribution and version. If these have been set then novajoin will include this in the IPA entry it creates.

The basic flow looks something like this:

  1. Boot an instance in nova, adding IPA metadata that specifies ipa_enroll True and optionally ipa_hostclass (a sketch of this command is shown after the list).
  2. The instance boots. During cloud-init it retrieves its metadata.
  3. During metadata retrieval, ipa host-add is executed, adding the host to IPA along with any available image metadata and generating an OTP.
  4. The OTP and FQDN are returned in the metadata.
  5. Our cloud-init script is called to install the IPA client packages and retrieve the OTP and FQDN.
  6. ipa-client-install --hostname FQDN --password OTP is called.
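As a sketch of step 1 (the image, flavor, and hostclass names are mine; the metadata keys are the ones described above):

nova boot --image rhel-7 --flavor m1.small \
    --meta ipa_enroll=True --meta ipa_hostclass=webservers \
    test-instance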

This leaves us with an IPA-enrolled client which can have permissions granted via HBAC and sudo rules (like who is allowed to log into this instance, what sudo commands are allowed, etc).

by rcritten at July 21, 2016 06:09 PM

Red Hat Blog

Thinking Through an Identity Management Deployment

As the number of production deployments of Identity Management (IdM) grows and as many more pilots and proof of concepts come into being, it becomes (more and more) important to talk about best practices. Every production deployment needs to deal with things like failover, scalability, and performance.  In turn, there are a few practical questions that need to be answered, namely:

  • How many replicas do I need?
  • How should these replicas be distributed between my datacenters?
  • How should these replicas be connected to each other?

The answer to these questions depends on the specifics of your environment. But before we dive into how to determine the answers, it is important to realise that any two replicas, say N and M, can have one replication agreement to replicate the main identity data and another replication agreement to replicate certificate information. These two replication channels are completely independent. The reason for this is that the Certificate Authority (CA) component of IdM is optional. If you do not use it then you do not have any certificates to replicate and thus you can skip configuring a replication topology for your CAs.

IdM is built with a general assumption that the CA component, if used, will be installed on some machines and not on others. However, practice shows that having different images or deployment scripts for different replicas is more overhead than having a single full image, and thus having CAs installed on every replica. If you prefer a CA on every replica then you can use the same topology for the main and CA-related replication agreements. Unfortunately, until recently there was no tool that would allow you to visualize the layout of your deployment and manage replication agreements in an intuitive fashion. To address this problem the FreeIPA project added a topology management tool that provides a nice graphical view. Take a look at the following demo that was shown at the Identity Management booth at Red Hat Summit (2016).

Another important challenge to consider is that not all replicas are the same – even if they each have the same components installed. The first server that you install becomes the tracker for certificates and keys and is responsible for CRL generation. Only one system in the whole deployment can bear this responsibility. This means that one should:

  • Know which server was deployed first.
  • If something happens to that server – transition its tracking and CRL generation responsibility to some other server.
  • Make sure you know which server is now responsible for these special functions.

In the future we expect the topology user interface to help with this task – but this capability is not yet implemented.

Having covered some of the “groundwork” in terms of replication – we can now jump into a simple list of questions that will help you to determine the best parameters for your deployment.

How many datacenters do you have?

Let's, for example, imagine that you have three datacenters in different geographies: Datacenter A, Datacenter B, and Datacenter C.

How many clients do you have in each datacenter and what operating systems (and versions) do they run?

Let’s use the data in the following table for reference:

Datacenter | Total # of Servers | Red Hat Enterprise Linux 5 | Red Hat Enterprise Linux 6 | Red Hat Enterprise Linux 7 | UNIX | Application(s)
A          | 10K                | 2K                         | 6K                         | 1K                         | 1K   | 50
B          | 6K                 | 1K                         | 3K                         | 2K                         | -    | -
C          | 7K                 | 3K                         | 3K                         | 1K                         | -    | 30

Clients can also be divided into several buckets by type:

  • Caching clients – clients that use SSSD and cache a lot of information so that they do not need to query the server all the time.
  • Moderate clients – clients that do not use SSSD or some other caching mechanism and query servers on every authentication (but don’t query more information than they actually need).
  • Chatty clients – these are the clients that do a lot of queries and don’t necessarily cache information or care if they request more information than is needed.

Moderate and chatty clients may have a significant impact on your environment but, until you determine that you have such a client, you can assume that you do not have any. If you determine that some clients or applications are chatty – it might make sense to budget an extra replica or two for your datacenter(s).

The recommended client-to-server ratio is about 2-3K clients per server, assuming that users authenticate multiple times over the course of the day but not every minute.

Datacenter | Total # of Servers | Caching Clients | Moderate Clients | Chatty Clients | Replicas
A          | 10K                | 9K              | 1K               | 10             | 5
B          | 6K                 | 5K              | 1K               | 0              | 2
C          | 7K                 | 6K              | 1K               | 5              | 3

For Datacenter A we have about 9K clients that cache well. That amounts to about 3-4 replicas; three would be insufficient if many users were logging in, so we will plan on four. One extra replica should be able to serve the rest of the clients and a number of chatty applications, so five looks like a good number.

For Datacenter B two replicas should be enough. If you see issues with that amount you can add another replica later.

In Datacenter C one would need a couple of replicas for caching clients and at least one for the remaining moderate and chatty clients – a total of three seems like a good number.

The whole deployment amounts to 10 replicas. As of Red Hat Enterprise Linux 7.2 topologies with up to 20 replicas are supported.

So far we have managed to answer the first two questions. The last one – about the topology – can be solved by adhering to the following rules:

  1. Connect a replica to at least two other replicas.
  2. Do not connect a replica to more than four other replicas.

Note that these first two recommendations are not hard requirements. Under some conditions it might make sense to have a single replication agreement or to have five. The maximum of four replication agreements was established to prevent replication overhead from causing performance issues on the node and degrading its ability to serve clients.

  3. Connect datacenters with each other so that a datacenter is connected to at least a couple of other datacenters.
  4. Connect datacenters with at least a pair of replication agreements.
  5. Have at least two servers per datacenter.

In following these rules it is quite easy to create a topology that resembles the following:

image_one

As one can see the topology meets all of the above listed guidelines.
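On FreeIPA 4.3 and later, where replication agreements are managed as topology segments, one of these connections can be wired up with something like the following (host names are made up; a separate segment in the "ca" suffix would be needed for the CA replication topology):

ipa topologysegment-add domain replica1-a-to-replica1-b \
    --leftnode=replica1.dc-a.example.com \
    --rightnode=replica1.dc-b.example.com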

In general, if one has datacenters of a similar size, the topology per datacenter can be the same. In fact, it might make it easier to start with the following diagram and add or remove replicas on an as needed basis.

image_two

As always – your comments, experiences, and feedback are welcome.

by Dmitri Pal at July 21, 2016 03:25 PM

July 19, 2016

Ben Lipton

Thinking about templating for automatic CSR generation

Contents

Background

I am working on a project (ticket, design) to simplify creating certificates in FreeIPA. Currently administrators must generate a Certificate Signing Request (CSR) matching the type of certificate they wish to issue. They submit this CSR to FreeIPA using the ipa cert-request command, and if all the specified fields match the data FreeIPA has about the certificate subject, a cert will be issued. This seems a bit silly; if FreeIPA has this information already, can’t it automatically generate a CSR with the correct data?

However, different certificate applications require different data, so the Certificate Profile (a concept from the Dogtag CA that specifies the fields in the cert, constraints on their values, and how the final values should be constructed) needs to contain information on how to transform the data in FreeIPA into the fields of the certificate. Further, different administrators may want to use different tools to manage their private keys, so we must be able to communicate these certificate field values back in formats understood by different utilities such as openssl and certutil. Those tools will be responsible for generating the actual CSR from the provided configuration.

As suggested in the Mapping Rules design, the first implementation of this system used python to implement the low-level formatting rules, such as "return the user's email address, prefixed by the string 'email:'". However, it is a goal of this project to allow new rules to be added at runtime, so these rules must be text-based rather than part of the code. This post will try to imagine what the rules would look like if implemented using the Jinja2 templating language.
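To make that concrete, here is a minimal sketch (mine, not from the design) of evaluating such a text-based rule with Jinja2:

from jinja2 import Template

# a hypothetical low-level "data rule", stored as text rather than code
rule = "email={{subject.email}}"

# user data as it might be fetched from FreeIPA
subject = {"email": "alice@example.com"}

print(Template(rule).render(subject=subject))   # email=alice@example.com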

Requirements

We must at a minimum be able to generate two different types of configuration, the openssl config file:

[ req ]
prompt = no
encrypt_key = no

distinguished_name = dn
req_extensions = exts

[ dn ]
O=DOMAIN.EXAMPLE.COM
CN=user

[ exts ]
subjectAltName=@SAN

[ SAN ]
email=user@example.com
dirName=SANdn

[ SANdn ]
1.DC=com
2.DC=example
CN=users
UID=user

and the certutil command line:

certutil -R -a -s "CN=user,O=DOMAIN.EXAMPLE.COM" --extSAN "email:user@example.com,dn:UID=user;CN=users;DC=example;DC=com"

Some interesting things to note about these formats:

  • The contents of an extension can be constructed from multiple sources, such as an email address and a distinguished name.
  • The openssl format is hierarchical. Some parameters, such as req_extensions and dirName, always refer to the name of a new config section. Others can optionally refer to a config section using an @.
  • In openssl, the certificate subject is created under the [req] section, while extensions are created under their own section.
  • Openssl has a quirky way of denoting distinguished names. They are ordered from least to most specific (opposite how LDAP presents them). And if two AVAs have the same attribute type, they must be prefixed with different strings ending in . (or : or ,) as the config file format will otherwise discard all but one.
  • Certutil is also a bit quirky about distinguished names in the Subject Alt Name extension. Because the argument to the extSAN flag is comma-delimited, the components of the DN must be separated using a different delimiter, like a semicolon.

Implementations

Two-pass data interpolation

((user data -> data rules) -> syntax rules) -> output

One way we can approach constructing one extension from multiple sources is to use two sets of rules - one rule for each data item that provides a value for the extension, and one rule specifying the name and syntax of the extension as a whole. We evaluate the data rules first, then feed the values produced into the associated syntax rules to get the final output for that extension. Finally, the extension output is passed to the formatter to produce the final output. We wish to express the data and syntax rules using the templating language, but the formatters (one for each CSR generation tool) will be implemented as python classes.

So how do we represent openssl sections in this scheme? The formatter needs to accept input in a (very limited) markup language, which defines where the sections are, what goes into them, and perhaps whether a given line should be placed under [req] or [exts]. Even with the features of the formatter markup very limited, it would still be possible for a user to accidentally or intentionally inject some markup that would make it impossible to generate a certificate for them. So, some kind of escaping is also needed, but it would be jinja2 template markup escaping, not the HTML escaping that jinja2 already supports.

Example data rules:

email={{subject.email}}
O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}

Example syntax rules:

--extSAN {{values|join(',')}}
subjectAltName=@{{'{% section %}'}}{{values|join('\n')}}{{'{% endsection %}'}}

That’s a lot of braces! We have to escape the section and endsection tag sequences so they will appear verbatim in the final template, producing something like:

subjectAltName=@{% section %}email={{subject.email}}
URI={{subject.inetuserhttpurl}}{% endsection %}

If we used a different type of markup for the user data interpolation and for denoting sections, the escaping would not be necessary; however, we would still need to preprocess the values to escape any jinja2 markup that comes from the user data, and we would still have two types of markup being used in parallel.

Note, too, that the section tag does not exist yet in jinja2; it would need to be implemented as an extension.

Two-pass template interpolation

(user data -> (data rules -> syntax rules)) -> output

Alternatively, we can do the substitution on the templates themselves before interpolating user data, building up one big template that we then render with the data from the database. This is safer because the user-specified data never gets interpreted as a template, so we don’t have to worry about escaping the user data or limiting the features of the template language. On the other hand, this may be challenging for the rule writer, because one must keep in mind the number of times a given rule will be run through the templating engine to get the escaping correct. Data rules will be used as templates only once (consuming user data) but syntax rules will be used as templates once to incorporate the data rules into the templates, and then again when the user data is included. Thus, any template tags relating to the processing of user data (such as, I imagine, ones for specifying openssl sections) need to be escaped.

Surprisingly, this hardly changes the way the rules are written! All of the example rules given above would still be valid, but the values would be the data rules themselves rather than data rules with interpolated user data. And of course, the values would not be escaped beforehand.

Template-based hierarchical rules

user data -> collected rules -> output

One way to get away from escaping and multiple evaluations is to redesign the template so that the order of its elements no longer matters. That is, the hierarchical relationships between data items, certificate extensions, and the CSR as a whole could be encoded using jinja2 tags. It’s probably easiest to explain this idea with an example:

{% group req %}
{% entry req %}extensions={% group exts %}{% endentry %}
{% entry req %}distinguished_name={% group subjectDN %}{% endentry %}
{% entry subjectDN %}O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}{% endentry %}
{% entry exts %}subjectAltName=@{% group SAN %}{% endentry %}
{% entry SAN %}email={{subject.email}}{% endentry %}
{% entry SAN %}URI={{subject.inetuserhttpurl}}{% endentry %}

The config for certutil would be quite similar:

certutil -R -a {% group opts %}
{% entry opts %}-s {% group subjectDN %}{% endentry %}
{% entry opts %}--extSAN {% group SAN %}{% endentry %}
{% entry subjectDN %}CN={{subject.username}},O={{config.ipacertificatesubjectbase}}{% endentry %}
{% entry SAN %}email:{{subject.email}}{% endentry %}
{% entry SAN %}uri:{{subject.inetuserhttpurl}}{% endentry %}

Each CSR generation helper would have its own notion of “groups,” which would be implemented as jinja2 extensions. The entries of a group would be collected and inserted into the group in whatever way was appropriate for that helper. Each line of these templates would be either a cert mapping rule referenced in the cert profile, or something built into the formatter for the CSR generation helper. There would be no distinction between data rules and syntax rules, and the order that rules appeared in the template would be irrelevant because the tags specified the hierarchy.

This approach has some downsides, too:

  1. Section names are now specified in the rules, which means there could be conflicts between different rules, and that a rule can only ever be included in a particular section. If two sections need the same data, two different rules are needed.
  2. Some types of groups are formatted differently from others (e.g. in certutil, opts is space-separated, while SAN is comma-separated). It’s not entirely clear where this should be encoded, and how.

Concern #1 is probably an ok tradeoff, as it’s not clear how broadly reusable rules will be anyway. However, #2 would need to be addressed in any actual implementation.

Formatter-based hierarchical rules

user data -> low-level rule -> formatting code -> group objects
group objects -> higher-level rule -> formatting code -> group objects
...
group objects -> top-level rule -> output

Instead of linking rules together into a hierarchy using tags, leaving it to the templating engine to interpret that structure, we could encode the structure in the rule entities themselves and use multiple evaluations to handle the hierarchy in the formatter, before the data even gets to the templating engine. Each rule would be stored with the name of the group within which it should be rendered, as well as the names of any groups that the rule includes. For example, to adapt the rule {% entry exts %}subjectAltName=@{% group SAN %}{% endentry %} to this schema, we would say that it is an element of the “exts” group, and provides the “SAN” group. By linking up group elements to group providers, we construct a tree of rules.

The formatter would evaluate these rules beginning at the leaves and passing the results of child nodes into variables in the parent node templates. The formatter is responsible for determining what exactly gets passed into the parent node, such as an object representing an openssl config section, or just a list of formatted strings. Parent nodes decide how to present the passed objects, such as by comma-separating the strings or referencing the name of the section. This addresses concern #2 from the previous approach, because the tools of the jinja2 language are now available for expressing how to format the results of groups of rules.

Example leaf rules:

group: SAN
template: email={{subject.email}}
group: subjectDN
template: O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}

Example parent rules:

group: opts
groupProvided: SAN
template: --extSAN {{ SAN|join(',') }}
group: exts
groupProvided: SAN
template: subjectAltName=@{{ SAN.section_name }}

This has several advantages over the two-pass interpolation approaches:

  1. Profiles are simpler to configure, because they just contain a list of references to rules rather than a structured list of groups of rules.
  2. Profiles are also simpler to implement, with no sub-objects in the database.
  3. It’s no longer necessary to pay attention to escaping when writing rules. Each rule is used as a template exactly once, and complex structures are handled by the formatter code rather than template tags so tags don’t need to be passed along.
  4. User data is never used as a template, which reduces the attack surface.

However, there are also some potential concerns:

  1. Whether the openssl and certutil hierarchies for rules are compatible (i.e. can the parent group be listed in the mapping rule or must it be in the transformation rule?)
  2. Are there any instances where something needs to be a group but can’t be its own openssl section? How would we convey this to the openssl formatter?
  3. Conversely, are there cases where we would want to be able to create a section without creating a new rule? For example, a DN in a subject alternative name needs to be its own section. Do we then need rules just for filling out parts of that DN?

Conclusions

Although hierarchical rules seem like an interesting solution to avoid escaping and simplify the configuration in the cert profile itself, I think the interpolation approaches are easier to understand and explain, which is valuable for this already unexpectedly-complex feature.

Even though it is a little counter-intuitive, I lean towards the template interpolation solution rather than the straightforward data interpolation one because it doesn’t incorporate user data until the last step. This would make it incompatible with the existing python-based rules, but those are going to be replaced anyway, and in fact they may be vulnerable to injection attacks as well. Escaping of tags that are to be interpreted by the formatter will still be inconvenient, but we may be able to provide extensions to the template language to make that easier.

If you are interested in discussing any of these options, feel free to email me directly at the address below, or share your thoughts with the freeipa-devel mailing list. Thanks!

July 19, 2016 12:00 AM

July 13, 2016

Red Hat Blog

I Really Can’t Rename My Hosts!

Hello again! In this post I will be sharing some ideas about what you can do to solve a complex identity management challenge.

As the adoption of Identity Management (IdM) grows – especially in heterogeneous environments where some systems run Linux and user accounts are in Active Directory (AD) – the question of renaming hosts becomes more and more relevant. Here is a set of requirements that we often hear from customers:

  1. I want to be able to access my Linux hosts with credentials stored in Active Directory.
  2. I want to be able to centrally manage access control to my Linux hosts for user accounts stored in Active Directory.
  3. I want to be able to centrally manage privilege escalation (sudo) for user accounts stored in Active Directory.
  4. I want to be able to control automount maps for my Linux systems centrally.
  5. I want to be able to jump between my Linux hosts without having to enter passwords all the time (SSO).
  6. I do not want to rename my Linux hosts; they are currently part of the Active Directory DNS domain. There are business critical applications running on them… and (thus) I really can’t rename them.
  7. I want the solution to be cost effective so that I do not have to pay extra for the integration of Linux systems into my Active Directory environment.

Before we move forward it is important to clarify terminology. When we talk about single sign-on (SSO) we are talking about the ability for a user to authenticate once and then access different systems and resources without being challenged for authentication again. This is not the same as having a single account. In fact, all of the solutions discussed in this post assume that there is a single user account and that it is stored inside Active Directory. But this is not yet SSO. SSO is achieved if the user is challenged to provide a password once, usually during login to their workstation, and is then able to access other systems without being prompted for that password again. Also, when we talk about SSO inside the enterprise, the technology that provides this capability is Kerberos. It is implemented on both the Windows and Linux sides.

Now that we’ve clarified the SSO terminology we can look at how the above listed requirements can be met.

The following diagram shows the current state:

image_one

Let us drill down – exploring different options – to find out how these requirements can be met.

Option 1 – Use 3rd Party Software

image_two

This solution satisfies nearly all of the above listed requirements… the sole exception being cost effectiveness. It also puts everything – including the ability to manage Linux systems – into Active Directory. Sometimes this is desirable, sometimes it is not. For more information on the use of 3rd party software see one of my other articles. The costs associated with such a solution usually generate an interest in exploring additional options.  Let’s continue onward…

Option 2 – Use Direct Integration

I’ve written about direct integration in several of my previous blog posts. The main limitation with direct integration is that while access control can be centrally managed using the basic GPO support available in SSSD, policies like sudo or automount are unmanaged. This fails to meet requirements #3 and #4.
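For reference, a minimal sketch of what such a direct-integration setup looks like in sssd.conf (the domain name is an example; ad_gpo_access_control enables the GPO-based access control mentioned above):

[sssd]
domains = ad.example.com
services = nss, pam
config_file_version = 2

[domain/ad.example.com]
id_provider = ad
access_provider = ad
ad_gpo_access_control = enforcing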

image_three

Option 3 – Use Indirect Integration with IdM

An IdM-based solution provides a lot of benefits, as has been mentioned in other sections of my blog; however, in this specific case a problem arises with the hostnames due to the SSO requirement (i.e. requirement #5). To be able to leverage SSO between the hosts with Kerberos, the hosts have to be put into a DNS domain managed by IdM rather than one controlled by Active Directory (i.e. they would need to be renamed).

image_four

If the hosts (really) can’t be renamed, the Kerberos-based SSO approach will not work, because IdM hosts sitting in an AD DNS domain will confuse clients: the clients will request Kerberos tickets for those hosts from AD instead of from IdM, and AD will fail to resolve the Kerberos principals since these hosts are joined to IdM and have Kerberos principals from the IdM realm.

image_five

This problem is described in more detail in this document.

Deadlock? Not necessarily. There are a couple of options that can be explored here.

Option 3a – Use Indirect Integration with IdM and Exclude Hosts

Active Directory allows specifying external hosts. This means that if you have a small number of hosts that can’t be renamed, there is a way to explain to AD that these hosts are really from a different domain. With this setting Active Directory knows to rely on an external domain controller (in this case IdM) to resolve these names.

image_six

This, however, would only work when the number of such hosts is really small. Dozens of hosts would start to take a toll on Active Directory performance (according to specialists) and this is probably the last thing you want to accomplish.

Option 3b – Use Indirect Integration with IdM with SSH SSO

Another approach would be to complement the Kerberos-based authentication (or even completely replace it) with SSH-based SSO. The following two diagrams show how this can be accomplished.

image_seven

Linux hosts will be joined to IdM but will not use Kerberos for SSO. This would allow them to preserve their names. To meet the requirement not to challenge users again with username and password after the initial authentication – SSH keys could be issued to AD users. Users coming from their Windows workstations would use Kerberos SSO to access a jump host and would then be able to SSH to other systems using SSH key authentication. IdM provides centralized user and host SSH public key management – making such a deployment quite simple.
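For example, publishing an AD user’s public key through an ID view override looks something like the following (the view name, user, and key are illustrative, and the exact options available depend on the IdM version):

ipa idoverrideuser-add 'Default Trust View' aduser@ad.example.com \
    --sshpubkey="ssh-rsa AAAAB3NzaC1yc2E... aduser@workstation"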

Alternatively, Kerberos SSO can be abandoned for those hosts (it will still work fine for other hosts in the IdM domain and services running on those hosts) and SSH key based authentication can be implemented all the way through.

image_eight

It is important to note that SSH key authentication is not formally “SSO”. It is a key-based authentication tactic. It uses a key pair: a private key generated by the SSH tools and stored on the user’s workstation, and a public key that can be uploaded into IdM, which then makes it available automatically to all managed hosts on an as-needed basis. Though (again) it is not exactly “SSO”, it does allow us to avoid prompting a user for their password when he or she accesses the host in question. With this in mind, many find that the SSO requirement can either be reformulated or perhaps removed entirely.

Nevertheless, here is an outline of the steps that would need to be taken to get to the situation where all of the requirements are met:

  • Install IdM
  • Establish trust with Active Directory
  • Connect the hosts to IdM without renaming them
  • Optionally create a jump host in the IdM domain
  • Configure access control, automount, and privilege escalation policies (as needed)
  • Generate SSH keys (for workstation users) and share public keys with the IdM administrator so that he or she can upload them into IdM
  • Make any / all workstations use SSH with keys directly or via the jump host

Then… success! All of the requirements have been met.

I do hope that you will find this article to be useful.

As always, we are interested in your feedback, questions, and stories – do reach out using the comments section (below).

by Dmitri Pal at July 13, 2016 11:54 PM

July 08, 2016

Adam Young

Merging FreeIPA and Tripleo Undercloud Apache installs

My experiment yesterday left me with a broken IPA install. I aim to fix that.

To get to the start state:

From my laptop, kick off a Tripleo Quickstart, stopping prior to undercloud deployment:

./quickstart.sh --teardown all -t  untagged,provision,environment,undercloud-scripts  ayoung-dell-t1700.test

SSH in to the machine …

ssh -F /home/ayoung/.quickstart/ssh.config.ansible undercloud

and set up FreeIPA;

$ cat install-ipa.sh

#!/usr/bin/bash

sudo hostnamectl set-hostname --static undercloud.ayoung-dell-t1700.test
export address=`ip -4 addr  show eth0 primary | awk '/inet/ {sub ("/24" ,"" , $2) ; print $2}'`
echo $address `hostname` | sudo tee -a /etc/hosts
sudo yum -y install ipa-server-dns
export P=FreIPA4All
sudo ipa-server-install -U -r `hostname -d|tr "[a-z]" "[A-Z]"` -p $P -a $P --setup-dns `awk '/^name/ {print "--forwarder",$2}' /etc/resolv.conf`

Backup the HTTPD config directory:

 sudo cp -a /etc/httpd/ /root

Now continue the undercloud install:

./undercloud-install.sh 

Once that is done, the undercloud passes a sanity check. Doing a diff between the two directories shows a lot of differences.

sudo diff -r /root/httpd  /etc/httpd/

All of the files in /etc/httpd/conf.d that were placed by the IPA install are gone, as are the following module config files, which now exist only in the /root/httpd/conf.modules.d backup:

Only in /root/httpd/conf.modules.d: 00-base.conf
Only in /root/httpd/conf.modules.d: 00-dav.conf
Only in /root/httpd/conf.modules.d: 00-lua.conf
Only in /root/httpd/conf.modules.d: 00-mpm.conf
Only in /root/httpd/conf.modules.d: 00-proxy.conf
Only in /root/httpd/conf.modules.d: 00-systemd.conf
Only in /root/httpd/conf.modules.d: 01-cgi.conf
Only in /root/httpd/conf.modules.d: 10-auth_gssapi.conf
Only in /root/httpd/conf.modules.d: 10-nss.conf
Only in /root/httpd/conf.modules.d: 10-wsgi.conf

To start, I am going to back up the existing HTTPD directory:

 sudo cp -a /etc/httpd/ /home/stack/

The rest of this is easier to do as root, as I want some globbing. First, I’ll copy over the module config files:

 sudo su
 cp /root/httpd/conf.modules.d/* /etc/httpd/conf.modules.d/
 systemctl restart httpd.service

Test Keystone

 . ./stackrc 
 openstack token issue

Get a token… good to go. OK, let’s try the conf.d files.

sudo cp /root/httpd/conf.d/* /etc/httpd/conf.d/
systemctl restart httpd.service

Then as a non admin user

$ kinit admin
Password for admin@AYOUNG-DELL-T1700.TEST: 
[stack@undercloud ~]$ ipa user-find
--------------
1 user matched
--------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 776400000
  GID: 776400000
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 1
----------------------------

This is a fragile deployment, as updating either FreeIPA or the Undercloud has the potential to break one or the other…or both. But it is a start.

by Adam Young at July 08, 2016 07:29 PM

De-conflicting Swift-Proxy with FreeIPA

Port 8080 is a popular port. Tomcat uses it as the default port for unencrypted traffic. FreeIPA installs Dogtag, which runs in Tomcat. Swift proxy also chose that port number for its traffic. This means that if one is run on that port, the other cannot be. Of the two, it is easier to change FreeIPA, as the port is only used for internal traffic, whereas Swift’s port is in the service catalog and the documentation.

Changing the port in FreeIPA requires modifications in both the config directories for Dogtag and the Python code that contacts it.

The Python changes are in

/usr/lib/python2.7/site-packages/ipaplatform/base/services.py
/usr/lib/python2.7/site-packages/ipapython/dogtag.py

Look for any instances of 8080 and change them to another port that will not conflict. I chose 8181.

The config changes for Dogtag are in /etc/pki, such as /etc/pki/pki-tomcat/ca/CS.cfg; again, change 8080 to 8181.
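A blunt way to make all three edits in one shot (assuming nothing else in those files legitimately uses 8080):

sudo sed -i 's/8080/8181/g' \
    /usr/lib/python2.7/site-packages/ipaplatform/base/services.py \
    /usr/lib/python2.7/site-packages/ipapython/dogtag.py \
    /etc/pki/pki-tomcat/ca/CS.cfg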

Restart the server with:

 sudo systemctl restart ipa.service

To confirm, run a command that hits the CA:

 ipa cert-find

I have a ticket open with FreeIPA to try to get support for this added.

With these changes made, I then tested installing the undercloud on the same node, and it seems to work.

However, the IPA server is no longer running. The undercloud install seems to have cleared out the IPA config files from under /etc/httpd/conf.d. Dogtag, however, is still running, as shown by:

curl localhost:8181

The next experiment will be to see if I can preserve the IPA configuration.

by Adam Young at July 08, 2016 04:30 AM

June 30, 2016

Rob Crittenden

Nova join (take 2)

Rich Megginson started a project in the Openstack Nova service to enable automatic IPA enrollment when an instance is created. I extended this to add support for metadata and pushed it into github as novajoin, https://github.com/rcritten/novajoin

This used the hooks mechanism within nova, which allows one to extend certain operations (add, delete, networking, etc.). Unfortunately this was not well documented, nor apparently well used, and the nova team wasn’t too keen on allowing full access to all nova internals, so they killed it.

The successor is an extension of the metadata plugin system, vendordata: https://review.openstack.org/#/c/317739/

The idea is to allow one to inject custom metadata dynamically over a REST call.

IPA will provide a vendordata REST service that will create a host on demand and return the OTP for that host in the metadata. Enrollment will continue to happen via a cloud-init script which fetches the metadata to get the OTP.

A separate service will listen on notifications to capture host delete events.

I’m still working on networking, as there isn’t a clear rule for which IP should be associated with a given hostname, and when. In other words, there is still a lot of handwaving going on.

I haven’t pushed the new source yet, but I’m going to use the same project after I tag the current bits. There is no point continuing development of the hooks-based approach since nova will kill it after the Newton release.

by rcritten at June 30, 2016 05:41 PM

June 14, 2016

Striker Leggette

Authenticating to Fedora using Active Directory credentials that lack Unix attributes

This weekend at the SouthEast LinuxFest, I gave a talk about how you can authenticate to Fedora using Active Directory credentials that lack Unix attributes. Since newer deployments of the most recent versions of Active Directory no longer give you the ability, by default, to configure Unix attributes, it is important to know that this is not a show stopper.

In my talk, I showed how SSSD does ID Mapping: it converts the objectSID value of a user object from binary into a human-readable number and then runs that number through an algorithm to generate a UID. It does the same thing for group objects so that you also have GIDs. Besides the UID and GID, SSSD can use ‘fallback’ values for the home directory and shell locations. This way, you can “fill in the blanks” of missing information.

Here is an example user object we used in the demonstration to show this:

 

$ ldapsearch -LLL -h coldharbour.win.terranforge.com -D Administrator@WIN.TERRANFORGE.COM -W -b dc=win,dc=terranforge,dc=com samaccountname=youknownothing
Enter LDAP Password:

dn: CN=Jon Snow,CN=Users,DC=win,DC=terranforge,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Jon Snow
sn: Snow
givenName: Jon
distinguishedName: CN=Jon Snow,CN=Users,DC=win,DC=terranforge,DC=com
instanceType: 4
whenCreated: 20160610164605.0Z
whenChanged: 20160610164605.0Z
displayName: Jon Snow
uSNCreated: 20499
uSNChanged: 20504
name: Jon Snow
objectGUID:: Y7sOFvVwRkmrKNCJiXYkSw==
userAccountControl: 66048
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 0
lastLogoff: 0
lastLogon: 0
pwdLastSet: 131100507651203267
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAAoKzsMxIUlCWCTFRxUQQAAA==
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: youknownothing
sAMAccountType: 805306368
userPrincipalName: youknownothing@win.terranforge.com
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=win,DC=terranforge,DC=
com
dSCorePropagationData: 16010101000000.0Z

# refldap://win.terranforge.com/CN=Configuration,DC=win,DC=terranforge,DC=com

 

As you can see, Jon Snow (youknownothing) lacks four of the things that POSIX compliant systems require a user to have: UID, GID, Home Directory and Shell.  However, on a Fedora 23 system that has been joined to the same AD domain, we can successfully see that the user DOES have a UID, GID, Home Directory and Shell:

 

[root@garden ~]# cat /etc/fedora-release
Fedora release 23 (Twenty Three)
[root@garden ~]# id youknownothing
uid=436801105(youknownothing) gid=436800513(domain users) groups=436800513(domain users)
[root@garden ~]# getent passwd youknownothing
youknownothing:*:436801105:436800513:Jon Snow:/home/youknownothing:/bin/bash

 

And, we can authenticate as that user to the Fedora system:

 

[root@garden ~]# ssh youknownothing@localhost
youknownothing@localhost’s password:
[youknownothing@garden ~]$ id
uid=436801105(youknownothing) gid=436800513(domain users) groups=436800513(domain users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[youknownothing@garden ~]$

 

This happens successfully because SSSD converts the binary SID value to a number, turns that number into a UID with an algorithm, and then fills in whatever attributes are necessary for the POSIX-compliant system to accept the user as valid. The only things SSSD requires from AD to make this happen are an identifier, such as a username, and the SID attribute. In sssd.conf, we specify the fallback shell and home directory values:

 

[domain/win.terranforge.com]
id_provider = ad
ad_server = coldharbour.win.terranforge.com
default_shell=/bin/bash
fallback_homedir=/home/%u

[sssd]
services = nss, pam
config_file_version = 2
domains = win.terranforge.com

[nss]

[pam]

 

Using the ‘default_shell’ and ‘fallback_homedir’ options means that if SSSD does not find these attributes within AD, it will substitute what you give it – in this case, /bin/bash and /home/%u. This still allows you to set the unixHomeDirectory and loginShell attributes in AD for a user if you desire to do so, and SSSD will use those.

To generate a UID and GID from the object’s SID value, SSSD’s ID Mapping algorithm works very similarly to Winbind’s autorid backend. This makes it trivial to move from older Winbind configurations to SSSD while retaining the original UID and GID values. Using SSSD in this fashion makes the UIDs and GIDs consistent for each user and group across all systems joined to AD, making things like file sharing hassle-free.
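As a rough sketch of the arithmetic (this is not SSSD’s actual code; in particular, the range base for the domain is normally derived by hashing the domain SID, and here it is simply taken from the output above):

import base64, struct

def rid_from_objectsid(objectsid_b64):
    raw = base64.b64decode(objectsid_b64)
    # the last 4 bytes of a binary SID are the RID, little-endian
    return struct.unpack('<I', raw[-4:])[0]

# range base SSSD picked for this domain (436800513 - 513, from the output above)
range_base = 436800000
rid = rid_from_objectsid('AQUAAAAAAAUVAAAAoKzsMxIUlCWCTFRxUQQAAA==')
print(range_base + rid)   # 436801105, matching 'id youknownothing'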


by Striker at June 14, 2016 02:21 PM

Ben Lipton

FreeIPA and the 'subdir-objects' option

The subject of this blog post will be FreeIPA Ticket #5873, a request to fix the warning messages produced when compiling FreeIPA:

automake: warning: possible forward-incompatibility.
automake: At least a source file is in a subdirectory, but the 'subdir-objects'
automake: automake option hasn't been enabled.  For now, the corresponding output
automake: object file(s) will be placed in the top-level directory.  However,
automake: this behaviour will change in future Automake versions: they will
automake: unconditionally cause object files to be placed in the same subdirectory
automake: of the corresponding sources.
automake: You are advised to start using 'subdir-objects' option throughout your
automake: project, to avoid future incompatibilities.

Step 1: Add ‘subdir-objects’

It says we should enable the subdir-objects option, so let’s try it:

diff --git a/client/configure.ac b/client/configure.ac
index 58f23af..a97edd1 100644
--- a/client/configure.ac
+++ b/client/configure.ac
@@ -8,7 +8,7 @@ LT_INIT
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])

-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])

 AM_MAINTAINER_MODE

diff --git a/daemons/configure.ac b/daemons/configure.ac
index 2906def..8dce469 100644
--- a/daemons/configure.ac
+++ b/daemons/configure.ac
@@ -7,7 +7,7 @@ AC_INIT([ipa-server],
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])

-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])

 AM_MAINTAINER_MODE

And the result:

make[1]: Entering directory '/home/blipton/src/freeipa/dist/freeipa-4.3.90.201606021746GIT63b597d/client'
Makefile:704: ../util/.deps/ipa_krb5.Po: No such file or directory

So what’s happening here? If we search for the missing file:

$ find -name ipa_krb5.Po
./client/$(KRB5_UTIL_DIR)/.deps/ipa_krb5.Po

we see that it’s in a very odd place, a directory literally named $(KRB5_UTIL_DIR). It turns out this is a known issue with automatic dependency tracking in automake, discussed extensively in this bug report. Basically, the config.status script (which generates Makefile from Makefile.in) is directly parsing the makefile, looking for lines that include makefiles under $(DEPDIR). It uses sed to replace $(DEPDIR) with .deps, but any other variables in the line are taken verbatim. Therefore, if the SOURCES line from which this is derived includes, say, $(KRB5_UTIL_DIR)/ipa_krb5.c, config.status ends up making the oddly-named directory mentioned above.

Step 2: No variables in paths

Ok, so we can’t use variable references in our SOURCES. What if we expand all the variables, like this?

diff --git a/client/Makefile.am b/client/Makefile.am
index 3d135a3..3c9f4bb 100644
--- a/client/Makefile.am
+++ b/client/Makefile.am
@@ -13,7 +13,7 @@ endif
 export AM_CFLAGS
 
 KRB5_UTIL_DIR=../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c
+KRB5_UTIL_SRCS=../util/ipa_krb5.c
 ASN1_UTIL_DIR=../asn1
 IPA_CONF_FILE=$(sysconfdir)/ipa/default.conf
 
diff --git a/client/configure.ac b/client/configure.ac
index 58f23af..a97edd1 100644
--- a/client/configure.ac
+++ b/client/configure.ac
@@ -8,7 +8,7 @@ LT_INIT
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 
 AM_MAINTAINER_MODE
 
diff --git a/daemons/configure.ac b/daemons/configure.ac
index 2906def..8dce469 100644
--- a/daemons/configure.ac
+++ b/daemons/configure.ac
@@ -7,7 +7,7 @@ AC_INIT([ipa-server],
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])
 
 AM_MAINTAINER_MODE
diff --git a/daemons/ipa-kdb/Makefile.am b/daemons/ipa-kdb/Makefile.am
index a4ea366..a4a970a 100644
--- a/daemons/ipa-kdb/Makefile.am
+++ b/daemons/ipa-kdb/Makefile.am
@@ -2,8 +2,8 @@ NULL =
 
 KRB5_UTIL_DIR = ../../util
 IPA_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c
+KRB5_UTIL_SRCS = ../../util/ipa_krb5.c \
+		 ../../util/ipa_pwd.c
 
 AM_CPPFLAGS =						\
 	-I.						\
diff --git a/daemons/ipa-sam/Makefile.am b/daemons/ipa-sam/Makefile.am
index ea14661..66ffdff 100644
--- a/daemons/ipa-sam/Makefile.am
+++ b/daemons/ipa-sam/Makefile.am
@@ -7,7 +7,7 @@ SAMBA40EXTRA_LIBS = $(SAMBA40EXTRA_LIBPATH)	\
 			$(NULL)
 
 KRB5_UTIL_DIR=../../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_SRCS=../../util/ipa_krb5.c ../../util/ipa_pwd_ntlm.c
 ASN1_UTIL_DIR=../../asn1
 
 AM_CPPFLAGS =						\
diff --git a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
index 46a6491..cf0ffbd 100644
--- a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
+++ b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
@@ -3,9 +3,9 @@ NULL =
 MAINTAINERCLEANFILES = *~ Makefile.in
 PLUGIN_COMMON_DIR = ../common
 KRB5_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_SRCS = ../../../util/ipa_krb5.c \
+		 ../../../util/ipa_pwd.c \
+		 ../../../util/ipa_pwd_ntlm.c
 ASN1_UTIL_DIR=../../../asn1
 
 AM_CPPFLAGS =							\

Now we have a different problem:

Making distclean in ipa-pwd-extop
make[3]: Entering directory '/home/blipton/src/freeipa/dist/freeipa-4.3.90.201606021827GIT4becc18/daemons/ipa-slapi-plugins/ipa-pwd-extop'
Makefile:535: ../../../util/.deps/ipa_krb5.Plo: No such file or directory
Makefile:536: ../../../util/.deps/ipa_pwd.Plo: No such file or directory
Makefile:537: ../../../util/.deps/ipa_pwd_ntlm.Plo: No such file or directory
make[3]: *** No rule to make target '../../../util/.deps/ipa_pwd_ntlm.Plo'.  Stop.

Here it turns out that because util/.deps is used by more than one Makefile in the subdirectories of daemons, it is being removed by make distclean running in the daemons/ipa-kdb directory, and then once make reaches the daemons/ipa-slapi-plugins/ipa-pwd-extop directory the needed Plo files aren’t there anymore. There is a commit that claims to fix this issue, but I’m not certain it will be OK with the same file being referenced by multiple SOURCES directives, and it’s not included in any released version of automake anyway. So, we’re going to need to try something else.

Step 3: Utils gets its own configure file

It seems we’re having issues because multiple projects want to build and clean up the same files. So maybe it would be better to make utils its own project, responsible for building the files within that directory, and simply have the other projects depend on it. This is the same as what happens in the asn1 directory of the source tree. The following patch implements this approach:

diff --git a/Makefile b/Makefile
index 210b7ac..6e00220 100644
--- a/Makefile
+++ b/Makefile
@@ -3,7 +3,7 @@
 
 include VERSION
 
-SUBDIRS=asn1 daemons install ipapython ipalib
+SUBDIRS=util asn1 daemons install ipapython ipalib
 CLIENTDIRS=ipapython ipalib client asn1
 CLIENTPYDIRS=ipaclient ipaplatform
 
diff --git a/client/Makefile.am b/client/Makefile.am
index 3d135a3..afc2977 100644
--- a/client/Makefile.am
+++ b/client/Makefile.am
@@ -13,7 +13,7 @@ endif
 export AM_CFLAGS
 
 KRB5_UTIL_DIR=../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c
+KRB5_UTIL_LIBS=../util/ipa_krb5.la
 ASN1_UTIL_DIR=../asn1
 IPA_CONF_FILE=$(sysconfdir)/ipa/default.conf
 
@@ -52,7 +52,6 @@ sbin_SCRIPTS =			\
 ipa_getkeytab_SOURCES =		\
 	ipa-getkeytab.c		\
 	ipa-client-common.c	\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipa_getkeytab_LDADD = 		\
@@ -63,6 +62,7 @@ ipa_getkeytab_LDADD = 		\
 	$(POPT_LIBS)		\
 	$(LIBINTL_LIBS)         \
 	$(INI_LIBS)		\
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 ipa_rmkeytab_SOURCES =		\
diff --git a/client/configure.ac b/client/configure.ac
index 58f23af..836cac4 100644
--- a/client/configure.ac
+++ b/client/configure.ac
@@ -6,9 +6,9 @@ AC_INIT([ipa-client],
 LT_INIT
 
 AC_CONFIG_HEADERS([config.h])
-AC_CONFIG_SUBDIRS([../asn1])
+AC_CONFIG_SUBDIRS([../util ../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 
 AM_MAINTAINER_MODE
 
diff --git a/daemons/configure.ac b/daemons/configure.ac
index 2906def..f27312f 100644
--- a/daemons/configure.ac
+++ b/daemons/configure.ac
@@ -5,9 +5,9 @@ AC_INIT([ipa-server],
         [https://hosted.fedoraproject.org/projects/freeipa/newticket])
 
 AC_CONFIG_HEADERS([config.h])
-AC_CONFIG_SUBDIRS([../asn1])
+AC_CONFIG_SUBDIRS([../util ../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])
 
 AM_MAINTAINER_MODE
diff --git a/daemons/ipa-kdb/Makefile.am b/daemons/ipa-kdb/Makefile.am
index a4ea366..2f8bcfb 100644
--- a/daemons/ipa-kdb/Makefile.am
+++ b/daemons/ipa-kdb/Makefile.am
@@ -2,8 +2,8 @@ NULL =
 
 KRB5_UTIL_DIR = ../../util
 IPA_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c
+KRB5_UTIL_LIBS = ../../util/ipa_krb5.la \
+		 ../../util/ipa_pwd.la
 
 AM_CPPFLAGS =						\
 	-I.						\
@@ -39,7 +39,6 @@ ipadb_la_SOURCES = 		\
 	ipa_kdb_mspac.c		\
 	ipa_kdb_delegation.c	\
 	ipa_kdb_audit_as.c	\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipadb_la_LDFLAGS = 		\
@@ -53,6 +52,7 @@ ipadb_la_LIBADD = 		\
 	$(NDRPAC_LIBS)		\
 	$(UNISTRING_LIBS)	\
 	$(NSS_LIBS)             \
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 if HAVE_CMOCKA
@@ -71,7 +71,6 @@ ipa_kdb_tests_SOURCES =        \
        ipa_kdb_mspac.c         \
        ipa_kdb_delegation.c    \
        ipa_kdb_audit_as.c      \
-       $(KRB5_UTIL_SRCS)       \
        $(NULL)
 ipa_kdb_tests_CFLAGS = $(CMOCKA_CFLAGS)
 ipa_kdb_tests_LDADD =          \
@@ -81,6 +80,7 @@ ipa_kdb_tests_LDADD =          \
        $(NDRPAC_LIBS)          \
        $(UNISTRING_LIBS)       \
        $(NSS_LIBS)             \
+       $(KRB5_UTIL_LIBS)       \
        -lkdb5                  \
        -lsss_idmap             \
        $(NULL)
diff --git a/daemons/ipa-sam/Makefile.am b/daemons/ipa-sam/Makefile.am
index ea14661..17d77aa 100644
--- a/daemons/ipa-sam/Makefile.am
+++ b/daemons/ipa-sam/Makefile.am
@@ -7,7 +7,7 @@ SAMBA40EXTRA_LIBS = $(SAMBA40EXTRA_LIBPATH)	\
 			$(NULL)
 
 KRB5_UTIL_DIR=../../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_LIBS=../../util/ipa_krb5.la ../../util/ipa_pwd_ntlm.la
 ASN1_UTIL_DIR=../../asn1
 
 AM_CPPFLAGS =						\
@@ -39,7 +39,6 @@ plugin_LTLIBRARIES = 		\
 
 ipasam_la_SOURCES = 		\
 	ipa_sam.c		\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipasam_la_LDFLAGS = 		\
@@ -57,6 +56,7 @@ ipasam_la_LIBADD = 		\
 	$(SAMBA40EXTRA_LIBS)	\
 	$(SSSIDMAP_LIBS)	\
 	$(ASN1_UTIL_DIR)/libipaasn1.la  \
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 EXTRA_DIST =			\
diff --git a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
index 46a6491..50c9c66 100644
--- a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
+++ b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
@@ -3,9 +3,9 @@ NULL =
 MAINTAINERCLEANFILES = *~ Makefile.in
 PLUGIN_COMMON_DIR = ../common
 KRB5_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_LIBS = ../../../util/ipa_krb5.la \
+		 ../../../util/ipa_pwd.la \
+		 ../../../util/ipa_pwd_ntlm.la
 ASN1_UTIL_DIR=../../../asn1
 
 AM_CPPFLAGS =							\
@@ -41,6 +41,7 @@ plugin_LTLIBRARIES = libipa_pwd_extop.la
 libipa_pwd_extop_la_LIBADD  = \
 	$(builddir)/../libotp/libotp.la \
 	$(ASN1_UTIL_DIR)/libipaasn1.la  \
+	$(KRB5_UTIL_LIBS)		\
 	$(NULL)
 libipa_pwd_extop_la_SOURCES = 		\
 	common.c			\
@@ -48,7 +49,6 @@ libipa_pwd_extop_la_SOURCES = 		\
 	prepost.c			\
 	ipa_pwd_extop.c			\
 	otpctrl.c			\
-	$(KRB5_UTIL_SRCS)		\
 	$(NULL)
 
 appdir = $(IPA_DATA_DIR)
diff --git a/util/Makefile.am b/util/Makefile.am
new file mode 100644
index 0000000..a848a7c
--- /dev/null
+++ b/util/Makefile.am
@@ -0,0 +1,8 @@
+#AM_CPPFLAGS = -I../util -Iasn1c
+
+noinst_LTLIBRARIES=libipa_krb5.la libipa_pwd.la libipa_pwd_ntlm.la
+noinst_HEADERS=ipa_krb5.h ipa_mspac.h ipa_pwd.h
+
+libipa_krb5_la_SOURCES=ipa_krb5.c
+libipa_pwd_la_SOURCES=ipa_pwd.c
+libipa_pwd_ntlm_la_SOURCES=ipa_pwd_ntlm.c
diff --git a/util/configure.ac b/util/configure.ac
new file mode 100644
index 0000000..2b323c1
--- /dev/null
+++ b/util/configure.ac
@@ -0,0 +1,23 @@
+AC_PREREQ(2.59)
+m4_include(../version.m4)
+AC_INIT([ipa-server],
+        IPA_VERSION,
+        [https://hosted.fedoraproject.org/projects/freeipa/newticket])
+
+AC_CONFIG_HEADERS([config.h])
+AC_PROG_CC_C99
+LT_INIT
+
+AM_INIT_AUTOMAKE([foreign])
+
+AM_MAINTAINER_MODE
+
+AC_SUBST(VERSION)
+
+# Files
+
+AC_CONFIG_FILES([
+    Makefile
+])
+
+AC_OUTPUT

But being its own project means that the utils directory is now responsible for handling its own dependencies, which was previously done by the configure.ac files in the client and daemons directories. So with the simple utils/configure.ac file introduced by this patch, the build fails due to missing dependencies:

In file included from ipa_pwd_ntlm.c:30:0:
/usr/include/dirsrv/slapi-plugin.h:30:21: fatal error: prtypes.h: No such file or directory
compilation terminated.
Makefile:427: recipe for target 'ipa_pwd_ntlm.lo' failed

We might be able to make this work by copying the necessary dependencies into the new configure.ac file. However, adding the maintenance burden of another configure script seems undesirable, so let’s see if we can take advantage of the work that’s already being done by the existing configure scripts.
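
For the record, if we did go the route of copying dependencies, the missing pieces are roughly the NSPR/NSS checks that daemons/configure.ac already performs for code that includes the slapi-plugin headers. A minimal sketch of what would have to be duplicated, using pkg-config macros rather than whatever the real configure.ac does, might be:

PKG_CHECK_MODULES([NSPR], [nspr])
PKG_CHECK_MODULES([NSS], [nss])

with util/Makefile.am then picking up the resulting flags:

AM_CPPFLAGS = $(NSPR_CFLAGS) $(NSS_CFLAGS)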

Step 4: Old configure, new makefile

Configure scripts can generate more than one Makefile by adjusting the AC_CONFIG_FILES definition within configure.ac. So instead of giving utils its own configure script, what if we just make the packages that need it responsible for generating its Makefile themselves? The following patch does this:

diff --git a/Makefile b/Makefile
index 210b7ac..6e00220 100644
--- a/Makefile
+++ b/Makefile
@@ -3,7 +3,7 @@
 
 include VERSION
 
-SUBDIRS=asn1 daemons install ipapython ipalib
+SUBDIRS=util asn1 daemons install ipapython ipalib
 CLIENTDIRS=ipapython ipalib client asn1
 CLIENTPYDIRS=ipaclient ipaplatform
 
diff --git a/client/Makefile.am b/client/Makefile.am
index 3d135a3..afc2977 100644
--- a/client/Makefile.am
+++ b/client/Makefile.am
@@ -13,7 +13,7 @@ endif
 export AM_CFLAGS
 
 KRB5_UTIL_DIR=../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c
+KRB5_UTIL_LIBS=../util/ipa_krb5.la
 ASN1_UTIL_DIR=../asn1
 IPA_CONF_FILE=$(sysconfdir)/ipa/default.conf
 
@@ -52,7 +52,6 @@ sbin_SCRIPTS =			\
 ipa_getkeytab_SOURCES =		\
 	ipa-getkeytab.c		\
 	ipa-client-common.c	\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipa_getkeytab_LDADD = 		\
@@ -63,6 +62,7 @@ ipa_getkeytab_LDADD = 		\
 	$(POPT_LIBS)		\
 	$(LIBINTL_LIBS)         \
 	$(INI_LIBS)		\
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 ipa_rmkeytab_SOURCES =		\
diff --git a/client/configure.ac b/client/configure.ac
index 58f23af..4ca9caf 100644
--- a/client/configure.ac
+++ b/client/configure.ac
@@ -8,7 +8,7 @@ LT_INIT
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 
 AM_MAINTAINER_MODE
 
@@ -220,6 +220,7 @@ dnl ---------------------------------------------------------------------------
 
 AC_CONFIG_FILES([
     Makefile
+    ../util/Makefile
     ../asn1/Makefile
     man/Makefile
 ])
diff --git a/daemons/configure.ac b/daemons/configure.ac
index 2906def..761c15c 100644
--- a/daemons/configure.ac
+++ b/daemons/configure.ac
@@ -7,7 +7,7 @@ AC_INIT([ipa-server],
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_SUBDIRS([../asn1])
 
-AM_INIT_AUTOMAKE([foreign])
+AM_INIT_AUTOMAKE([foreign subdir-objects])
 m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])
 
 AM_MAINTAINER_MODE
@@ -332,6 +332,7 @@ AC_SUBST(LDFLAGS)
 AC_CONFIG_FILES([
     Makefile
     ../asn1/Makefile
+    ../util/Makefile
     ipa-kdb/Makefile
     ipa-sam/Makefile
     ipa-otpd/Makefile
diff --git a/daemons/ipa-kdb/Makefile.am b/daemons/ipa-kdb/Makefile.am
index a4ea366..2f8bcfb 100644
--- a/daemons/ipa-kdb/Makefile.am
+++ b/daemons/ipa-kdb/Makefile.am
@@ -2,8 +2,8 @@ NULL =
 
 KRB5_UTIL_DIR = ../../util
 IPA_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c
+KRB5_UTIL_LIBS = ../../util/ipa_krb5.la \
+		 ../../util/ipa_pwd.la
 
 AM_CPPFLAGS =						\
 	-I.						\
@@ -39,7 +39,6 @@ ipadb_la_SOURCES = 		\
 	ipa_kdb_mspac.c		\
 	ipa_kdb_delegation.c	\
 	ipa_kdb_audit_as.c	\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipadb_la_LDFLAGS = 		\
@@ -53,6 +52,7 @@ ipadb_la_LIBADD = 		\
 	$(NDRPAC_LIBS)		\
 	$(UNISTRING_LIBS)	\
 	$(NSS_LIBS)             \
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 if HAVE_CMOCKA
@@ -71,7 +71,6 @@ ipa_kdb_tests_SOURCES =        \
        ipa_kdb_mspac.c         \
        ipa_kdb_delegation.c    \
        ipa_kdb_audit_as.c      \
-       $(KRB5_UTIL_SRCS)       \
        $(NULL)
 ipa_kdb_tests_CFLAGS = $(CMOCKA_CFLAGS)
 ipa_kdb_tests_LDADD =          \
@@ -81,6 +80,7 @@ ipa_kdb_tests_LDADD =          \
        $(NDRPAC_LIBS)          \
        $(UNISTRING_LIBS)       \
        $(NSS_LIBS)             \
+       $(KRB5_UTIL_LIBS)       \
        -lkdb5                  \
        -lsss_idmap             \
        $(NULL)
diff --git a/daemons/ipa-sam/Makefile.am b/daemons/ipa-sam/Makefile.am
index ea14661..17d77aa 100644
--- a/daemons/ipa-sam/Makefile.am
+++ b/daemons/ipa-sam/Makefile.am
@@ -7,7 +7,7 @@ SAMBA40EXTRA_LIBS = $(SAMBA40EXTRA_LIBPATH)	\
 			$(NULL)
 
 KRB5_UTIL_DIR=../../util
-KRB5_UTIL_SRCS=$(KRB5_UTIL_DIR)/ipa_krb5.c $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_LIBS=../../util/ipa_krb5.la ../../util/ipa_pwd_ntlm.la
 ASN1_UTIL_DIR=../../asn1
 
 AM_CPPFLAGS =						\
@@ -39,7 +39,6 @@ plugin_LTLIBRARIES = 		\
 
 ipasam_la_SOURCES = 		\
 	ipa_sam.c		\
-	$(KRB5_UTIL_SRCS)	\
 	$(NULL)
 
 ipasam_la_LDFLAGS = 		\
@@ -57,6 +56,7 @@ ipasam_la_LIBADD = 		\
 	$(SAMBA40EXTRA_LIBS)	\
 	$(SSSIDMAP_LIBS)	\
 	$(ASN1_UTIL_DIR)/libipaasn1.la  \
+	$(KRB5_UTIL_LIBS)	\
 	$(NULL)
 
 EXTRA_DIST =			\
diff --git a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
index 46a6491..50c9c66 100644
--- a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
+++ b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am
@@ -3,9 +3,9 @@ NULL =
 MAINTAINERCLEANFILES = *~ Makefile.in
 PLUGIN_COMMON_DIR = ../common
 KRB5_UTIL_DIR = ../../../util
-KRB5_UTIL_SRCS = $(KRB5_UTIL_DIR)/ipa_krb5.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd.c \
-		 $(KRB5_UTIL_DIR)/ipa_pwd_ntlm.c
+KRB5_UTIL_LIBS = ../../../util/ipa_krb5.la \
+		 ../../../util/ipa_pwd.la \
+		 ../../../util/ipa_pwd_ntlm.la
 ASN1_UTIL_DIR=../../../asn1
 
 AM_CPPFLAGS =							\
@@ -41,6 +41,7 @@ plugin_LTLIBRARIES = libipa_pwd_extop.la
 libipa_pwd_extop_la_LIBADD  = \
 	$(builddir)/../libotp/libotp.la \
 	$(ASN1_UTIL_DIR)/libipaasn1.la  \
+	$(KRB5_UTIL_LIBS)		\
 	$(NULL)
 libipa_pwd_extop_la_SOURCES = 		\
 	common.c			\
@@ -48,7 +49,6 @@ libipa_pwd_extop_la_SOURCES = 		\
 	prepost.c			\
 	ipa_pwd_extop.c			\
 	otpctrl.c			\
-	$(KRB5_UTIL_SRCS)		\
 	$(NULL)
 
 appdir = $(IPA_DATA_DIR)
diff --git a/util/Makefile.am b/util/Makefile.am
new file mode 100644
index 0000000..a848a7c
--- /dev/null
+++ b/util/Makefile.am
@@ -0,0 +1,8 @@
+#AM_CPPFLAGS = -I../util -Iasn1c
+
+noinst_LTLIBRARIES=libipa_krb5.la libipa_pwd.la libipa_pwd_ntlm.la
+noinst_HEADERS=ipa_krb5.h ipa_mspac.h ipa_pwd.h
+
+libipa_krb5_la_SOURCES=ipa_krb5.c
+libipa_pwd_la_SOURCES=ipa_pwd.c
+libipa_pwd_ntlm_la_SOURCES=ipa_pwd_ntlm.c

But this ends badly too:

make[2]: Entering directory '/home/blipton/src/freeipa/rpmbuild/BUILD/freeipa-4.3.90.201606132126GIT475c6bc/util'
cd ../.. && make  am--refresh

Notice that make is trying to change to a directory outside of the build directory. No surprise that it doesn’t find a Makefile there. What’s going on?

The cd is triggered by one of several makefile lines that run

cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh

and in the same Makefile, top_builddir is defined to be "../..". Since this build is actually taking place in the client subdirectory, the correct relative path would be "../client". But if we look at the code in config.status, we see:

ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'`
# A ".." for each directory in $ac_dir_suffix.
ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'`

As the comment says, this looks at the relative path from the configure script to the Makefile (in this case ../util/Makefile), and replaces each directory component with "..". This makes sense when the Makefile is within the same source tree (i.e. when the Makefile is "two levels deep", the path should be ../..), but with the Makefile external to the configure tree it doesn’t work at all. Since this code is generated directly by automake, there doesn’t seem to be much we can do about this bad behavior.
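
We can reproduce the calculation by hand to see where the ../.. comes from. Assuming the relative path from the configure script to the external Makefile is ../util, as in the client case, the two sed expressions from config.status give:

$ ac_dir='../util'
$ ac_dir_suffix=/`echo "$ac_dir" | sed 's|^\.[\\/]||'`
$ echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'
../..

Both path components (.. and util) become .., so the generated rules look two levels up from the util build directory instead of back into the client directory.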

Step N: do we really need to fix this now?

One interesting option to consider would be to replace these relative paths with absolute ones based on one of the variables defined automatically in the Makefile, such as $(top_srcdir). It is possible that the logic in config.status would handle these paths better, as they wouldn’t include any ".." components. However, thanks to the bug discussed way back in Step 1, variable references in SOURCES don’t work correctly! So that’s probably no help either.

According to a message on the automake mailing list, this bug is being considered a blocker for the release of automake 2.0, so despite the scary compatibility warning in the output, we shouldn’t be forced to use subdir-objects until the handling of it is fixed. In fact, there are patches in the automake repository that are supposed to fix the bug, but the last automake release was over a year ago, so no distros are using those patches and I haven’t tested against them. Since there seems to be no good way to add this option until automake 1.16, and not having the option won’t break anything until automake 2.0, it may be easiest to just leave it alone for now.

June 14, 2016 12:00 AM

June 12, 2016

Ben Lipton

Manually requesting certs from Dogtag with certmonger debug tools

This post records the results of some experimentation with the Dogtag API. Specifically, we will show how to authenticate against the API using credentials that are automatically generated by a FreeIPA installation, how to use debug tools distributed with certmonger to issue certificates via the API, and a method of tweaking the created cert via the API parameters passed.

Acquiring the tool

We will be using the submit-d tool, included in the certmonger source distribution but not the binary packages. First we download the source and build it:

$ git clone git://git.fedorahosted.org/git/certmonger.git
$ cd certmonger
$ sudo dnf install dbus-devel gettext-devel libidn-devel
$ ./autogen.sh
$ make

Authentication setup

FreeIPA uses a client certificate stored in the NSS database /etc/httpd/alias to authenticate to Dogtag. Unfortunately, this database uses the older DBM format, while the tool we will be using requires the newer SQLite format. So first we must create a new database and copy the cert we need into it, via a PKCS12 file:

mkdir /tmp/certs
certutil -N -d sql:/tmp/certs
sudo pk12util -o ipaCert.p12 -n ipaCert -d /etc/httpd/alias -k /etc/httpd/alias/pwdfile.txt
sudo pk12util -i ipaCert.p12 -n ipaCert -d sql:/tmp/certs

At all password prompts, hit enter. We now have two certs in the new database: ipaCert, the client cert we will use for authentication, and the CA cert that signs all of the certs issued by FreeIPA:

[admin@vm-166 certs]$ certutil -L -d sql:/tmp/certs

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

ipaCert                                                      u,u,u
DOMAIN.EXAMPLE.COM IPA CA                                    ,,

Mark the CA cert as trusted; otherwise the client will refuse to talk to the server:

certutil -M -t TC,, -d sql:/tmp/certs -n 'DOMAIN.EXAMPLE.COM IPA CA'

Making requests

Generate a keypair and a CSR to submit to the CA. The openssl req command will prompt for the certificate subject; fill it out however you like.

$ openssl genrsa -out /tmp/certs/test.key
$ openssl req -new -key /tmp/certs/test.key -out /tmp/certs/test.req
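
If you’d rather skip the interactive prompts, the subject can also be passed on the command line; the subject values here are just placeholders:

$ openssl req -new -key /tmp/certs/test.key -out /tmp/certs/test.req -subj '/O=DOMAIN.EXAMPLE.COM/CN=test.example.com'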

Use the submit-d tool to submit the request to Dogtag. If we use the caIPAserviceCert template, the request goes through immediately and we are presented with a certificate:

$ src/submit-d -u https://server.example.com:8443/ca/ee/ca -U https://server.example.com:8443/ca/agent/ca -vv -d /tmp/certs -C ipaCert -a -T caIPAserviceCert -s /tmp/certs/test.req

We can see from the output that the tool makes a POST request to the profileSubmit endpoint of the server.

On the other hand, if we use a non-IPA profile such as caServerCert, the tool makes the same call but the request will be deferred until approved:

$ src/submit-d -u https://server.example.com:8443/ca/ee/ca -U https://server.example.com:8443/ca/agent/ca -vv -d /tmp/certs -C ipaCert -a -T caServerCert -s /tmp/certs/test.req

In this case, a requestId is provided; we will use this to approve the request:

result = "<?xml version="1.0" encoding="UTF-8" standalone="no"?><XMLResponse><Status>2</Status><Error>Request Deferred - {0}</Error><RequestId>  31</RequestId></XMLResponse>"
error: Request Deferred - {0}
status: 2
requestId: 31

We can approve the request with the -A flag and receive the certificate:

$ src/submit-d -u https://server.example.com:8443/ca/ee/ca -U https://server.example.com:8443/ca/agent/ca -vv -d /tmp/certs -C ipaCert -a -T caServerCert -A 31

Tweaking certificate parameters

If we look through the output of the approval command run above, we see a call to the profileProcess endpoint on the Dogtag server. Interestingly, this call includes many of the parameters of the certificate:

GET /ca/agent/ca/profileProcess?requestId=31&op=approve&xml=true&name=CN%3Dserver.example.com%2CO%3DDOMAIN.EXAMPLE.COM&notBefore=2016-06-13+02%3A50%3A46&notAfter=2018-06-03+02%3A50%3A46&authInfoAccessCritical=false&authInfoAccessGeneralNames=Record+%230%0AMethod%3A1.3.6.1.5.5.7.48.1%0ALocation+Type%3AURIName%0ALocation%3Ahttp%3A%2F%2Fserver.example.com%3A80%2Fca%2Focsp%0AEnable%3Atrue&keyUsageCritical=true&keyUsageDigitalSignature=true&keyUsageNonRepudiation=true&keyUsageKeyEncipherment=true&keyUsageDataEncipherment=true&keyUsageKeyAgreement=false&keyUsageKeyCertSign=false&keyUsageCrlSign=false&keyUsageEncipherOnly=false&keyUsageDecipherOnly=false&exKeyUsageCritical=false&exKeyUsageOIDs=1.3.6.1.5.5.7.3.1%2C1.3.6.1.5.5.7.3.2&signingAlg=SHA256withRSA HTTP/1.1

The tool gets these parameters by parsing the output of a call to the profileReview endpoint. However, we will just re-use the same values, making a small modification to the name parameter. What happens if we submit the request again, and then approve it with the modified parameters?

src/submit-d -u https://server.example.com:8443/ca/ee/ca -U https://server.example.com:8443/ca/agent/ca -vv -d /tmp/certs -C ipaCert -a -T caServerCert -A 32 -V 'name=CN%3Dnewname%2CO%3DEXAMPLE.COM&notBefore=2016-06-13+02%3A50%3A46&notAfter=2018-06-03+02%3A50%3A46&authInfoAccessCritical=false&authInfoAccessGeneralNames=Record+%230%0AMethod%3A1.3.6.1.5.5.7.48.1%0ALocation+Type%3AURIName%0ALocation%3Ahttp%3A%2F%2Fserver.example.com%3A80%2Fca%2Focsp%0AEnable%3Atrue&keyUsageCritical=true&keyUsageDigitalSignature=true&keyUsageNonRepudiation=true&keyUsageKeyEncipherment=true&keyUsageDataEncipherment=true&keyUsageKeyAgreement=false&keyUsageKeyCertSign=false&keyUsageCrlSign=false&keyUsageEncipherOnly=false&keyUsageDecipherOnly=false&exKeyUsageCritical=false&exKeyUsageOIDs=1.3.6.1.5.5.7.3.1%2C1.3.6.1.5.5.7.3.2&signingAlg=SHA256withRSA'

Our new name shows up in the certificate!

Certificate:
    Data:
        Version:  v3
        Serial Number: 0x1E
        Signature Algorithm: SHA256withRSA - 1.2.840.113549.1.1.11
        Issuer: CN=Certificate Authority,O=DOMAIN.EXAMPLE.COM
        Validity:
            Not Before: Monday, June 13, 2016 2:50:46 AM GMT
            Not  After: Sunday, June 3, 2018 2:50:46 AM GMT
        Subject: CN=newname,O=EXAMPLE.COM

June 12, 2016 12:00 AM

May 30, 2016

Alexander Bokovoy

Single sign-on into virtual machines on Linux

This weekend I looked into making single sign-on possible for Fedora 24 guests running on libvirt/KVM. Suppose you have a libvirt-based server where a number of VMs are deployed, each presenting a graphical workstation. This is not far from what ovirt.org does (the RHEV product). You want both your virtualization infrastructure and the operating systems in the VMs to be enrolled into FreeIPA and thus accessible with single sign-on from an external client.

There are several layers of single sign-on here. Once you have signed in to your external client, you presumably have valid Kerberos credentials that can be used to obtain service tickets for other services in the realm.

The second layer is connectivity to your virtualization infrastructure. This is already possible with libvirtd/Qemu, as they both support SASL authentication. It is a matter of setting the appropriate configuration variables in /etc/libvirt/libvirtd.conf and /etc/libvirt/qemu.conf, and tuning /etc/sasl2/libvirt.conf and /etc/sasl2/qemu.conf to allow SASL GSSAPI authentication. You also need to create the appropriate services in FreeIPA (libvirt/hostname, vnc/hostname, and spice/hostname) and obtain the actual keys with ipa-getkeytab. This is all described relatively well in the FreeIPA libvirt howto.
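
As a rough sketch of the server-side pieces (the option and file names below are the standard libvirt/SASL ones, but the IPA server name is a placeholder and the authoritative values are in the FreeIPA libvirt howto):

# /etc/libvirt/libvirtd.conf
listen_tcp = 1
auth_tcp = "sasl"
# (libvirtd must also be started with --listen for the TCP socket to be used)

# /etc/sasl2/libvirt.conf
mech_list: gssapi
keytab: /etc/libvirt/krb5.tab

# create the service principal in FreeIPA and fetch its key
$ ipa service-add libvirt/`hostname -f`
$ ipa-getkeytab -s ipa.example.test -p libvirt/`hostname -f` -k /etc/libvirt/krb5.tab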

Once configured, one can authenticate with SASL GSSAPI to the VNC server using the virt-manager UI or other VNC clients in GNOME that use the gtk-vnc library. This works pretty well, but it only gives you access to the actual screen of the VM, not single sign-on into the operating system in the VM.

SPICE, on the other hand, does not work with SASL GSSAPI. While the SPICE server embedded into Qemu can easily be configured to listen for SASL GSSAPI, the SPICE client libraries in GNOME don’t actually support it. In fact, the code is there, copied from gtk-vnc, but it does not work.

The SPICE client code lacks the sequence needed to obtain a Kerberos identity from an existing ticket in the default credentials cache. As a result, instead of authenticating with SASL GSSAPI, the user is left with a request to enter a ‘server password’, a concept built around the existing SPICE authentication approach where both client and server share a single password.

After I fixed this problem by querying the Kerberos principal from the default ccache, SPICE now works in much the same way as VNC. I’ll submit the patch upstream once I have time for that.

This still doesn’t make it possible to actually log in to the VM, because both VNC and SPICE servers only represent fancy screens/input devices for the VMs. There is no existing way to pass authentication through SPICE or VNC so that software running in the VM would accept a Kerberos ticket and authenticate a user based on it.

Here the concept of a guest agent comes in. Qemu has its own guest agent, which only supports the minimal set of commands needed to make virtualization management work. OVirt/RHEV has its own guest agent, which supports much more than QEmu’s version and uses its own protocol. Finally, SPICE has its own agent, vdagent, which supports even more operations related to the use of graphical resources within the VM.

The idea behind these agents is to have a separate trusted channel into the VM that can be used to query or execute something inside it. The OVirt guest agent supports logon into the VM by plugging into the PAM stack and into graphical greeters. When a user asks the OVirt portal to log on to a VM’s console, a login request can also be sent to the guest agent. This isn’t really single sign-on, as the user has to enter actual credentials, which the guest agent then injects into GDM, KDM, or the console via D-Bus.
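
To get a feel for what such a trusted channel looks like, the QEMU guest agent can be queried from the host with virsh; the domain name f24-guest here is a placeholder, and the qemu-guest-agent service must be running inside the VM:

$ virsh qemu-agent-command f24-guest '{"execute":"guest-ping"}'
{"return":{}}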

SPICE vdagent can inject both keyboard and mouse events, and even has systemd-logind integration that allows it to query sessions and know which X session is used by which user, so that mouse/keyboard events are properly injected. It doesn’t, though, have a way to force GNOME’s GDM to create a new session automatically based on the credentials authenticated by the SPICE server.

It would, perhaps, be a good path forward to hack both the SPICE server and vdagent to make it possible to use delegated GSSAPI credentials to perform a logon into GDM. This would require support from GDM too, but as the OVirt experience shows, it is possible to create a GDM plugin to help with the task. The goal is to have such a logon triggered on opening of the SPICE session if GDM is running and no session is available yet. As a result of such a logon, valid Kerberos credentials would need to appear in the system so that they could be used further. This means delegation of the credentials – something that neither SPICE nor VNC supports either, but which SASL GSSAPI can achieve; it is a single flag change on the client plus a delegation policy to define at the KDC.

There is another technology for accessing other systems remotely – the RDesktop protocol. The XFreeRDP project has experimental patches to support GSSAPI. They don’t work yet, but I am making good progress on them toward making SSO possible.

So in the end, current implementations don’t allow actual single sign-on in a way that would let Kerberos credentials flow in and out of VMs. To make that possible, more work is needed. It looks like by extending vdagent to pass Kerberos ccache content to SSSD via a PAM session, and triggering that from the greeter plugins as developed in OVirt, we could reach that point with less effort than creating something from scratch.

May 30, 2016 03:41 PM

May 26, 2016

Jakub Hrozek

pam_hbac: A PAM module to enforce IPA access control rules

Written by Jakub Hrozek and Pavel Reichl

FreeIPA is open source identity management software. One of its nice features is the ability to limit which users can log in to which servers using Host Based Access Control (HBAC). Previously, only SSSD was able to parse and evaluate the HBAC rules – however, some users run operating systems that either do not support SSSD at all, or on which SSSD can only be configured in a way that doesn’t include the HBAC engine.

To provide HBAC support for these platforms, we wrote a new PAM module called pam_hbac. The first release of pam_hbac just happened and in this blog post we would like to introduce the project and show how it works.

pam_hbac connects to an IPA server, downloads the HBAC access control rules, evaluates them and decides whether access should be allowed or denied. Even though pam_hbac is a project on its own, it uses the same code to evaluate the HBAC rules as SSSD does, so the resulting access control decision will be the same for the same rules provided the input is the same as well.

Using pam_hbac should come with a disclaimer – if your operating system supports SSSD and you can use its IPA id_provider, please use SSSD instead of pam_hbac. SSSD is maintained by a large team of developers, is included in distributions with commercial support available, and has several advantages over pam_hbac, including offline caching and Kerberos authentication using the host keytab by default.

Who should use pam_hbac

There are legitimate use-cases where using a standalone PAM module like pam_hbac is required, though. These include:

  • if your IPA client runs an OS that doesn’t support SSSD at all like Solaris or its derivatives
  • if your IPA client runs an OS that does support SSSD but not its IPA provider. At the moment, this is the case with Amazon’s Linux distribution and also FreeBSD.
  • if your IPA client runs an OS that supports the IPA provider, but the IPA provider lacks the support for users from trusted Active Directory domains. This is the case for clients running RHEL-5 and leveraging the “compat” LDAP tree provided by the slapi-nis plugin of IPA.

At the moment, pam_hbac runs on Linux, FreeBSD and Solaris (tested on Oracle Solaris 11 and OmniOS).

Obtaining and installing pam_hbac

Development of pam_hbac happens on github, where you can clone the git repository and compile pam_hbac yourself. Because the required dependencies and configure flags differ a bit for each platform, please refer to the per-platform README files in the doc directory. As an example, there is a FreeBSD-specific README file that contains step-by-step instructions for building and installing pam_hbac on FreeBSD.

For RHEL-5 and RHEL-6, we have also provided a COPR repository with prebuilt binaries. On these platforms, you can just drop the .repo file into /etc/yum.repos.d/ and then yum install pam_hbac.

We would certainly welcome contributors who would like to provide prebuilt binaries for different platforms!

Examples

In the rest of the blog post, we illustrate two examples of pam_hbac in action: one uses pam_hbac on FreeBSD to grant access to a subset of IPA users, and the other uses pam_hbac on CentOS-5 to restrict access to a single Active Directory group in a setup with trusts. For both examples, we use the same environment – the IPA domain is called IPA.TEST and is managed by an IPA server called unidirect.ipa.test. Our clients are called freebsd.ipa.test and centos5.ipa.test.

Example 1: using pam_hbac to provide access control for IPA users on FreeBSD

Even though the recent FreeBSD releases do ship SSSD, it is not built with the IPA provider by default (only through extra flags) and therefore HBAC enforcement might not be available easily. However, we can configure SSSD with the LDAP id_provider or just nss-pam-ldapd on FreeBSD and use pam_hbac for access control separately.
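
For reference, a minimal LDAP-only sssd.conf for the IPA.TEST environment described above might look something like the following sketch (the option values are illustrative assumptions, and on FreeBSD the file normally lives under /usr/local/etc/sssd/):

[sssd]
services = nss, pam
domains = ipa.test

[domain/ipa.test]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://unidirect.ipa.test
ldap_search_base = dc=ipa,dc=test
krb5_server = unidirect.ipa.test
krb5_realm = IPA.TEST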

Our goal is to make it possible only for the bsduser to log in to the FreeBSD client machine and nobody else. Start the configuration by making sure you can resolve and authenticate the IPA users. Once that is done, we can configure pam_hbac to provide access control for the FreeBSD machine. Without access control configured, any user should be able to log in:

% su - bsduser
Password:
$ id
uid=1207000884(bsduser) gid=1207000884(bsduser) groups=1207000884(bsduser)
% ^D
% su - tuser
Password:
$ id
uid=1207000883(tuser) gid=1207000883(tuser) groups=1207000883(tuser)

The next step is to install and configure pam_hbac. Because at the moment there are no prebuilt binary packages for FreeBSD, you’ll need to compile the module from source. The steps are documented in the FreeBSD README in pam_hbac’s git repo. After configuring the source tree, building and installing pam_hbac, you’ll end up with the module installed at /usr/local/lib/pam_hbac.so.

Because much of the information that pam_hbac reads is only accessible to an authenticated user, we need to create a special bind user that pam_hbac will authenticate as. To do so, prepare an LDIF file with the following contents:

dn: uid=hbac,cn=sysaccounts,cn=etc,$DC
objectClass: account
objectClass: simplesecurityobject
objectClass: top
uid: hbac
userPassword: $PASSWORD

Replace the $PASSWORD value with the desired password of the bind user and $DC with the base DN of your IPA server. Then add this LDIF to the IPA server:

ipaserver $ ldapadd -ZZ -H ldap://$IPA_HOSTNAME -D"cn=Directory Manager" -W < hbac_sysuser.ldif

Now we can create the configuration file for pam_hbac. The configuration options are documented in the pam_hbac.conf manpage, but in general it’s enough to point pam_hbac to the IPA server and specify the bind user and its credentials. The config file for pam_hbac on FreeBSD is located at /usr/local/etc/pam_hbac.conf:

[root@freebsd ~]# cat /usr/local/etc/pam_hbac.conf
URI = ldap://unidirect.ipa.test
BASE = dc=ipa,dc=test
BIND_DN = uid=hbac,cn=sysaccounts,cn=etc,dc=ipa,dc=test
BIND_PW = Secret123
SSL_PATH = /usr/local/etc/ipa.crt

Next, we add pam_hbac to the PAM configuration so that it enforces access control during the PAM account phase. Because pam_hbac only handles the account phase, we only add a single line to the account stack of /etc/pam.d/system to make it look like this:

account required pam_login_access.so
account required /usr/local/lib/pam_ldap.so no_warn ignore_authinfo_unavail ignore_unknown_user
account required /usr/local/lib/pam_hbac.so ignore_authinfo_unavail ignore_unknown_user
account required pam_unix.so

Finally, we can disable the allow_all rule on the server and instead only allow access to bsduser to the freebsd.ipa.test machine. Please don’t forget to add other rules in your test environment so that you can at least access your IPA masters!

ipaserver $ ipa hbacrule-add freebsd-bsd-user
ipaserver $ ipa hbacrule-add-host --hosts=freebsd.ipa.test freebsd-bsd-user
ipaserver $ ipa hbacrule-add-user --users=bsduser freebsd-bsd-user
ipaserver $ ipa hbacrule-mod --servicecat=all freebsd-bsd-user
ipaserver $ ipa hbacrule-disable allow_all

Time to test pam_hbac! First, we can make sure bsduser is still able to log in:

% su - bsduser
Password:
$ id
uid=1207000884(bsduser) gid=1207000884(bsduser) groups=1207000884(bsduser)
$ ^D

OK, now for a negative test, see if tuser is denied access:

% su - tuser
Password:
su: Sorry

Great! Looking at /var/log/auth.log reveals that it was indeed the account management module that denied access:

May 25 13:26:37 su: pam_acct_mgmt: permission denied

Example 2: using pam_hbac to provide access control for AD users on CentOS-5

One important use-case for pam_hbac is to provide access control for setups that resolve users from a trusted AD domain using the ‘legacy client’ setup in which a CentOS-5 machine is set up with id_provider=ldap pointing to the IPA server’s compat tree. Please note that if your IPA domain doesn’t have a trust relationship established with an AD domain, you can already use HBAC provided by SSSD and you don’t need pam_hbac at all in that case.

In our setup, let’s have an AD group linux_admins. Our goal will be to grant access to the CentOS-5 machine to members of linux_admins and nobody else. First, make sure your CentOS-5 client is able to resolve and authenticate AD users and the user is a member of the linux_admins group. You can use the output of ipa-advise config-redhat-sssd-before-1-9 as a starting point. Once the CentOS-5 client is set up, you’ll be able to resolve the user, id should report the group and you should be able to authenticate as that user. Because HBAC rules can only be linked to IPA POSIX groups, we also need to add the AD group as a member of an IPA external group which in turn needs to be added to an IPA POSIX group:

ipaserver $ ipa group-add --external linux_admins_ext
ipaserver $ ipa group-add-member --groups=linux_admins_ext ipa_linux_admins
ipaserver $ ipa group-add-member --external=linux_admins@win.trust.test linux_admins_ext
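
Note that the commands above assume the ipa_linux_admins POSIX group already exists; if it does not in your environment, create it first:

ipaserver $ ipa group-add ipa_linux_admins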

Try logging in:

$ su - linuxop@win.trust.test
Password:
-sh-3.2$ id
uid=300403108(linuxop@win.trust.test) gid=300403108(linuxop@win.trust.test) groups=300400513(domain users@win.trust.test),300403108(linuxop@win.trust.test),300403109(linux_admins@win.trust.test),1207000858(ipa_linux_admins) context=user_u:system_r:unconfined_t

OK, the id output reports the user is a member of the ipa_linux_admins group, so we can proceed with setting up the HBAC rules.

In order to set up the HBAC rules correctly, it’s important to understand how AD users are authenticated when using the compat tree – the SSSD client on the CentOS-5 machine does an LDAP bind using the user’s password against the IPA compat tree. This password bind is intercepted by the slapi-nis plugin running on the IPA server which in turn authenticates against the system-auth service on the IPA server itself. Therefore, it’s important that all users who should be allowed to authenticate against the compat tree are allowed access to the system-auth service on the IPA server. More details about the authentication against the compat tree can be found in the

Let’s add the system-auth rule first, together with its HBAC service that allows access to everyone to the IPA server itself using the system-auth PAM service:

ipaserver $ ipa hbacsvc-add system-auth
ipaserver $ ipa hbacrule-add system-auth-everyone
ipaserver $ ipa hbacrule-add-host --hosts=unidirect.ipa.test system-auth-everyone
ipaserver $ ipa hbacrule-mod --usercat=all system-auth-everyone

The resulting HBAC rule would look like this:

ipaserver $ ipa hbacrule-show system-auth-everyone
Rule name: system-auth-everyone
User category: all
Enabled: TRUE
Hosts: unidirect.ipa.test
Services: system-auth

To avoid allowing all access, disable the allow_all rule:

ipaserver$ ipa hbacrule-disable allow_all

As a pre-flight check, you can disable the system-auth-everyone rule; then your user should be denied access and the server-side journal should show something like:

pam_sss(system-auth:account): Access denied for user linuxop@win.trust.test: 6 (Permission denied)

Enabling the rule would allow access again. Of course, don’t forget to allow access to your IPA servers and clients with other services and for other users as appropriate!

As the final step on the server, we can define the HBAC rule for our particular CentOS-5 machine. While the access was already checked against the system-auth rule on the server, that rule cannot discriminate between different hosts and applies to all authentication requests coming from the slapi-nis plugin.

Please note that a bind user must be configured; refer to the BSD example for details.

The rule will permit all members of ipa_linux_admins group to access all PAM services on the host called centos5.ipa.test:

ipaserver $ ipa hbacrule-add centos5-ipa-linux-admins
ipaserver $ ipa hbacrule-add-host --hosts=centos5.ipa.test centos5-ipa-linux-admins
ipaserver $ ipa hbacrule-mod --servicecat=all centos5-ipa-linux-admins
ipaserver $ ipa hbacrule-add-user --groups=ipa_linux_admins centos5-ipa-linux-admins

Now we’re ready to configure the CentOS-5 client. Make sure pam_hbac is installed first – you can use our COPR repository for that. Just drop the repo file to /etc/yum.repos.d and then yum install pam_hbac.

The configuration file for CentOS-5 machine is quite similar to the one we used on BSD earlier:

URI = ldap://unidirect.ipa.test
BASE = dc=ipa,dc=test
BIND_DN = uid=hbac,cn=sysaccounts,cn=etc,dc=ipa,dc=test
BIND_PW = Secret123

The file is located at /etc/pam_hbac.conf.

Next, add pam_hbac to the /etc/pam.d/system-auth file on the CentOS-5 machine to enable HBAC rules enforcement. This snippet shows the whole account stack on my CentOS-5 machine:

account required pam_unix.so
account sufficient pam_succeed_if.so uid < 500 quiet
account [default=bad success=ok user_unknown=ignore] pam_sss.so
account sufficient pam_localuser.so
account [default=bad success=ok user_unknown=ignore] pam_hbac.so
account required pam_permit.so

You can see I also added the pam_localuser.so module just before the line with pam_hbac.so. Adding the pam_localuser.so module ensures that pam_hbac won’t be called for local users defined in /etc/passwd – we only want the HBAC policies to apply to IPA and AD users.

It’s time to check the rule. As a positive check, we log in with the linuxop user:

$ su - linuxop@win.trust.test
Password:
su: warning: cannot change directory to /home/win.trust.test/linuxop: No such file or directory
-sh-3.2$

As a negative test, we can try logging in as the AD administrator perhaps:

$ su - administrator@win.trust.test
Password:
su: incorrect password

And indeed, /var/log/secure would tell us it was pam_hbac that denied access:

May 25 22:48:35 centos5 su: pam_hbac(su-l:account): returning [6]: Permission denied

Awesome, now even your CentOS-5 server enforces HBAC rules!

Troubleshooting pam_hbac

In case something doesn’t work as expected, there are several ways to debug pam_hbac and the HBAC access control in general. The first step should be to check with the ipa command line tools and the hbactest plugin. For example, this is how we’d test the bsduser’s access to the freebsd.ipa.test machine:

ipaserver $ ipa hbactest --user=bsduser --service=su --host=freebsd.ipa.test
--------------------
Access granted: True
--------------------
Matched rules: freebsd-bsd-user
Not matched rules: centos5-ipa-linux-admins
Not matched rules: ph_test_trule
Not matched rules: system-auth-everyone

The pam_hbac module by default only logs failures. If you want to see more verbose output, add the debug parameter to the PAM service file configuration. All logging, including debug logs, is done using the standard pam_syslog() PAM calls, so the location really depends on your operating system. But as an illustration, this is the tail of the debug output for the AD user case when the allowed user logs in:

May 25 22:48:47 centos5 su: pam_hbac(su-l:account): ALLOWED by rule [centos5-ipa-linux-admins].
May 25 22:48:47 centos5 su: pam_hbac(su-l:account): hbac_evaluate() >]
May 25 22:48:47 centos5 su: pam_hbac(su-l:account): Allowing access
May 25 22:48:47 centos5 su: pam_hbac(su-l:account): returning [0]: Success

The full debug output breaks down all the rules into their components and shows what matched and what did not.
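
As an illustration, enabling debug output on the CentOS-5 client is just a matter of appending the option to the pam_hbac line shown earlier:

account [default=bad success=ok user_unknown=ignore] pam_hbac.so debug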

Conclusion

This blog post described the first version of pam_hbac. We will continue the development to add more supported platforms – in the next version, we would like to add support for IBM AIX and Apple OS X. There are also several bugs we would like to fix and minor enhancements we would like to add. Feel free to file an issue on github if there is something you would like to see improved or something doesn’t work for you!


by jhrozek at May 26, 2016 03:16 PM

May 19, 2016

Adam Young

Installing FreeIPA on a Tripleo undercloud

I’ve been talking about using FreeIPA to secure OpenStack since the Havana summit in Portland. I’m now working with Tripleo to install OpenStack. To get the IPA server installed along with Tripleo Quickstart requires a VM accessible from the Ansible playbook.

UPDATE: This is changing rapidly. I’ll post complete updates in a bit, but the commit below is now one in a chain, and the instructions are in the git messages for the commits. One missing step in order to run ansible is: export ANSIBLE_CONFIG=$HOME/.quickstart/tripleo-quickstart/ansible.cfg


Build the Identity VM

  • Apply the patch to quickstart that builds the VM
  • Run quickstart.sh at least up to the undercloud stage. The steps below do the complete install.

Since Quickstart makes a git repo under ~/.quickstart, I’ve been using that as my repo. It avoids duplication, and makes my changes visible.

mkdir ~/.quickstart
cd ~/.quickstart
git clone https://github.com/openstack/tripleo-quickstart
cd tripleo-quickstart
git review -d 315749
~/.quickstart/tripleo-quickstart/quickstart.sh   -t all warp.lab4.eng.bos.redhat.com

If you are not set up for git review, you can pull the patch manually from Gerrit.
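
One way to do that manually is roughly the following; the trailing patchset number is a placeholder that you would take from the Gerrit web UI:

git fetch https://review.openstack.org/openstack/tripleo-quickstart refs/changes/49/315749/1
git checkout FETCH_HEAD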

Set the hostname FQDN for the identity machine

ssh -F /home/ayoung/.quickstart/ssh.config.ansible identity-root hostnamectl set-hostname --static identity.warp.lab4.eng.bos.redhat.com

Add variables to the inventory file ~/.quickstart/hosts

[vms:vars]
ipa_server_password=FreeIPA4All
ipa_domain=warp.lab4.eng.bos.redhat.com
deployment_dir=/home/ayoung/.ossipee/deployments/warp.lab4.eng.bos.redhat.com
ipa_realm=WARP.LAB4.ENG.BOS.REDHAT.COM
cloud_user=stack
ipa_admin_user_password=FreeIPA4All
ipa_forwarder=
nameserver=

Activate the Venv:

. ~/.quickstart/bin/activate

Use Rippowam branch

cd ~/devel
git clone https://github.com/admiyo/rippowam
cd rippowam
git checkout origin/tripleo

Run ansible

ansible-playbook -i ~/.quickstart/hosts ~/devel/rippowam/ipa.yml

Making this VM available to the overcloud requires some network wizardry. That deserves a post itself.

by Adam Young at May 19, 2016 10:43 PM
