FreeIPA Identity Management planet - technical blogs

December 22, 2017

William Brown

Using b43 firmware on Fedora Atomic Workstation


My MacBook Pro has a Broadcom b43 wireless chipset. This is notorious for being one of the most annoying wireless adapters on Linux. When you first install Fedora you don’t even see “wifi” as an option, and unless you poke around in dmesg, you won’t find out how to get b43 working on your platform.

b43

The b43 driver requires proprietary firmware to be loaded, else the wifi chip will not run. There are a number of steps for this process, found on the linux wireless page. You’ll note that one of the steps is:

export FIRMWARE_INSTALL_DIR="/lib/firmware"
...
sudo b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" broadcom-wl-5.100.138/linux/wl_apsta.o

So we need to be able to extract our firmware to /usr/lib/firmware, then reboot, and our wifi works.

Fedora Atomic Workstation

Atomic WS is similar to Atomic Server, in that it’s a read-only ostree-based deployment of Fedora. This comes with a number of unique challenges and quirks, but for this issue:

sudo touch /usr/lib/firmware/test
/bin/touch: cannot touch '/usr/lib/firmware/test': Read-only file system

So we can’t extract our firmware!

Normally Linux also supports reading firmware from /usr/local/lib/firmware (which on Atomic IS writeable ...), but for some reason Fedora doesn’t allow this path.

Solution: Layered RPMs

Atomic has support for “rpm layering”. On top of the ostree image (which is composed of rpms) you can supply a supplemental list of packages that are “installed” at rpm-ostree update time.

This way you still have an atomic base platform with read-only behaviours, but you gain the ability to customise your system. To achieve this, it must be possible to write to locations in /usr during rpm install.

This means our problem has a simple solution: create a b43 rpm package. Note that you can make this for yourself privately, but you can’t distribute it for legal reasons.

Get setup on atomic to build the packages:

rpm-ostree install rpm-build createrepo
reboot

RPM specfile:

%define debug_package %{nil}
Summary: Allow b43 fw to install on ostree installs due to bz1512452
Name: b43-fw
Version: 1.0.0
Release: 1
License: Proprietary, DO NOT DISTRIBUTE BINARY FORMS
URL: http://linuxwireless.sipsolutions.net/en/users/Drivers/b43/
Group: System Environment/Kernel

BuildRequires: b43-fwcutter

Source0: http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2

%description
Broadcom firmware for b43 chips.

%prep
%setup -q -n broadcom-wl-5.100.138

%build
true

%install
pwd
mkdir -p %{buildroot}/usr/lib/firmware
b43-fwcutter -w %{buildroot}/usr/lib/firmware linux/wl_apsta.o

%files
%defattr(-,root,root,-)
%dir %{_prefix}/lib/firmware/b43
%{_prefix}/lib/firmware/b43/*

%changelog
* Fri Dec 22 2017 William Brown <william at blackhats.net.au> - 1.0.0
- Initial version

Now put the spec file and source into place like so:

mkdir -p ~/rpmbuild/{SPECS,SOURCES}
<editor> ~/rpmbuild/SPECS/b43-fw.spec
wget -O ~/rpmbuild/SOURCES/broadcom-wl-5.100.138.tar.bz2 http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2

We are now ready to build!

rpmbuild -bb ~/rpmbuild/SPECS/b43-fw.spec
createrepo ~/rpmbuild/RPMS/x86_64/

Finally, we can install this. Create a yum repo file (for rpm-ostree this typically lives in /etc/yum.repos.d/, e.g. /etc/yum.repos.d/local-rpms.repo):

[local-rpms]
name=local-rpms
baseurl=file:///home/<YOUR USERNAME HERE>/rpmbuild/RPMS/x86_64
enabled=1
gpgcheck=0
type=rpm

Then layer the package:

rpm-ostree install b43-fw

Now reboot and enjoy wifi on your Fedora Atomic Macbook Pro!
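
After the reboot you can optionally verify the result. These are standard commands (a rough check; exact output will vary by system):

rpm-ostree status          # the b43-fw package should be listed under LayeredPackages
rpm -q b43-fw              # confirm the rpm itself is installed
ls /usr/lib/firmware/b43   # the firmware files extracted by the rpm
dmesg | grep b43           # the driver should report loading the firmware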

December 22, 2017 02:00 PM

December 21, 2017

Alexander Bokovoy

FOSDEM 2018 IAM devroom

FOSDEM is one of the largest free software conferences in Europe. It is run by volunteers for volunteers and since 2001 has gathered more than 8000 people every year. Sure, during the first years there were fewer visitors (I was lucky enough to present at the first FOSDEM and also ran a workshop there), but the atmosphere hasn’t changed and it still has the same classic hacker-gathering feeling.

In 2018 FOSDEM will run on the weekend of February 3rd and 4th. Since the event has grown significantly, there are multiple development rooms in addition to the main tracks. Each development room is given a room for Saturday or Sunday (or both). Each development room issues its own call for proposals (CfP), chooses talks for its schedule and runs the event. The FOSDEM crew films and streams all devrooms online for those who cannot attend in real time, but the teams behind the devrooms are what powers the event.

In 2018 there will be 42 devrooms in addition to the main track. Think of it as 43 different conferences happening at the same time; that’s the scale and power of FOSDEM. I’m still impressed by the volunteers who contribute to FOSDEM’s success, long after the original crew of sysadmins at the Free University of Brussels stopped working on it.

Identity management related topics have always been part of FOSDEM. In 2016 I presented in the main track about our progress with GNOME desktop readiness for enterprise environments, integration with freeIPA and other topics, including a demo of freeIPA and Ipsilon powering authentication for Owncloud and Google Apps. Some of my colleagues gave freeIPA presentations well before that too.

We wanted a bit more focused storytelling too. Radovan Semancik tried to organize a devroom in 2016 but it wasn’t accepted. Michael Ströder tried the same in 2017. Getting a devroom proposal accepted always comes with a fair amount of luck, but we finally succeeded for FOSDEM 2018. I’d like to thank my colleague Fraser Tweedale, who wrote the original proposal draft out of which the Identity and Access Management devroom effort grew.

We tried to keep a balance between the number of talks and the variety of topics presented. We only have 8.5 hours of schedule allocated. With 5-minute intervals between the talks we were able to accommodate 14 talks out of 25 proposals.

The talks are structured in roughly five categories:

  • Identity and access management for operating systems
  • Application level identity and access management
  • Interoperability issues between POSIX and Active Directory environments
  • Deployment reports for open source identity management solutions
  • Security and cryptography on a system and application level

Admittedly, we’ve got one of the smallest rooms (50 people) allocated, but this is a start. On Saturday, February 3rd, 2018, please come to room UD2.119. And if you cannot attend FOSDEM in person, streaming will be available too.

See you in Brussels!

December 21, 2017 10:35 AM

December 05, 2017

Florence Blanc-Renaud

Demystifying the Certificate Authority component in FreeIPA

When I joined the FreeIPA team, I wanted to start by getting more familiar with the product from a user perspective and the first step was to install FreeIPA server.

I opened the Linux Domain Identity, Authentication, and Policy Guide, tried to figure out which options would be useful and… I froze when I reached the section Determining which CA configuration to use. I had literally no idea what the documentation meant by “Server with an integrated IdM CA” or “Server without a CA“. But I had to choose something to start with.
In this blog post, I will explain what this choice really means, and what are the consequences of picking one over the other.

Basic requirement: HTTP and LDAP server certificates

First of all, FreeIPA is composed of many services accessed through the network, among which are an LDAP server and an HTTP server. These two services can be accessed through a standard port (in the clear) or through an SSL port, meaning that they both need a server certificate.

The HTTP and LDAP server certificates are needed during the installation, because the installer will put them in the right NSS database and configure their nickname and location for the HTTP and LDAP servers to find them. There are multiple ways to obtain server certificates, but one first needs to understand the basic notions around Public Key Infrastructure (PKI).

I will use a comparison with the delivery of a passport: in order to have a passport issued with your name, you need to provide official documents (for instance a birth certificate and a photo) to the government agency that will validate the documents, make sure that you are who you claim to be, and then issue the passport. The server certificate can be compared to the passport, that will later prove your identity to whoever trusts the government agency, and the government agency can be compared to the Certificate Authority.

So in order to obtain server certificates, it is possible to:

  • request certificates from an official Certificate Authority. Many commercial or non-profit organizations provide this type of service (Verisign, Let’s Encrypt, GoDaddy, etc.)
  • request certificates from a home-made Certificate Authority. It is possible to create a home-made self-signed Certificate Authority with tools such as certutil or openssl (a minimal openssl sketch follows this list). The main difference from the previous method is that people are less likely to trust your home-made CA (it’s like asking them to accept a passport issued by a newly founded country not yet recognized by the rest of the world). Self-signed here means that the Certificate Authority passport is delivered by… the Certificate Authority itself!
  • install your own Certificate Authority with FreeIPA, that will sign the certificates needed by the HTTP and LDAP server.
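
For illustration, a minimal openssl sketch of such a home-made self-signed CA could look like this (key size, lifetime and subject are arbitrary examples, not recommendations):

# create a CA key and a self-signed CA certificate valid for 10 years
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
    -keyout homemade-ca.key -out homemade-ca.crt \
    -subj "/O=Example Org/CN=Example Home-made CA"

# sign a server certificate request with that home-made CA
openssl x509 -req -in server.csr -CA homemade-ca.crt -CAkey homemade-ca.key \
    -CAcreateserial -days 365 -out server.crt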

The last option corresponds to a “Server with an integrated IdM CA” and has many advantages over the first two options:

  • certificates have a limited lifetime and need to be renewed before they expire (otherwise the HTTP/LDAP servers stop working). The renewal process is time-consuming and can also be forgotten if the sysadmin does not carefully track the dates, leaving the deployment in a state where some services become unavailable.
    When FreeIPA is installed with an embedded Certificate Authority, FreeIPA automatically monitors the expiry dates of the certificates and triggers a renewal a few weeks before expiration, ensuring service continuity. This is the first advantage of the embedded CA, but not the only one.
  • FreeIPA with an embedded CA is also able to deliver certificates for the users, the hosts or the services managed by FreeIPA. Various certificate profiles can be defined, populating specific fields in each type of certificate (for instance extensions with the OCSP responder URL…)
    For more information on the embedded Certificate Authority, you can refer to Dogtag Certificate System documentation (the embedded CA in FreeIPA is a tailored version of Dogtag).

At this point, if you decide to install FreeIPA with an embedded Certificate Authority, there are 2 possible choices. The embedded CA can either be:

  • a self-signed CA: the Certificate Authority is created “from scratch” without the need for any external authority. It is the root CA, meaning that its own certificate was not delivered by anyone else but signed by itself.
  • a CA subordinate to an external CA. This means that the FreeIPA CA certificate was signed by another CA, a sort of parent CA.

 

Corresponding installation options

CA-less installation

As said above, we need one certificate for the HTTP server and one for the LDAP server. They have to be provided to ipa-server-install or ipa-replica-install with the following options (see the example after the list):

  • --http-cert-file / --http-pin: file containing the HTTP server certificate + private key and password protecting the file
  • --dirsrv-cert-file / --dirsrv-pin: file containing the LDAP server certificate + private key and password protecting the file
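
For example, a CA-less installation could be invoked roughly like this (the PKCS#12 file names and PINs are placeholders; adjust to your deployment):

ipa-server-install \
    --http-cert-file /root/http.p12 --http-pin Secret123 \
    --dirsrv-cert-file /root/dirsrv.p12 --dirsrv-pin Secret123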

Installation with an embedded self-signed CA

The FreeIPA CA is created during FreeIPA installation, and it generates the HTTP and LDAP certificates. There is no need to provide any cert file! No options!

Installation with an externally-signed embedded CA

The installation is a 2-step process. In the first step, ipa-server-install must be called with --external-ca and generates a CSR file (Certificate Signing Request). This CSR file needs to be sent to the external CA, which will perform a bunch of validations to authenticate the recipient of the certificate and issue a certificate for the FreeIPA Certificate Authority.

In the second step, ipa-server-install is called with --external-cert-file to provide the certificate obtained from the external CA. The installer then configures the FreeIPA certificate authority as a sub-CA of the external CA, and the FreeIPA CA can issue the HTTP and LDAP server certificates.
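
Roughly, the two steps look like this sketch (the CSR path and file names are examples; check the installer output for the actual locations):

# step 1: generate a CSR for the future FreeIPA CA
ipa-server-install --external-ca

# ... have the generated CSR (typically /root/ipa.csr) signed by the external CA ...

# step 2: resume the installation with the issued certificate and the external CA chain
ipa-server-install --external-cert-file=/root/ipa-ca.crt --external-cert-file=/root/external-ca-chain.crt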

 

What if…

I installed FreeIPA without any embedded CA but I change my mind?

FreeIPA allows you to install an embedded CA at a later time, using ipa-ca-install. The tool provides the same options as ipa-server-install: you can either install a self-signed CA or an externally signed CA.

Important: installing an embedded CA with ipa-ca-install does not replace the HTTP and LDAP server certificates. If they were initially delivered by an external CA, they will not be automatically renewed.

I installed FreeIPA with a self-signed CA but I’d rather have an externally-signed CA?

FreeIPA allows you to switch from a self-signed CA to an externally-signed CA using ipa-cacert-manage renew --external-ca. This is a 2-step process similar to ipa-server-install --external-ca, where the first step produces a CSR that needs to be supplied to an external CA. The external CA then issues a CA cert that is provided back to ipa-cacert-manage renew through the --external-cert-file option.
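
A sketch of that procedure (the CSR location and file names are examples; check the command output for the actual path):

# step 1: generate a CSR for the CA certificate
ipa-cacert-manage renew --external-ca

# ... submit the generated CSR (typically /var/lib/ipa/ca.csr) to the external CA ...

# step 2: install the certificate issued by the external CA, then propagate it to all machines
ipa-cacert-manage renew --external-cert-file=/root/ipa-ca.crt --external-cert-file=/root/external-ca-chain.crt
ipa-certupdate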

I installed FreeIPA with an externally signed CA but I’d rather have a self-signed CA?

FreeIPA allows you to switch from an externally signed CA to a self-signed CA using ipa-cacert-manage renew --self-signed.

I want to replace HTTP and LDAP certificates with certificates signed by a third-party Certificate Authority?

FreeIPA provides the ipa-server-certinstall tool that will replace the current HTTP or LDAP certificates with the certs provided in the file.
Important: ipa-server-certinstall can be called to install externally signed certificates for HTTP and LDAP even if FreeIPA is installed with an embedded CA. In this case, FreeIPA CA still has the capability to issue certificates for users, hosts or services. The only difference is that HTTP and LDAP certificates are not issued by IPA CA.

 

Other FreeIPA commands related to certificates

When an embedded CA is installed, its certificate must be present in various files or NSS databases on all the FreeIPA hosts (master, replicas and clients) so that any FreeIPA machine trusts the certificates delivered by the embedded CA.

In addition to that, the HTTP and LDAP server certificates can be issued either by IPA CA or by an external CA, and the issuer can even be changed over time. In the external CA case, this means that the external CA needs to be trusted by all the FreeIPA machines for the IPA commands to work (the CLI communicates with the HTTP server using the https port, and this requires trusting the CA that issued the HTTP server certificate). Yet another CA certificate to add to files and databases on all the FreeIPA machines…

To ease this configuration, the tool ipa-certupdate is able to retrieve the CA certificates stored in LDAP (the embedded FreeIPA CA or the external CA certs), and install them in all the relevant files and NSS databases. It needs to be called when the CA cert is manually renewed or when a new external CA cert is added.

ipa-cacert-manage install is used to add a new external CA certificate in the LDAP store. It does not replace FreeIPA embedded CA but rather declares another certificate authority as trusted. This is useful when the HTTP and LDAP server certificates need to be replaced by certs signed by a new CA, not yet known by FreeIPA. After calling ipa-cacert-manage install (that puts the new CA in LDAP store), you need to call ipa-certupdate on all FreeIPA machines (to get the CA from the LDAP store and put it in the local NSS databases).
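
As a quick sketch (the certificate file name is a placeholder):

# on one server: add the third-party CA certificate to the LDAP store
ipa-cacert-manage install /root/third-party-ca.crt

# on every FreeIPA machine: pull the CA certificates from LDAP into the local trust stores
ipa-certupdate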

ipa cert-request is used to request new certificates for users, hosts or services. The certificate is signed by FreeIPA embedded CA (meaning that this command is available only when an embedded CA is configured).

Conclusion

By now you should be able to pick a deployment option and understand the differences between a CA-less deployment and one with an embedded CA, and between a self-signed CA and an externally-signed CA.

You should also be aware that your choice is not definitive and that it is possible to install a CA at a later time or change your certificate chain from self-signed to externally-signed and vice-versa.

by floblanc at December 05, 2017 03:05 PM

December 02, 2017

Nathaniel McCallum

Sending FreeOTP Codes Over Bluetooth

One Time Passwords are everywhere these days. We have great standards, like HOTP and TOTP, which means that all our implementations can be interoperable. We also have open source implementations of these standards, such as FreeOTP. Tons of server applications and web services use OTPs every day for millions of users.

But, currently, one big user experience issue comes to the fore: typing OTP codes is error-prone and time-consuming. Can’t we do better?

On and off for the last few years, I have been working to improve this situation. The result I have come up with is a companion app to FreeOTP called Jelling that allows FreeOTP to send OTP codes directly to paired Bluetooth Low Energy devices. I hope to explain in this post how Jelling works and outline the challenges that still remain. Hopefully you, my most esteemed reader, can help us close some of the gaps that remain.

Bluetooth LE GATT

Bluetooth Low Energy (BLE) has a mode of operation called the Generic Attribute Profile (GATT). Applications define collections of services that contain characteristics which are manipulated by remote entities.1 This means we can define our own protocol using GATT and use this protocol to send a token code from a Sender to the intended Receiver. The Sender will be an Android or iOS version of the FreeOTP application. The Receiver will usually be a Windows, macOS or Linux computer.

First, we have to decide whether the Sender or the Receiver will be the Peripheral or the Central. A Peripheral sends advertisement packets which tell other devices that the specified service is available for use. The Central scans for these advertisements and, when found, can connect to the Peripheral and use its Services. Two principles are important. First, a Peripheral can only have one Central, but a Central can have many Peripherals. Second, scanning uses more power than advertising, so it should be used sparingly.

Second, we need to consider the user experience. Attempting to initialize the token sharing transaction from the Receiver might seem to make the most sense, since the user is already typically doing something on the Receiver when he wants a token code. However, this means we would need to negotiate over BLE about which token code to receive. This negotiation would use a lot of power. Further, because the tokens are authentication credentials, we must confirm on the Sender before sending them to protect from token theft. On the other hand, if the Sender selects the token first and initiates the share we don’t require negotiation or confirmation at all. Therefore, we can reduce the number of GATT requests we need to send to one.2

Third, we need to consider security. We can’t just share a token without being sure about who will receive it. This means that we must use both BLE authentication and encryption. Further, if we implemented the Sender as a Peripheral where it sent characteristic notifications including the token code, we often can’t see who is subscribed to those notifications due to platform API limitations. This doesn’t work for our case since we might care that some OTP codes are only sent to some paired devices.

All of this leads to a simple implementation. The Sender (FreeOTP) operates in Central mode and the Receiver (computer) is a Peripheral that exposes a single service (B670003C-0079-465C-9BA7-6C0539CCD67F) which, in turn, exposes a single, write-only characteristic (F4186B06-D796-4327-AF39-AC22C50BDCA8). We protect this characteristic using both BLE encryption and authentication. The Sender initiates the connection and transfers the OTP by sending a single write to the characteristic containing the OTP code it wishes to share. This means that the Receiver needs to advertise whenever it wants to receive token codes (default: always3). The Sender scans and connects only when it wants to share; this preserves battery life. The Sender chooses which token to share and which Receiver to send it to. Once the Receiver has the token, it emulates a keyboard and types wherever the cursor is. This user experience is simple and intuitive.

Project Status

The Good News

I’ve implemented Receivers for Windows, macOS and Linux. Each is implemented using its platform’s native development tooling and languages (C#, Swift and C, respectively). The Receivers advertise themselves as a BLE Peripheral which FreeOTP can connect to via pairing. Once paired, FreeOTP can send tokens to the selected Receiver.

I’ve also implemented this behavior in FreeOTP. For iOS, this is already merged to master. For Android, this lives in the bt branch which I hope to merge soon. Like always, you can click a token to show the code directly. However, now you can choose to share the token. This pops up a secondary menu which shows the available Receivers within range. Selecting a Receiver shares the token if already paired. Otherwise, it begins the pairing procedure.

The Bad News

Unfortunately, the platform support for this functionality appears to be frustratingly inconsistent. The following chart documents my successes and failures. It reflects tests with both FreeOTP and LightBlue (a BLE GATT testing application on iOS and Android). This proves (to me) that the problem isn’t my code but rather platform incompatibilities. For more details, see the charts in the README.md of each repository.

Status Matrix

             Pixel    Nexus 5x   iPhone 6+
Windows      fails    fails      works
macOS        fails    works      works
Linux        fails    fails      fails

Windows

Sharing works with my iPhone 6+. It does not work from either my Google Pixel or Nexus 5x. I suspect my Pixel has problems because I wasn’t able to get it to work anywhere. However, on my Nexus 5x the simple, single GATT write never causes the callback to fire in my Windows code. It does work if I disable authentication. This leads me to believe that there is a compatibility issue between Windows and Android during pairing.

macOS

The macOS BLE implementation seems to be the most mature of all the platforms. Sharing works on both my iPhone 6 and Nexus 5x. Like I mentioned above, my Pixel seems horribly broken. I even updated to 8.1 beta to see if it has been fixed. No dice. I will try on another Pixel that has been reset to factory defaults soon.

Linux

Unfortunately, Linux appears to be the most broken of all the platforms. With my iPhone 6+ I am able to successfully see the advertisement and perform service discovery. But when I perform a GATT write, pairing never begins. My Nexus 5x can see the advertisement and connect, but it fails performing service discovery. My Pixel connects only in non-BLE mode (it does seem to connect in BR/ATT mode; but that is useless).

The Call for Help!

I’d really like to bring this functionality to a FreeOTP release in the near future. But we need your help! “How?” you ask.

  1. Test Jelling on your systems. We need tests with all different kinds of phones and laptops. Test instructions are available in the README.md of each repository (Windows, macOS and Linux).

  2. Assist debugging combinations where things are broken. In particular, we need help from Bluez. If you are a Bluez developer, please contact me!

  3. Review the code. I’m not a Windows or macOS developer. If I’m doing something wrong, it is probably my fault. Bug reports are welcome. Patches are even better.


  1. Search for “Bluetooth GATT” to learn more. [return]
  2. “Here is the token code.” [return]
  3. There is a trade-off here between usability, privacy and battery usage. The latter is much less of a concern for the Receiver since it has a large battery and is often plugged in. Privacy is a tough one. If you’re advertising that you can be used by FreeOTP, you can also be tracked by someone physically nearby. We consider this low-risk. [return]

December 02, 2017 04:23 PM

November 22, 2017

Fraser Tweedale

Changing a CA’s Subject DN; Part II: FreeIPA


In the previous post I explained how the CA Subject DN is an integral part of X.509 and why you should not change it. Doing so can break path validation, CRLs and OCSP, and many programs will not cope with the change. I proposed some alternative approaches that avoid these problems: re-chaining the CA, and creating subordinate CAs.

If you were thinking of changing your CA’s Subject DN, I hope that I dissuaded you. But if I failed, or you absolutely do need to change the Subject DN of your CA, where there’s a will there’s a way. The purpose of this post is to explore how to do this in FreeIPA, and discuss the implications.

This is a long post. If you are really changing the CA subject DN, don’t skip anything. Otherwise don’t feel bad about skimming or jumping straight to the discussion. Even skimming the article will give you an idea of the steps involved, and how to repair the ensuing breakage.

Changing the FreeIPA CA’s Subject DN

Before writing this post, I had never even attempted to do this. I am unaware of anyone else trying or whether they were successful. But the question of how to do it has come up several times, so I decided to investigate. The format of this post follows my exploration of the topic as I poked and prodded a FreeIPA deployment, working towards the goal.

What was the goal? Let me state the goal, and some assumptions:

  • The goal is to give the FreeIPA CA a new Subject DN. The deployment should look and behave as though it were originally installed with the new Subject.
  • We want to keep the old CA certificate in the relevant certificate stores and databases, alongside the new certificate.
  • The CA is not being re-keyed (I will deal with re-keying in a future article).
  • We want to be able to do this with both self-signed and externally-signed CAs. It’s okay if the process differs.
  • It’s okay to have manual steps that the administrator must perform.

Let’s begin on the deployment’s CA renewal master.

Certmonger (first attempt)

There is a Certmonger tracking request for the FreeIPA CA, which uses the dogtag-ipa-ca-renew-agent CA helper. The getcert resubmit command lets you change the Subject DN when you resubmit a request, via the -N option. I know the internals of the CA helper and I can see that there will be problems after renewing the certificate this way. Storing the certificate in the ca_renewal LDAP container will fail. But the renewal itself might succeed so I’ll try it and see what happens:

[root@f27-2 ~]# getcert resubmit -i 20171106062742 \
  -N 'CN=IPA.LOCAL CA 2017.11.09'
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".

After waiting about 10 seconds for Certmonger to do its thing, I check the state of the tracking request:

[root@f27-2 ~]# getcert list -i 20171106062742
Request ID '20171106062742':
  status: MONITORING
  CA: dogtag-ipa-ca-renew-agent
  issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  expires: 2037-11-06 17:26:21 AEDT
  ... (various fields omitted)

The status and expires fields show that renewal succeeded, but the certificate still has the old Subject DN. This happened because the dogtag-ipa-ca-renew-agent helper doesn’t think it is renewing the CA certificate (which is true!)

Modifying the IPA CA entry

So let’s trick the Certmonger renewal helper. dogtag-ipa-ca-renew-agent looks up the CA Subject DN in the ipaCaSubjectDn attribute of the ipa CA entry (cn=ipa,cn=cas,cn=ca,{basedn}). This attribute is not writeable via the IPA framework but you can change it using regular LDAP tools (details out of scope). If the certificate is self-signed you should also change the ipaCaIssuerDn attribute. After modifying the entry, run ipa ca-show to verify that these attributes have the desired values:

[root@f27-2 ~]# ipa ca-show ipa
  Name: ipa
  Description: IPA CA
  Authority ID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
  Subject DN: CN=IPA.LOCAL CA 2017.11.09
  Issuer DN: CN=IPA.LOCAL CA 2017.11.09
  Certificate: MIIDnzCCAoegAwIBAgIBCTANBgkqhkiG9w0...
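
For reference, the LDAP modification mentioned above (which is left out of scope here) could look something like this sketch, assuming a base DN of dc=ipa,dc=local and the new Subject DN used in this article:

ldapmodify -D 'cn=directory manager' -W <<EOF
dn: cn=ipa,cn=cas,cn=ca,dc=ipa,dc=local
changetype: modify
replace: ipaCaSubjectDn
ipaCaSubjectDn: CN=IPA.LOCAL CA 2017.11.09
-
replace: ipaCaIssuerDn
ipaCaIssuerDn: CN=IPA.LOCAL CA 2017.11.09
EOF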

Certmonger (second attempt)

Now let’s try and renew the CA certificate via Certmonger again:

[root@f27-2 ~]# getcert resubmit -i 20171106062742 \
  -N 'CN=IPA.LOCAL CA 2017.11.09'
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".

Checking the getcert list output after a short wait:

[root@f27-2 ~]# getcert list -i 20171106062742
Request ID '20171106062742':
  status: MONITORING
  CA: dogtag-ipa-ca-renew-agent
  issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
  subject: CN=IPA.LOCAL CA 2017.11.09
  expires: 2037-11-09 16:11:12 AEDT
  ... (various fields omitted)

Progress! We now have a CA certificate with the desired Subject DN. The new certificate has the old (current) issuer DN. We’ll ignore that for now.

Checking server health

Now I need to check the state of the deployment. Did anything go wrong during renewal? Is everything working?

First, I checked the Certmonger journal output to see if there were any problems. The journal contained the following messages (some fields omitted for brevity):

16:11:17 /dogtag-ipa-ca-renew-agent-submit[1662]: Forwarding request to dogtag-ipa-renew-agent
16:11:17 /dogtag-ipa-ca-renew-agent-submit[1662]: dogtag-ipa-renew-agent returned 0
16:11:19 /stop_pkicad[1673]: Stopping pki_tomcatd
16:11:20 /stop_pkicad[1673]: Stopped pki_tomcatd
16:11:22 /renew_ca_cert[1710]: Updating CS.cfg
16:11:22 /renew_ca_cert[1710]: Updating CA certificate failed: no matching entry found
16:11:22 /renew_ca_cert[1710]: Starting pki_tomcatd
16:11:34 /renew_ca_cert[1710]: Started pki_tomcatd
16:11:34 certmonger[2013]: Certificate named "caSigningCert cert-pki-ca" in token "NSS Certificate DB" in database "/etc/pki/pki-tomcat/alias" issued by CA and saved.

We can see that the renewal succeeded and Certmonger saved the new certificate in the NSSDB. Unfortunately there was an error in the renew_ca_cert post-save hook: it failed to store the new certificate in the LDAP certstore. That should be easy to resolve. I’ll make a note of that and continue checking deployment health.

Next, I checked whether Dogtag was functioning. systemctl status pki-tomcatd@pki-tomcat and the CA debug log (/var/log/pki/pki-tomcat/ca/debug) indicated that Dogtag started cleanly. Even better, the Dogtag NSSDB has the new CA certificate with the correct nickname:

[root@f27-2 ~]# certutil -d /etc/pki/pki-tomcat/alias \
  -L -n 'caSigningCert cert-pki-ca'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 11 (0xb)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Thu Nov 09 05:11:12 2017
            Not After : Mon Nov 09 05:11:12 2037
        Subject: "CN=IPA.LOCAL CA 2017.11.09"
  ... (remaining lines omitted)

We have not yet confirmed that Dogtag uses the new CA Subject DN as the Issuer DN on new certificates (we’ll check this later).

Now let’s check the state of IPA itself. There is a problem in communication between the IPA framework and Dogtag:

[root@f27-2 ~]# ipa ca-show ipa
ipa: ERROR: Request failed with status 500: Non-2xx response from CA REST API: 500.

A quick look in /var/log/httpd/access_log showed that it was not a general problem but only occurred when accessing a particular resource:

[09/Nov/2017:17:15:09 +1100] "GET https://f27-2.ipa.local:443/ca/rest/authorities/cdbfeb5a-64d2-4141-98d2-98c005802fc1/cert HTTP/1.1" 500 6201

That is a Dogtag lightweight authority resource for the CA identified by cdbfeb5a-64d2-4141-98d2-98c005802fc1. That is the CA ID recorded in the FreeIPA ipa CA entry. This gives a hint about where the problem lies. An ldapsearch reveals more:

[f27-2:~] ftweedal% ldapsearch -LLL \
    -D 'cn=directory manager' -w DM_PASSWORD \
    -b 'ou=authorities,ou=ca,o=ipaca' -s one
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
authoritySerial: 9
objectClass: authority
objectClass: top
cn: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
authorityDN: CN=Certificate Authority,O=IPA.LOCAL 201711061603
description: Host authority

dn: cn=008a4ded-fd4b-46fe-8614-68518123c95f,ou=authorities,ou=ca,o=ipaca
objectClass: authority
objectClass: top
cn: 008a4ded-fd4b-46fe-8614-68518123c95f
authorityID: 008a4ded-fd4b-46fe-8614-68518123c95f
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
authorityDN: CN=IPA.LOCAL CA 2017.11.09
description: Host authority

There are now two authority entries when there should be one. During startup, Dogtag makes sure it has an authority entry for the main (“host”) CA. It compares the Subject DN from the signing certificate in its NSSDB to the authority entries. If it doesn’t find a match it creates a new entry, and that’s what happened here.

The resolution is straightforward:

  1. Stop Dogtag
  2. Update the authorityDN and authoritySerial attributes of the original host authority entry.
  3. Delete the new host authority entry.
  4. Restart Dogtag.
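
A sketch of the whole procedure using ldapmodify and ldapdelete, with the entry DNs and serial number taken from the ldapsearch above (the values will differ in your deployment):

systemctl stop pki-tomcatd@pki-tomcat

ldapmodify -D 'cn=directory manager' -W <<EOF
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
changetype: modify
replace: authorityDN
authorityDN: CN=IPA.LOCAL CA 2017.11.09
-
replace: authoritySerial
authoritySerial: 11
EOF

ldapdelete -D 'cn=directory manager' -W \
    'cn=008a4ded-fd4b-46fe-8614-68518123c95f,ou=authorities,ou=ca,o=ipaca'

systemctl start pki-tomcatd@pki-tomcat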

Now the previous ldapsearch returns one entry, with the original authority ID and correct attribute values:

[f27-2:~] ftweedal% ldapsearch -LLL \
    -D 'cn=directory manager' -w DM_PASSWORD \
    -b 'ou=authorities,ou=ca,o=ipaca' -s one
dn: cn=cdbfeb5a-64d2-4141-98d2-98c005802fc1,ou=authorities,ou=ca,o=ipaca
authoritySerial: 11
authorityDN: CN=IPA.LOCAL CA 2017.11.09
objectClass: authority
objectClass: top
cn: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityID: cdbfeb5a-64d2-4141-98d2-98c005802fc1
authorityKeyNickname: caSigningCert cert-pki-ca
authorityEnabled: TRUE
description: Host authority

And the operations that were failing before (e.g. ipa ca-show ipa) now succeed. So we’ve confirmed, or restored, the basic functionality on this server.

LDAP certificate stores

There are two LDAP certificate stores in FreeIPA. The first is cn=ca_renewal,cn=ipa,cn=etc,{basedn}. It is only used for replicating Dogtag CA and system certificates from the CA renewal master to CA replicas. The dogtag-ipa-ca-renew-agent Certmonger helper should update the cn=caSigningCert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,{basedn} entry after renewing the CA certificate. A quick ldapsearch shows that this succeeded, so there is nothing else to do here.
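
Such a check might look like this (base DN assumed to be dc=ipa,dc=local; the entry should carry the renewed certificate, typically in a usercertificate;binary attribute):

ldapsearch -LLL -D 'cn=directory manager' -W -s base \
    -b 'cn=caSigningCert cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=ipa,dc=local'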

The other certificate store is cn=certificates,cn=ipa,cn=etc,{basedn}. This store contains trusted CA certificates. FreeIPA clients and servers retrieve certificates from this directory when updating their certificate trust stores. Certificates are stored in this container with a cn based on the Subject DN, except for the IPA CA which is stored with cn={REALM-NAME} IPA CA. (In my case, this is cn=IPA.LOCAL IPA CA.)

We discovered the failure to update this certificate store earlier (in the Certmonger journal). Now we must fix it up. We still want to trust certificates with the old Issuer DN, otherwise we would have to reissue all of them. So we need to keep the old CA certificate in the store, alongside the new.

The process to fix up the certificate store is:

  1. Export the new CA certificate from the Dogtag NSSDB to a file:

    [root@f27-2 ~]# certutil -d /etc/pki/pki-tomcat/alias \
       -L -a -n 'caSigningCert cert-pki-ca' > new-ca.crt
  2. Add the new CA certificate to the certificate store:

    [root@f27-2 ~]# ipa-cacert-manage install new-ca.crt
    Installing CA certificate, please wait
    CA certificate successfully installed
    The ipa-cacert-manage command was successful
  3. Rename (modrdn) the existing cn={REALM-NAME} IPA CA entry. The new cn RDN is based on the old CA Subject DN.
  4. Rename the new CA certificate entry. The current cn is the new Subject DN. Rename it to cn={REALM-NAME} IPA CA. I encountered a 389DS attribute uniqueness error when I attempted to do this as a modrdn operation. I’m not sure why it happened. To work around the problem I deleted the entry and added it back with the new cn.

At the end of this procedure the certificate store is as it should be. The CA certificate with new Subject DN is installed as {REALM-NAME} IPA CA and the old CA certificate has been preserved under a different RDN.

Updating certificate databases

The LDAP certificate stores have the new CA certificate. Now we need to update the other certificate databases so that the programs that use them will trust certificates with the new Issuer DN. These databases include:

/etc/ipa/ca.crt

CA trust store used by the IPA framework

/etc/ipa/nssdb

An NSSDB used by FreeIPA

/etc/dirsrv/slapd-{REALM-NAME}

NSSDB used by 389DS

/etc/httpd/alias

NSSDB used by Apache HTTPD

/etc/pki/ca-trust/source/ipa.p11-kit

Adds FreeIPA CA certificates to the system-wide trust store

Run ipa-certupdate to update these databases with the CA certificates from the LDAP CA certificate store:

[root@f27-2 ~]# ipa-certupdate
trying https://f27-2.ipa.local/ipa/json
[try 1]: Forwarding 'schema' to json server 'https://f27-2.ipa.local/ipa/json'
trying https://f27-2.ipa.local/ipa/session/json
[try 1]: Forwarding 'ca_is_enabled/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
failed to update IPA.LOCAL IPA CA in /etc/dirsrv/slapd-IPA-LOCAL: Command '/usr/bin/certutil -d /etc/dirsrv/slapd-IPA-LOCAL -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/dirsrv/slapd-IPA-LOCAL/pwdfile.txt' returned non-zero exit status 255.
failed to update IPA.LOCAL IPA CA in /etc/httpd/alias: Command '/usr/bin/certutil -d /etc/httpd/alias -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/httpd/alias/pwdfile.txt' returned non-zero exit status 255.
failed to update IPA.LOCAL IPA CA in /etc/ipa/nssdb: Command '/usr/bin/certutil -d /etc/ipa/nssdb -A -n IPA.LOCAL IPA CA -t C,, -a -f /etc/ipa/nssdb/pwdfile.txt' returned non-zero exit status 255.
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-2 ~]# echo $?
0

ipa-certupdate reported that it was successful and it exited cleanly. But a glance at the output shows that not all went well. There were failures adding the new CA certificate to several NSSDBs. Running one of the commands manually to see the command output doesn’t give us much more information:

[root@f27-2 ~]# certutil -d /etc/ipa/nssdb -f /etc/ipa/nssdb/pwdfile.txt \
    -A -n 'IPA.LOCAL IPA CA' -t C,, -a < ~/new-ca.crt
certutil: could not add certificate to token or database: SEC_ERROR_ADDING_CERT: Error adding certificate to database.
[root@f27-2 ~]# echo $?
255

At this point I guessed that because there is already a certificate stored with the nickname IPA.LOCAL IPA CA, NSS refuses to add a certificate with a different Subject DN under the same nickname. So I will delete the certificates with this nickname from each of the NSSDBs, then try again. For some reason the nickname appeared twice in each NSSDB:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

CN=alt-f27-2.ipa.local,O=Example Organization                u,u,u
CN=CA,O=Example Organization                                 C,,
IPA.LOCAL IPA CA                                             CT,C,C
IPA.LOCAL IPA CA                                             CT,C,C

So for each NSSDB, to delete the certificate I had to execute the certutil command twice. For the 389DS NSSDB, the command was:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -D -n "IPA.LOCAL IPA CA"

The commands for the other NSSDBs were similar. With the problematic certificates removed, I tried running ipa-certupdate again:

[root@f27-2 ~]# ipa-certupdate
trying https://f27-2.ipa.local/ipa/session/json
[try 1]: Forwarding 'ca_is_enabled/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-2.ipa.local/ipa/session/json'
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-2 ~]# echo $?
0

This time there were no errors. certutil shows an IPA.LOCAL IPA CA certificate in the database, and it’s the right certificate:

[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

CN=alt-f27-2.ipa.local,O=Example Organization                u,u,u
CN=CA,O=Example Organization                                 C,,
CN=Certificate Authority,O=IPA.LOCAL 201711061603            CT,C,C
CN=Certificate Authority,O=IPA.LOCAL 201711061603            CT,C,C
IPA.LOCAL IPA CA                                             C,,
[root@f27-2 ~]# certutil -d /etc/dirsrv/slapd-IPA-LOCAL -L -n 'IPA.LOCAL IPA CA'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 11 (0xb)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Thu Nov 09 05:11:12 2017
            Not After : Mon Nov 09 05:11:12 2037
        Subject: "CN=IPA.LOCAL CA 2017.11.09"
        ...

I also confirmed that the old and new CA certificates are present in the /etc/ipa/ca.crt and /etc/pki/ca-trust/source/ipa.p11-kit files. So all the certificate databases now include the new CA certificate.

Renewing the CA certificate (again)

Observe that (in the self-signed FreeIPA CA case) the Issuer DN of the new CA certificate is the Subject DN of the old CA certificate. So we have not quite reached our goal. The original CA certificate was self-signed, so we want a self-signed certificate with the new Subject.

Renewing the CA certificate one more time should result in a self-signed certificate. The current situation is not likely to result in operational issues. So you can consider this an optional step. Anyhow, let’s give it a go:

[root@f27-2 ~]# getcert list -i 20171106062742 | egrep 'status|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=IPA.LOCAL CA 2017.11.09
[root@f27-2 ~]# getcert resubmit -i 20171106062742
Resubmitting "20171106062742" to "dogtag-ipa-ca-renew-agent".
[root@f27-2 ~]# sleep 5
[root@f27-2 ~]# getcert list -i 20171106062742 | egrep 'status|issuer|subject'
        status: MONITORING
        issuer: CN=IPA.LOCAL CA 2017.11.09
        subject: CN=IPA.LOCAL CA 2017.11.09

Now we have a self-signed CA cert with the new Subject DN. This step has also confirmed that certificate issuance is working fine with the new CA subject.

Renewing FreeIPA service certificates

This is another optional step, because we have kept the old CA certificate in the trust store. I want to check that certificate renewals via the FreeIPA framework are working, and this is a fine way to do that.

I’ll renew the HTTP service certificate. This deployment is using an externally-signed HTTP certificate so first I had to track it:

[root@f27-2 ~]# getcert start-tracking \
  -d /etc/httpd/alias -p /etc/httpd/alias/pwdfile.txt \
  -n 'CN=alt-f27-2.ipa.local,O=Example Organization' \
  -c IPA -D 'f27-2.ipa.local' -K 'HTTP/f27-2.ipa.local@IPA.LOCAL'
New tracking request "20171121071700" added.

Then I resubmitted the tracking request. I had to include the -N <SUBJECT> option because the current Subject DN would be rejected by FreeIPA. I also had to include the -K <PRINC_NAME> option due to a bug in Certmonger.

[root@f27-2 ~]# getcert resubmit -i 20171121073608 \
  -N 'CN=f27-2.ipa.local' \
  -K 'HTTP/f27-2.ipa.local@IPA.LOCAL'
Resubmitting "20171121073608" to "IPA".
[root@f27-2 ~]# sleep 5
[root@f27-2 ~]# getcert list -i 20171121073608 \
  | egrep 'status|error|issuer|subject'
      status: MONITORING
      issuer: CN=IPA.LOCAL CA 2017.11.09
      subject: CN=f27-2.ipa.local,O=IPA.LOCAL 201711061603

The renewal succeeded, proving that certificate issuance via the FreeIPA framework is working.

Checking replica health

At this point, I’m happy with the state of the FreeIPA server. But so far I have only dealt with one server in the topology (the renewal master, whose hostname is f27-2.ipa.local). What about other CA replicas?

I log onto f27-1.ipa.local (a CA replica). As a first step I execute ipa-certupdate. This failed in the same way as on the renewal master, and the steps to resolve were the same.

Next I tell Certmonger to resubmit the CA certificate request. On a CA replica this should not actually renew the certificate, only retrieve the new certificate from the LDAP certificate store:

[root@f27-1 ~]# getcert list -i 20171106064548 \
  | egrep 'status|error|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603
[root@f27-1 ~]# getcert resubmit -i 20171106064548
Resubmitting "20171106064548" to "dogtag-ipa-ca-renew-agent".
[root@f27-1 ~]# sleep 30
[root@f27-1 ~]# getcert list -i 20171106064548 | egrep 'status|error|issuer|subject'
        status: MONITORING
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201711061603
        subject: CN=Certificate Authority,O=IPA.LOCAL 201711061603

Well, that did not work. Instead of retrieving the new CA certificate from LDAP, the CA replica issued a new certificate:

[root@f27-1 ~]# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 268369927 (0xfff0007)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        Validity:
            Not Before: Tue Nov 21 08:18:09 2017
            Not After : Fri Nov 06 06:26:21 2037
        Subject: "CN=Certificate Authority,O=IPA.LOCAL 201711061603"
        ...

This was caused by the first problem we faced when renewing the CA certificate with a new Subject DN. Once again, a mismatch between the Subject DN in the CSR and the FreeIPA CA’s Subject DN has confused the renewal helper.

The resolution in this case is to delete all the certificates with nickname caSigningCert cert-pki-ca or IPA.LOCAL IPA CA from Dogtag’s NSSDB, then add the new CA certificate to the NSSDB. Then run ipa-certupdate again. Dogtag must not be running during this process:

[root@f27-1 ~]# systemctl stop pki-tomcatd@pki-tomcat
[root@f27-1 ~]# cd /etc/pki/pki-tomcat/alias
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
[root@f27-1 ~]# certutil -d . -D -n 'caSigningCert cert-pki-ca'
certutil: could not find certificate named "caSigningCert cert-pki-ca": SEC_ERROR_BAD_DATABASE: security library: bad database.
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
[root@f27-1 ~]# certutil -d . -D -n 'IPA.LOCAL IPA CA'
certutil: could not find certificate named "IPA.LOCAL IPA CA": SEC_ERROR_BAD_DATABASE: security library: bad database.
[root@f27-1 ~]# certutil -d . -A \
    -n 'caSigningCert cert-pki-ca' -t 'CT,C,C' < /root/ipa-ca.pem
[root@f27-1 ~]# ipa-certupdate
trying https://f27-1.ipa.local/ipa/json
[try 1]: Forwarding 'ca_is_enabled' to json server 'https://f27-1.ipa.local/ipa/json'
[try 1]: Forwarding 'ca_find/1' to json server 'https://f27-1.ipa.local/ipa/json'
Systemwide CA database updated.
Systemwide CA database updated.
The ipa-certupdate command was successful
[root@f27-1 ~]# systemctl start pki-tomcatd@pki-tomcat

Dogtag started without issue and I was able to issue a certificate via the ipa cert-request command on this replica.

Discussion

It took a while and required a lot of manual effort, but I reached the goal of changing the CA Subject DN. The deployment seems to be operational, although my testing was not exhaustive and there may be breakage that I did not find.

One of the goals was to define the process for both self-signed and externally-signed CAs. I did not deal with the externally-signed CA case. This article (and the process of writing it) was long enough without it! But much of the process, and problems encountered, will be the same.

There are some important concerns and caveats to be aware of.

First, CRLs generated after the Subject DN change may be bogus. They will be issued by the new CA but will contain serial numbers of revoked certificates that were issued by the old CA. Such assertions are invalid but not harmful in practice because those serial numbers will never be reused with the new CA. This is an implementation detail of Dogtag and not true in general.

But there is a bigger problem related to CRLs. After the CA name change, the old CA will never issue another CRL. This means that revoked certificates with the old Issuer DN will never again appear on a CRL issued by the old CA. Worse, the Dogtag OCSP responder errors when you query the status of a certificate with the old Issuer DN. In sum, this means that there is no way for Dogtag to revoke a certificate with the old Issuer DN. Because many systems “fail open” in the event of missing or invalid CRLs or OCSP errors, this is a potentially severe security issue.

Changing a FreeIPA installation’s CA Subject DN, whether by the procedure outlined in this post or by any other, is unsupported. If you try to do it and break your installation, we (the FreeIPA team) may try to help you recover, to a point. But we can’t guarantee anything. Here be dragons and all that.

If you think you need to change your CA Subject DN and have not read the previous post on this topic, please go and read it. It proposes some alternatives that, if applicable, avoid the messy process and security issues detailed here. Despite showing you how to change a FreeIPA installation’s CA Subject DN, my advice remains: don’t do it. I hope you will heed it.

November 22, 2017 12:00 AM

November 20, 2017

Fraser Tweedale

Changing a CA’s Subject DN; Part I: Don’t Do That


When you deploy an X.509 certificate authority (CA), you choose a Subject Distinguished Name for that CA. It is sometimes abbreviated as Subject DN, Subject Name, SDN or just Subject.

The Subject DN cannot be changed; it is “for life”. But sometimes someone wants to change it anyway. In this article I’ll speculate why someone might want to change a CA’s Subject DN, discuss why it is problematic to do so, and propose some alternative approaches.

What is the Subject DN?

A distinguished name (DN) is a sequence of sets of name attribute types and values. Common attribute types include Common Name (CN), Organisation (O), Organisational Unit (OU), Country (C) and so on. DNs are encoded in ASN.1, but have a well defined string representation. Here’s an example CA subject DN:

CN=DigiCert Global Root CA,OU=www.digicert.com,O=DigiCert Inc,C=US

All X.509 certificates contain an Issuer DN field and a Subject DN field. If the same value is used for both issuer and subject, it is a self-signed certificate. When a CA issues a certificate, the Issuer DN on the issued certificate shall be the Subject DN of the CA certificate. This relationship is a “link” in the chain of signatures from some root CA to end entity (or leaf) certificate.
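
You can inspect both fields on any certificate, for example with openssl (the file name here is a placeholder):

openssl x509 -in some-cert.pem -noout -subject -issuer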

The Subject DN uniquely identifies a CA. It is the CA. A CA can have multiple concurrent certificates, possibly with different public keys and key types. But if the Subject DN is the same, they are just different certificates for a single CA. Corollary: if the Subject DN differs, it is a different CA even if the key is the same.

CA Subject DN in FreeIPA

A standard installation of FreeIPA includes a CA. It can be a root CA or it can be signed by some other CA (e.g. the Active Directory CA of the organisation). As of FreeIPA v4.5 you can specify any CA Subject DN. Earlier versions required the subject to start with CN=Certificate Authority.

If you don’t explicitly specify the subject during installation, it defaults to CN=Certificate Authority, O=EXAMPLE.COM (replace EXAMPLE.COM with the actual realm name).
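
With FreeIPA 4.5 and later the subject can be chosen at installation time with the --ca-subject option, passed along with the usual installation options; for example (the DN below is just an illustration):

ipa-server-install --ca-subject 'CN=EXAMPLE.COM Root CA,O=Example Organization,C=US'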

Why change the CA Subject DN?

Why would someone want to change a CA’s Subject DN? Usually it is because there is some organisational or regulatory requirement for the Subject DN to have a particular form. For whatever reason the Subject DN doesn’t comply, and now they want to bring it into compliance. In the FreeIPA case, we often see that the default CA Subject DN was accepted, only to later realise that a different name is needed.

To be fair, the FreeIPA installer does not prompt for a CA Subject DN but rather uses the default form unless explicitly told otherwise via options. Furthermore, the CA Subject DN is not mentioned in the summary of the installation parameters prior to confirming and proceeding with the installation. And there are the aforementioned restrictions in FreeIPA < v4.5. So in most cases where a FreeIPA administrator wants to change the CA Subject DN, it is not because they chose the wrong one, rather they were not given an opportunity to choose the right one.

Implications of changing the CA Subject DN

In the X.509 data model the Subject DN is the essence of a CA. So what happens if we do change it? There are several areas of concern, and we will look at each in turn.

Certification paths

Normally when you renew a CA certificate, you don’t need to keep the old CA certificates around in your trust stores. If the new CA certificate is within its validity period you can just replace the old certificate, and everything will keep working.

But if you change the Subject DN, you need to keep the old certificate around, because previously issued certificates will bear the old Issuer DN. Conceptually this is not a problem, but many programs and libraries cannot cope with multiple subjects using the same key. In this case the only workaround is to reissue every certificate, with the new Issuer DN. This is a nightmare.

CRLs

A certificate revocation list is a signed list of non-expired certificates that have been revoked. A CRL issuer is either the CA itself, or a trusted delegate. A CRL signing delegate has its own signing key and an X.509 certificate issued by the CA, which asserts that the subject is a CRL issuer. Like certificates, CRLs have an Issuer DN field.

So if the CA’s Subject DN changes, then CRLs issued by that CA must use the new name in the Issuer field. But recall that certificates are uniquely identified by the Issuer DN and Serial (think of this as a composite primary key). So if the CRL issuer changes (or the issuer of the CRL issuer), all the old revocation information is invalid. Now you must maintain two CRLs:

  • One for the old CA Subject. Even after the name change, this CRL may grow as certificates that were issued using the old CA subject are revoked.
  • One for the new CA Subject. It will start off empty.

If a CRL signing delegate is used, there is further complexity. You need two separate CRL signing certificates (one with the old Issuer DN, one with the new), and must ensure that each CRL is signed by the delegate certificate whose Issuer DN matches the certificates it covers.

Suffice to say, a lot of CA programs do not handle these scenarios nicely or at all.

OCSP

The Online Certificate Status Protocol is a protocol for checking the revocation status of a single certificate. Like CRLs, OCSP responses may be signed by the issuing CA itself, or a delegate.

As in the CRL delegation case, different OCSP delegates must be used depending on which DN was the Issuer of the certificate whose status is being checked. If performing direct OCSP signing and identifying the Responder ID by name, then the old or new name would be included depending on the Issuer of the certificate.

Performing the change

Most CA programs do not offer a way to change the Subject DN. This is not surprising, given that the operation just doesn’t fit into X.509 at all, to say nothing of the implementation considerations that arise.

It may be possible to change the CA Subject DN with some manual effort. In a follow-up post I’ll demonstrate how to change the CA Subject DN in a FreeIPA deployment.

Alternative approaches

I have outlined reasons why renaming a CA is a Bad Idea. So what other options are there?

Whether any of the following options are viable depends on the use case or requirements. They might not be viable. If you have any other ideas about this I would love to have your feedback! So, let's look at a couple of options.

Do nothing

If you only want to change the CA Subject DN for cosmetic reasons, don’t. Unless there is a clear business or organisational imperative, just accept the way things are. Your efforts would be better spent somewhere else, I promise!

Re-chaining your CA

If there is a requirement for your root CA to have a Subject DN of a particular form, you could create a CA that satisfies the requirement somewhere else (e.g. a separate instance of Dogtag or even a standalone OpenSSL CA). Then you can re-chain your FreeIPA CA up to this new external CA. That is, you renew the CA certificate, but the issuer of the new IPA CA certificate is the new external CA.

The new external CA becomes a trusted root CA, and your FreeIPA infrastructure and clients continue to function as normal. The FreeIPA CA is now an intermediate CA. No certificates need to be reissued, although some server configurations may need to be updated to include the new FreeIPA CA in their certificate chains.
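In a FreeIPA deployment, re-chaining amounts to an externally-signed renewal of the CA certificate. A minimal sketch, assuming the usual CSR location and hypothetical file names (check ipa-cacert-manage(1) for your version):

# On the renewal master: switch the IPA CA to externally-signed and generate a CSR
ipa-cacert-manage renew --external-ca
# Have the new external CA sign /var/lib/ipa/ca.csr, then install the resulting chain
ipa-cacert-manage renew --external-cert-file=/root/ipa-ca.crt --external-cert-file=/root/external-ca.crt
# Distribute the updated certificate chain to servers and clients
ipa-certupdate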

Subordinate CA

If certain end-entity certificates have to be issued by a CA whose Subject DN meets certain requirements, you could create a subordinate CA (or sub-CA for short) with a compliant name. That is, the FreeIPA CA issues an intermediate CA certificate with the desired Subject DN, and that CA issues the leaf certificates.

FreeIPA supports Dogtag lightweight sub-CAs as of v4.4 and there are no restrictions on the Subject DN (except uniqueness). Dogtag lightweight CAs live within the same Dogtag instance as the main FreeIPA CA. See ipa help ca for plugin documentation. One major caveat is that CRLs are not yet supported for lightweight CAs (there is an open ticket).
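Creating a lightweight sub-CA with a compliant name is a single command. A sketch, using a hypothetical CA name and Subject DN:

ipa ca-add compliance-ca --subject="CN=Example Compliant CA,O=EXAMPLE.COM" --desc="Sub-CA with the required Subject DN"

Certificate profiles and CA ACLs can then be used to control which end-entity certificates get issued from this sub-CA.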

You could also use the FreeIPA CA to issue a CA certificate for some other CA program (possibly another deployment of Dogtag or FreeIPA).

Conclusion

In this post I explained what a CA’s Subject DN is, and how it is an integral part of how X.509 works. We discussed some of the conceptual and practical issues that arise when you change a CA’s Subject DN. In particular, path validation, CRLs and OCSP are affected, and a lot of software will break when encountering a “same key, different subject” scenario.

The general recommendation for changing a CA’s subject DN is don’t. But if there is a real business reason why the current subject is unsuitable, we looked at a couple of alternative approaches that could help: re-chaining the CA, and creating sub-CAs.

In my next post we will have an in-depth look at how to change a FreeIPA CA's Subject DN: how to do it, and how to deal with the inevitable breakage.

November 20, 2017 12:00 AM

November 15, 2017

Adam Young

Different CloudForms Catalogs for Different Groups

One of the largest value propositions of DevOps is the concept of Self Service provisioning. If you can remove human interaction from resource allocation, you can reduce both the response time and the likelihood of error in configuration. Red Hat CloudForms has a self service feature that allows a user to select from predefined services. You may wish to show different users different catalog items. This might be for security reasons, such as the set of credentials required and provided, or merely to reduce clutter and focus the end user on specific catalog items. Perhaps some items are still undergoing testing and are not ready for general consumption.

Obviously, these predefined services may not match your entire user population.

I’ve been working on setting up a CloudForms instance where members of different groups see different service catalogs. Here is what I did.

Tags are the primary tool used to match up users and their service catalogs. Specifically, a user will only see a catalog item if their group definition matches the Provisioning Scope tag of the catalog item. While you can give some catalog items a Provisioning Scope of All, you probably want to scope other items down to the target audience.

I have a demonstration setup based on IdM and CloudForms integration. When users log in to the CloudForms appliance, one of the user groups managed by LDAP will be used to select their CloudForms group. The CloudForms group has a modified Provisioning Scope tag that will be used to select items from the service catalog.

I also have a top level tenant named “North America” that is used to manage the scope of the tags later on. I won't talk through setting this up, as most CloudForms deployments have something set as a top level tenant.

I'm not going to go through the steps to create a new catalog item. There are other tutorials that go through this in detail.

My organization is adding support for statisticians.  Specifically, we need to provide support for VMs that are designed to support a customized version of the R programming environment.  All users that need these systems will be members of the stats group in IdM.  We want to be able to tag these instances with the stats Provisioning Scope as well.  The user is in the cloudusers group as well, which is required to provide access to the CloudForms appliance.

We start by having our sample user log in to the web UI. This has the side effect of prepopulating the user and group data. We could do this manually, but this way is less error prone, if a bit more of a hassle.

My user currently only has a single item in her service catalog; the PostgreSQL appliance we make available to all developers.  This allows us to have a standard development environment for database work.

Log out and log back in as an administrator.  Here comes the obscure part.

Provisioning Scope tags are limited to a set of valid values. By default these values are All and EVMGroup-user_self_service; the second value matches a group of the same name. In order to add an option, we need to modify the tag category associated with this tag.

  1. As an administrator, on the top right corner of the screen, click on your user name, and select the Configuration option from the dropdown.
  2. Select your region, in my case this is region 1.
  3. Across the top of the screen, you will see Settings Region 1 and a series of tabs, most of which have the name of your tenant (those of you that know my long-standing issue with this term are probably grinning at my discomfort). Since my top level tenant is “North America” I have a tab called North America Tags, which I select. Select accordingly.
  4. Next to Category select “Provisioning Scope” from the drop down and you can see my existing set of custom tag values for Provisioning Scope.  Click on <New Entry> to add a new value, which I will call stats. I also use stats for the description.
  5. Click the Add button to the right.  See Below.

Now we can edit the newly defined “R Project” service to limit it to this provisioning scope.

  1. Navigate to Services->Catalogs->Catalog Items.
  2. Select the “R Project” Service.
  3. Click on the Policy  dropdown and select “Edit Tags”
  4. Click on the drop down to the right of “Select a customer tag to assign” (it is probably set on “Auto Approve -Max CPU *”) and scroll down to Provisioning Scope.
  5. The dropdown to the right defaults to “<Select a Value to Assign>”. Select it and scroll down to the new value; for me, this is stats. The new item will be added to the list.
  6. Click the Save button in the lower right of the screen.

Your list should look like this:

Finally, create the association between this provisioning scope and the stats group.

  1. From the dropdown on the top right of the screen that has your username, select Configuration.
  2. Expand the Access Control accordion
  3. Select groups.
  4. From the Configuration dropdown, select “Add a new Group”
  5. Select a Role for the user.  I use EvmRole-user_self_service
  6. Select a Project/Tenant for the user.
  7. Click on the checkbox labeled “Look Up External Authentication Groups”
  8. A new field appears called “User to Look Up.” I am going to use the “statuser” I created for this example, and click retrieve.
  9. The dropdown under the LDAP Groups for User is now populated.  I select stats.

To assign the tag for this group:

  1. Scroll down to the bottom of the page
  2. find and expand the “Provisioning Scope” tag
  3. Select “stats”
  4. Click the Add button in the bottom right corner of the page.

See Below.

Now when statuser logs in  to the self service web UI, they see both of the services provided:

 

One Big Caveat that has messed me up a few times:  a user only has one group active at a time.  If a user is a member of two groups, CloudForms will select one of them as the active group.  Services assigned only to the non-active group will not show up in the service catalog.  In my case, I had a group called cloudusers, and since all users are a member of that group, they would only see the Provisioning Scope, and thus the catalog items, for cloudusers, and not the stats group.

The Self Service webUI allows the user to change group to any of the other groups to which they are assigned.

The best option is to try and maintain a one to many relationship between groups and users;  constrain most users to a single group to avoid confusion.

This has been a long post.  The web UI for CloudForms requires a lot of navigation, and the concepts required to get this to work required more explanation than I originally had planned.  As I get more familiar with CloudForms, I’ll try to show how these types of operations can be automated from the command line, converted to Ansible playbooks, and thus checked in to version control.
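As a taste of what that automation could look like, here is a rough sketch using the CloudForms/ManageIQ REST API to assign a Provisioning Scope tag to a catalog item. The hostname, credentials, service template ID and the prov_scope category name are all assumptions; verify them against your own appliance:

# Find the service template ID for the “R Project” item (assumed here to be 42)
curl -k -u admin:password "https://cloudforms.example.com/api/service_templates?expand=resources&attributes=name"

# Assign the Provisioning Scope tag (category prov_scope, value stats) to that template
curl -k -u admin:password -X POST -H "Content-Type: application/json" \
  -d '{"action": "assign", "resources": [{"category": "prov_scope", "name": "stats"}]}' \
  "https://cloudforms.example.com/api/service_templates/42/tags"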

I’ve also been told that, for simple use cases, it is possible to just put the user groups into separate tenants, and they will see different catalogs.  While that does not allow a single item to be in both catalogs, it is significantly easier to set up.

A Big Thank You to Laurent Domb for editing and corrections.

by Adam Young at November 15, 2017 02:37 AM

November 14, 2017

Nathaniel McCallum

Writing Installer Images Directly With WebUSB

Chrome 61 recently released support for the WebUSB JavaScript API. This allows direct access to USB devices from websites. Somebody should build a website that takes distribution ISOs and writes them directly to USB mass storage devices. This would significantly improve one of the most difficult and error prone steps when installing a Linux distribution such as Fedora.

November 14, 2017 08:31 PM

November 10, 2017

William Brown

Creating yubikey SSH and TLS certificates

Creating yubikey SSH and TLS certificates

Recently yubikeys were shown to have a hardware flaw in the way they generated private keys. This affects their use for PIV identities or SSH keys.

However, you can generate the keys externally, and load them to the key to prevent this issue.

SSH

First, we’ll create a new NSS DB on an airgapped secure machine (with disk encryption or in memory storage!)
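The certutil commands below read the NSS DB password from pwdfile.txt, so create that file first. A minimal sketch (pick your own passphrase):

echo "use-a-long-random-passphrase" > pwdfile.txt
chmod 600 pwdfile.txt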

certutil -N -d . -f pwdfile.txt

Now into this, we’ll create a self-signed cert valid for 10 years.

certutil -S -f pwdfile.txt -d . -t "C,," -x -n "SSH" -g 2048 -s "cn=william,O=ssh,L=Brisbane,ST=Queensland,C=AU" -v 120

Now we export this to PKCS12 so it can be imported to the key.

pk12util -o ssh.p12 -d . -k pwdfile.txt -n SSH

Next we import the key and cert to the hardware in slot 9c

yubico-piv-tool -s9c -i ssh.p12 -K PKCS12 -aimport-key -aimport-certificate -k

Finally, we can display the ssh-key from the token.

ssh-keygen -D /usr/lib64/opensc-pkcs11.so -e

Note: we can make the ssh client always use this by adding the following to .ssh/config:

PKCS11Provider /usr/lib64/opensc-pkcs11.so

TLS Identities

The process is almost identical for user certificates.

First, create the request:

certutil -d . -R -a -o user.csr -f pwdfile.txt -g 4096 -Z SHA256 -v 24 \
--keyUsage digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment --nsCertType sslClient --extKeyUsage clientAuth \
-s "CN=username,O=Testing,L=example,ST=Queensland,C=AU"

Once the request is signed, we should have a user.crt back. Import that to our database:

certutil -A -d . -f pwdfile.txt -i user.crt -a -n TLS -t ",,"

Import our CA certificate also. Next export this to p12:

pk12util -o user.p12 -d . -k pwdfile.txt -n TLS

Now import this to the yubikey - remember to use slot 9a this time!

yubico-piv-tool -s9a -i user.p12 -K PKCS12 -aimport-key -aimport-certificate -k

Done!

November 10, 2017 02:00 PM

Fraser Tweedale

Changing the X.509 signature algorithm in FreeIPA

Changing the X.509 signature algorithm in FreeIPA

X.509 certificates are an application of digital signatures for identity verification. TLS uses X.509 to create a chain of trust from a trusted CA to a service certificate. An X.509 certificate binds a public key to a subject by way of a secure and verifiable signature made by a certificate authority (CA).

A signature algorithm has two parts: a public key signing algorithm (determined by the type of the CA's signing key) and a collision-resistant hash function. The hash function digests the certified data into a small value for which it is hard to find a collision, and it is this value that gets signed.

Computers keep getting faster and attacks on cryptography always get better. So over time older algorithms need to be deprecated, and newer algorithms adopted for use with X.509. In the past the MD5 and SHA-1 digests were often used with X.509, but today SHA-256 (a variant of SHA-2) is the most used algorithm. SHA-256 is also the weakest digest accepted by many programs (e.g. web browsers). Stronger variants of SHA-2 are widely supported.

FreeIPA currently uses the sha256WithRSAEncryption signature algorithm by default. Sometimes we get asked about how to use a stronger digest algorithm. In this article I’ll explain how to do that and discuss the motivations and implications.

Implications of changing the digest algorithm

Unlike re-keying or changing the CA’s Subject DN, re-issuing a certificate signed by the same key, but using a different digest, should Just Work. As long as a client knows about the digest algorithm used, it will be able to verify the signature. It’s fine to have a chain of trust that uses a variety of signature algorithms.

Configuring the signature algorithm in FreeIPA

The signature algorithm is configured in each Dogtag certificate profile. Different profiles can use different signature algorithms. The public key signing algorithm depends on the CA’s key type (e.g. RSA) so you can’t change it; you can only change the digest used.

Modifying certificate profiles

Before FreeIPA 4.2 (RHEL 7.2), Dogtag stored certificate profile configurations as flat files. Dogtag 9 stores them in /var/lib/pki-ca/profiles/ca and Dogtag >= 10 stores them in /var/lib/pki/pki-tomcat/ca/profiles/ca. When Dogtag is using file-based profile storage you must modify profiles on all CA replicas for consistent behaviour. After modifying a profile, Dogtag requires a restart to pick up the changes.

As of FreeIPA 4.2, Dogtag uses LDAP-based profile storage. Changes to profiles get replicated among the CA replicas, so you only need to make the change once. Restart is not required. The ipa certprofile plugin provides commands for importing, exporting and modifying certificate profiles.

Because of the variation among versions, I won’t detail the process of modifying profiles. We’ll look at what modifications to make, but skip over how to apply them.

Profile configuration changes

For service certificates, the profile to modify is caIPAserviceCert. If you want to renew the CA signing cert with a different algorithm, modify the caCACert profile. The relevant profile policy components are signingAlgConstraintImpl and signingAlgDefaultImpl. Look for these components in the profile configuration:

policyset.serverCertSet.8.constraint.class_id=signingAlgConstraintImpl
policyset.serverCertSet.8.constraint.name=No Constraint
policyset.serverCertSet.8.constraint.params.signingAlgsAllowed=SHA1withRSA,SHA256withRSA,SHA512withRSA,MD5withRSA,MD2withRSA,SHA1withDSA,SHA1withEC,SHA256withEC,SHA384withEC,SHA512withEC
policyset.serverCertSet.8.default.class_id=signingAlgDefaultImpl
policyset.serverCertSet.8.default.name=Signing Alg
policyset.serverCertSet.8.default.params.signingAlg=-

Update the policyset.<name>.<n>.default.params.signingAlg parameter; replace the - with the desired signing algorithm. (I set it to SHA512withRSA.) Ensure that the algorithm appears in the policyset.<name>.<n>.constraint.params.signingAlgsAllowed parameter (if not, add it).
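For example, with SHA512withRSA (which already appears in the allowed list above) the default line becomes:

policyset.serverCertSet.8.default.params.signingAlg=SHA512withRSA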

After applying this change, certificates issued using the modified profile will use the specified algorithm.

Results

After modifying the caIPAserviceCert profile, we can renew the HTTP certificate and see that the new certificate uses SHA512withRSA. Use getcert list to find the Certmonger tracking request ID for this certificate. We find the tracking request in the output:

...
Request ID '20171109075803':
  status: MONITORING
  stuck: no
  key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
  certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
  CA: IPA
  issuer: CN=Certificate Authority,O=IPA.LOCAL
  subject: CN=rhel69-0.ipa.local,O=IPA.LOCAL
  expires: 2019-11-10 07:53:11 UTC
  ...
...

So the tracking request ID is 20171109075803. Now resubmit the request:

[root@rhel69-0 ca]# getcert resubmit -i 20171109075803
Resubmitting "20171109075803" to "IPA".

After a few moments, check the status of the request:

[root@rhel69-0 ca]# getcert list -i 20171109075803
Number of certificates and requests being tracked: 8.
Request ID '20171109075803':
  status: MONITORING
  stuck: no
  key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
  certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
  CA: IPA
  issuer: CN=Certificate Authority,O=IPA.LOCAL
  subject: CN=rhel69-0.ipa.local,O=IPA.LOCAL
  expires: 2019-11-11 00:02:56 UTC
  ...

We can see by the expires field that renewal succeeded. Pretty-printing the certificate shows that it is using the new signature algorithm:

[root@rhel69-0 ca]# certutil -d /etc/httpd/alias -L -n 'Server-Cert'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 12 (0xc)
        Signature Algorithm: PKCS #1 SHA-512 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL"
        Validity:
            Not Before: Fri Nov 10 00:02:56 2017
            Not After : Mon Nov 11 00:02:56 2019
        Subject: "CN=rhel69-0.ipa.local,O=IPA.LOCAL"

It is using SHA-512/RSA. Mission accomplished.

Discussion

In this article I showed how to configure the signing algorithm in a Dogtag certificate profile. Details about how to modify profiles in particular versions of FreeIPA were out of scope.

In the example I modified the default service certificate profile caIPAserviceCert to use SHA512withRSA. Then I renewed the HTTP TLS certificate to confirm that the configuration change had the intended effect. To change the signature algorithm on the FreeIPA CA certificate, you would modify the caCACert profile then renew the CA certificate. This would only work if the FreeIPA CA is self-signed. If it is externally-signed, it is up to the external CA what digest to use.

In FreeIPA version 4.2 and later, we support the addition of custom certificate profiles. If you want to use a different signature algorithm for a specific use case, instead of modifying the default profile (caIPAserviceCert) you might add a new profile.
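A hedged sketch of importing such a profile (the profile name, description and configuration file are assumptions; the configuration would be a copy of caIPAserviceCert with the signingAlg changes shown earlier):

ipa certprofile-import caIPAserviceCertSHA512 --file caIPAserviceCertSHA512.cfg --desc "Service certs signed with SHA512withRSA" --store TRUE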

The default signature digest algorithm in Dogtag is currently SHA-256. This is appropriate for the present time. There are few reasons why you would need to use something else. Usually it is because of an arbitrary security decision imposed on FreeIPA administrators. There are currently no plans to make the default signature algorithm configurable. But you can control the signature algorithm for a self-signed FreeIPA CA certificate via the ipa-server-install --ca-signing-algorithm option.

In the introduction I mentioned that the CA’s key type determines the public key signature algorithm. That was hand-waving; some key types support multiple signature algorithms. For example, RSA keys support two signature algorithms: PKCS #1 v1.5 and RSASSA-PSS. The latter is seldom used in practice.

The SHA-2 family of algorithms (SHA-256, SHA-384 and SHA-512) are the “most modern” digest algorithms standardised for use in X.509 (RFC 4055). The Russian GOST R digest and signature algorithms are also supported (RFC 4491) although support is not widespread. In 2015 NIST published SHA-3 (based on the Keccak sponge construction). The use of SHA-3 in X.509 has not yet been standardised. There was an Internet-Draft in 2017, but it expired. The current cryptanalysis of SHA-2 suggests there is no urgency to move to SHA-3. But it took a long time to move from SHA-1 (which is now insecure for applications requiring collision resistance) to SHA-2. Therefore it would be good to begin efforts to standardise SHA-3 in X.509 and add library/client support as soon as possible.

November 10, 2017 12:00 AM

November 06, 2017

William Brown

What's the problem with NUMA anyway?

What’s the problem with NUMA anyway?

What is NUMA?

Non-Uniform Memory Architecture is a method of separating RAM and memory management units so that they are associated with CPU sockets. The reason for this is performance - if multiple sockets share an MMU, they cause each other to block, delaying your CPU.

To improve this, each NUMA region has its own MMU and RAM associated. If a CPU can access its local MMU and RAM, this is very fast, and does not prevent another CPU from accessing its own. For example:

CPU 0   <-- QPI --> CPU 1
  |                   |
  v                   v
MMU 0               MMU 1
  |                   |
  v                   v
RAM 0               RAM 1

For example, on the following system, we can see 1 numa region:

# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 12188 MB
node 0 free: 458 MB
node distances:
node   0
  0:  10

On this system, we can see two:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 0 size: 32733 MB
node 0 free: 245 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 32767 MB
node 1 free: 22793 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

This means that on the second system there is 32GB of RAM accessible per NUMA region, but the system has 64GB in total.

The problem

The problem arises when a process running on NUMA region 0 has to access memory from another NUMA region. Because there is no direct connection between CPU 0 and RAM 1, we must communicate with our neighbour CPU 1 to do this for us. IE:

CPU 0 --> CPU 1 --> MMU 1 --> RAM 1

Not only do we pay a time delay price for the QPI communication between CPU 0 and CPU 1, but now CPU 1's processes are waiting on MMU 1 because we are retrieving memory on behalf of CPU 0. This is very slow (and can be seen in the node distances in the numactl --hardware output).

Today’s work around

The work around today is to limit your Directory Server instance to a single NUMA region. So for our example above, we would limit the instance to NUMA region 0 or 1, and treat the instance as though it only has access to 32GB of local memory.
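One way to do this pinning is with numactl when the instance is started. A rough sketch, assuming you want everything on node 0 (the instance name, paths and startup mechanism are placeholders - adapt to however your Directory Server is launched):

# Restrict the Directory Server process to the CPUs and memory of NUMA node 0
numactl --cpunodebind=0 --membind=0 /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-example -i /var/run/dirsrv/slapd-example.pid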

It’s possible to run two instances of DS on a single server, pinning them to their own regions and using replication between them to provide synchronisation. You’ll need a load balancer to fix up the TCP port changes, or you need multiple addresses on the system for listening.

The future

In the future, we’ll be adding support for better copy-on-write techniques that allow the cores to better cache content after a QPI negotiation - but we still have to pay the transit cost. We can minimise this as much as possible, but there is no way today to avoid this penalty. To use all your hardware on a single instance, there will always be a NUMA cost somewhere.

The best solution is as above: run an instance per NUMA region, and internally provide replication for them. Perhaps we’ll support an automatic configuration of this in the future.

November 06, 2017 02:00 PM

October 24, 2017

Red Hat Blog

Understanding Identity Management Client Enrollment Workflows

Enrolling a client system into Identity Management (IdM) can be done with a single command, namely: ipa-client-install. This command will configure SSSD, Kerberos, Certmonger and other elements of the system to work with IdM. The important result is that the system will get an identity and key so that it can securely connect to IdM and perform its operations. However, to get the identity and key, the system must be trusted, otherwise any other system would be able to register and interact with the server. To confirm trust there are four different options:

1. Enrollment by a High Privileged Admin

If the ipa-client-install command is executed by a high privileged admin and this admin uses his or her password to run the command, the client will first use Kerberos to authenticate the admin and will then send a request to the server to perform a client registration as admin. The server will check what the administrator is allowed to do. There are two different permissions at play in this sequence: one is the right to create a host entry and the other is to provision the key. Since this admin has high privileges, the server will create a new host entry for the client and return a generated Kerberos key that the client will store in a file called a keytab. Once this operation is complete, other configuration steps will continue, but they are the same in all four provisioning options.

2. Enrollment by a Low Privileged Admin

If an admin does not have privileges to create a client host entry but has the permission to provision the key to the client, the host entries need to be pre-created. To pre-create entries you will need to define a special account and allow it to only register clients (i.e. create host entries) and not give it permissions to do any other administrative activity. You can then use this account in your scripts or with the automatic provisioning tool of your choice. This account, or the high level admin, will first pre-create host entries in IdM and then the script or low privileged admin can actually “do” the job of provisioning the keys to the client systems. This approach works fine except that it leads to a password being stored verbatim in the scripts or somewhere in a file or in a source control system. Needless to say, from a security point of view this is not the best approach.

3. Enrollment Using One Time Password

An improvement over the previous option is to use a one time registration password. This approach mostly targets automated provisioning as performed by a provisioning system. Red Hat Satellite 6, for example, is capable of provisioning systems and enrolling them with IdM automatically using this method. The flow of operations includes (a command-line sketch follows the list):

  • User initiates the provisioning operation
  • Provisioning server (e.g. Satellite 6) connects to IdM and registers a future host. It is implied that the server has permission to do so.
  • IdM returns a registration password that can be used only once.
  • Provisioning server passes the registration password to the system being deployed.
  • The system being deployed is synthesized and booted.
  • During this first boot the ipa-client-install script is invoked with the registration password.
  • The IdM server recognizes the code and completes enrollment returning the key.
  • After this a normal flow of client configuration continues.
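A minimal command-line sketch of the same flow, assuming a hypothetical host client1.example.com and an account with permission to add hosts (the password shown is example output):

# On the provisioning side: pre-create the host and obtain a one time password
ipa host-add client1.example.com --random
#   Random password: W5YpARl=7M.n

# On the new client, during first boot: enroll using the one time password
ipa-client-install --password 'W5YpARl=7M.n' --unattended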

A similar approach is being implemented in the Nova component of OpenStack, targeting OpenStack 12. In the OpenStack case this procedure is used to give identity to the OpenStack nodes so that they can automatically acquire certificates from IdM for all point-to-point communication between services inside OpenStack.

There is also a community effort to build a set of Ansible modules that would use the same method and enroll clients leveraging Ansible as an orchestration engine.

4. Re-enrollment

Finally, in some cases, an already provisioned system needs to be re-enrolled. This usually happens when the system is re-imaged and re-installed. In this case, going through a registration sequence again is an overhead. Instead, the file with the Kerberos key can be backed up and used once the system is re-imaged and restored. The client will then authenticate using the old key and then request a new key. Please note that the configuration files will also be brought into a canonical state, so if you did some manual or automated customization of the configuration, these changes will be lost. This method is also handy when seeking to repair the configuration of a client and to perform client key rotation if your policies require periodic key rotation.

As a part of the Ansible effort another use case has been identified. What if a client system was lost? It was, for example, a virtual machine in a cloud and was killed for some reason. The server would think that the system is fully functional but in reality the keys are gone on the client side. In this situation one first has to disable the host before attempting to enroll it again. Such a step complicates the provisioning sequence, which should be idempotent to be effective in automation tools like Ansible. This limitation will be addressed in later versions of IdM, allowing the enrollment procedure to be less cumbersome.

Questions? Comments? Are you using an entirely different workflow for client enrollment? As always – I look forward to hearing your thoughts.

by Dmitri Pal at October 24, 2017 02:42 PM

October 18, 2017

Adam Young

Deliberate Elevation of Privileges

“Ooops.” — Me, doing something as admin that I didn’t mean to do.

While the sudo mechanism has some warranted criticism, it is still an improvement on doing everything as the root account. The essential addition that sudo provides for the average sys admin is the ability to only grant themselves system admin when they explicitly want it.

I was recently thinking about a FreeIPA based cluster where the users did not realize that they could get admin permissions by adding themselves to the user group admins. One benefit of the centralized admin account is that a user has to choose to operate as admin to perform the operation. If a hacker gets the user's password, they do not get admin. However, the number of attacks and weaknesses in this approach far outweigh the benefits. Multiple people need to know the password, revoking it for one revokes it for everyone, anyone can change the password, locking everyone else out, and so on.

We instead added a few key individuals to the admins group and changed the password on the admin account.

This heightened degree of security supports the audit trail. Now if someone performs an admin operation, we know which user did it. It involves enabling audit on the Directory Server (I need to learn how to do this!).

It got me thinking, though: what if there was a mechanism like the sudo approach that we could implement to let users temporarily elevate themselves to admin status? Something like a short-term group membership. The requirements, as I see them, are these:

  1. A user has to choose to be admin:  “admin-powers activate!”
  2. A user can downgrade back to non-admin at any point: “admin-powers deactivate!”
  3. Admin powers wear off.  admin-powers only last an hour
  4. No new password has to be memorized for admin-powers
  5. The mechanism for admin-powers has to be resistant to attack.
    1. customizable enough that someone outside the organization can’t guess what they are.
    2. provide some way to prevent shoulder surfing.

I’m going to provide a straw-man here.

  • A REST API protected via SPNEGO
    • another endpoint with client cert possible, too
  • The REST API is password protected with basic-auth.  This is the group password.
  • The IPA service running the web server has the ability to add anyone that is in the “potentaladmins” group to the “admins” group
  • The IPA service also schedules an AT job to remove the user from the group (see the sketch after this list). If an AT entry already exists, remove the older one, so a user can extend their window.
  • A cron job runs each night to remove anyone from the admin group that does not have a current at job scheduled.
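A minimal sketch of the expiring-membership piece, assuming a hypothetical user alice and that the service has rights to manage group membership in IPA (the one hour window matches requirement 3):

# Elevate: add the user to admins and schedule automatic removal in an hour
ipa group-add-member admins --users=alice
echo "ipa group-remove-member admins --users=alice" | at now + 1 hour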

As I said, a strawman, but I think it points in the right direction.  Thoughts?

by Adam Young at October 18, 2017 07:31 PM

James Shubin

Copyleft is Dead. Long live Copyleft!

As you may have noticed, we recently re-licensed mgmt from the AGPL (Affero General Public License) to the regular GPL. This is a post explaining the decision and which hopefully includes some insights at the intersection of technology and legal issues.

Disclaimer:

I am not a lawyer, and these are not necessarily the opinions of my employer. I think I’m knowledgeable in this area, but I’m happy to be corrected in the comments. I’m friends with a number of lawyers, and they like to include disclaimer sections, so I’ll include this so that I blend in better.

Background:

It’s well understood in infrastructure coding that the control of, and trust in the software is paramount. It can be risky basing your business off of a product if the vendor has the ultimate ability to change the behaviour, discontinue the software, make it prohibitively expensive, or in the extreme case, use it as a backdoor for corporate espionage.

While many businesses have realized this, it’s unfortunate that many individuals have not. The difference might be protecting corporate secrets vs. individual freedoms, but that’s a discussion for another time. I use Fedora and GNOME, and don’t have any Apple products, but you might value the temporary convenience more. I also support your personal choice to use the software you want. (Not sarcasm.)

This is one reason why Red Hat has done so well. If they ever mistreated their customers, they’d be able to fork and grow new communities. The lack of an asymmetrical power dynamic keeps customers feeling safe and happy!

Section 13:

The main difference between the AGPL and the GPL is the “Remote Network Interaction” section. Here’s a simplified explanation:

Both licenses require that if you modify the code, you give back your contributions. “Copyleft” is Copyright law that legally requires this share-alike provision. These licenses never require this when using the software privately, whether as an individual or within a company. The thing that “activates” the licenses is distribution. If you sell or give someone a modified copy of the program, then you must also include the source code.

The AGPL extends the GPL in that it also activates the license if that software runs on an application provider's computer, which is common with hosted software-as-a-service. In other words, if you were an external user of a web calendaring solution containing AGPL software, then that provider would have to offer up the code to the application, whereas the GPL would not require this. Neither license would require distribution of code if the application was only available to employees of that company, nor would either require distribution of the software used to deploy the calendaring software.

Network Effects and Configuration Management:

If you’re familiar with the infrastructure automation space, you’re probably already aware of three interesting facts:

  1. Hosted configuration management as a service probably isn’t plausible
  2. The infrastructure automation your product uses isn’t the product
  3. Copyleft does not apply to the code or declarations that describe your configuration

As a result of this, it’s unlikely that the Section 13 requirement of the AGPL would actually ever apply to anyone using mgmt!

A number of high profile organizations outright forbid the use of the AGPL. Google and Openstack are two notable examples. There are others. Many claim this is because the cost of legal compliance is high. One argument I heard is that it’s because they live in fear that their entire proprietary software development business would be turned on its head if some sufficiently important library was AGPL. Despite weak enforcement, and with many companies flouting the GPL, Linux and the software industry have not shown signs of waning. Compliance has even helped their bottom line.

Nevertheless, as a result of misunderstanding, fear and doubt, using the AGPL still cuts off a portion of your potential contributors. Possibly overzealous enforcement has also caused some to fear the GPL.

Foundations and Permissive Licensing:

Why use copyleft at all? Copyleft is an inexpensive way of keeping the various contributors honest. It provides an organization constitution so that community members that invest in the project all get a fair, representative stake.

In the corporate world, there is a lot of governance in the form of “foundations”. The most well-known ones exist in the United States and are usually classified as 501(c)(6) under US Federal tax law. They aren’t allowed to generate a profit, but they exist to fulfill the desires of their dues-paying membership. You’ve probably heard of the Linux Foundation, the .NET foundation, the OpenStack Foundation, and the recent Linux Foundation child, the CNCF. With the major exception being Linux, they primarily fund permissively licensed projects since that’s what their members demand, and the foundation probably also helps convince some percentage of their membership into voluntarily contributing back code.

Running an organization like this is possible, but it certainly adds a layer of overhead that I don’t think is necessary for mgmt at this point.

It’s also interesting to note that of the top corporate contributions to open source, virtually all of the licensing is permissive, usually under the Apache v2 license. I’m not against using or contributing to permissively licensed projects, but I do think there’s a danger if most of our software becomes a monoculture of non-copyleft, and I wanted to take a stand against that trend.

Innovation:

I started mgmt to show that there was still innovation to be done in the automation space, and I think I’ve achieved that. I still have more to prove, but I think I’m on the right path. I also wanted to innovate in licensing by showing that the AGPL isn’t actually  harmful. I’m sad to say that I’ve lost that battle, and that maybe it was too hard to innovate in too many different places simultaneously.

Red Hat has been my main source of funding for this work up until now, and I’m grateful for that, but I’m sad to say that they’ve officially set my time quota to zero. Without their support, I just don’t have the energy to innovate in both areas. I’m sad to say it, but I’m more interested in the technical advancements than I am in the licensing progress it might have brought to our software ecosystem.

Conclusion / TL;DR:

If you, your organization, or someone you know would like to help fund my mgmt work either via a development grant, contract or offer of employment, or if you’d like to be a contributor to the project, please let me know! Without your support, mgmt will die.

Happy Hacking,

James

You can follow James on Twitter for more frequent updates and other random noise.

EDIT: I mentioned in my article that: “Hosted configuration management as a service probably isn’t plausible“. Turns out I was wrong. The splendiferous Nathen Harvey was kind enough to point out that Chef offers a hosted solution! It’s free for five hosts as well!

I was probably thinking more about how I would be using mgmt, and not about the greater ecosystem. If you’d like to build or use a hosted mgmt solution, please let me know!

by purpleidea at October 18, 2017 01:22 AM

October 06, 2017

Red Hat Blog

Picking your Deployment Architecture

In the previous post I talked about Smart Card Support in Red Hat Enterprise Linux. In this article I will drill down into how to select the right deployment architecture depending on your constraints, requirements and availability of the smart card related functionality in different versions of Red Hat Enterprise Linux.

To select the right architecture for a deployment where users would authenticate using smart cards when logging into Linux systems you need to answer a couple of questions.

The main one is “where are my users” and thus “where are my users authenticated”? Are your users going to be in Active Directory, in IdM, or in some other solution? If they are somewhere other than AD or IdM the situation might require a deeper dive, so please reach out to your technical account manager or sales representative. If you want to keep users in Active Directory and have AD as the authoritative source for account information and authentication, you can do it in two ways. The preferred way is to deploy IdM to manage your Linux environment and establish a trust with AD. However this will work only with clients that run version 7.3 and later, since they have an SSSD capable of working with Active Directory and of understanding smart card authentication. For older (i.e. 6.x) clients in this case you might have to use pam_pkcs11 and manage mapping files.

The alternative, if for some valid reason you really can't use trusts (which are highly recommended), would be to deploy IdM and sync accounts from AD. In this case 6.8+ and 7.2+ clients can work against IdM and you will be synchronizing user accounts from Active Directory to IdM. This integration is less preferable since the synchronization approach is much less robust than the trust approach; AD becomes the source of accounts but the real authentication happens against IdM, so if you need authentication auditing you need to do it against IdM.

You can also deploy IdM without ongoing synchronization with AD and manage accounts for your Linux environment purely in IdM. This will work with 6.8+ and 7.2+ clients. And with 7.4 clients you will be able to get Kerberos tickets as a part of smart card authentication allowing Kerberos based SSO between servers and services.

A couple other questions need to be answered.

    • Can I avoid using IdM? Yes you can connect SSSD directly to AD and use smart card authentication since 7.3. With older clients you will have to do mapping via files as described above.
    • How can I handle a small set of Windows servers I have in those scenarios?
      • If you have Active Directory and your users are in Active directory you can connect your Windows systems to Active Directory.
      • If your users are in IdM and there is no AD in the picture there are some ways to configure Windows systems to work with IdM accounts. However this functionality is limited and not supported out of the box. To see what can be done on this front contact your TAM or sales representative. In the future it will be possible to have IdM be the authoritative source for users and expose those users to Windows systems. That would require a feature that is being worked on in IdM's upstream project, FreeIPA. It is called the Global Catalog. With the Global Catalog, users managed by IdM can be exposed to a trusted AD domain and then Windows systems can be connected to that domain. If you are interested in testing such functionality please reach out to the FreeIPA team using the community mailing lists or by opening a case with Red Hat support.

Scenario 1:

So let us take a case when users will be in IdM, certificates are issued by an external CA and it is either a green field deployment or you can upgrade your clients to 7.4. Here is what it will entail:

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Create or load users into IdM
  3. Map certificates to user entries in IdM (see the sketch after this list)
  4. Install your clients using 7.4 and ipa-client-install script
  5. Prepare for smart card authentication on clients and server
  6. Test your smart card authentication on those clients
    1. Console login
    2. SSH (locally)
    3. SSH (remotely)
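Step 3 can be done with the ipa user-add-cert command. A rough sketch, assuming a hypothetical user bob whose certificate is in bob.pem:

# Strip the PEM armour and load the base64-encoded certificate into the user entry
openssl x509 -in bob.pem -outform der | base64 -w 0 > bob.b64
ipa user-add-cert bob --certificate="$(cat bob.b64)"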

Scenario 2:

If the environment has a mixture of clients with different versions before 7.4:

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Create or load users into IdM
  3. Update clients to be at least 6.8 or 7.2
  4. Install your clients on client systems using ipa-client-install script
  5. Prepare for smart card authentication on clients
  6. Publish certificates into user entries
    1. Extract from the card
    2. Publish into IdM
  7. Test your smart card authentication on those clients

Scenario 3:

If you want to leverage trust then the sequence will be the following:

  1. Install the latest IdM version (7.4 at the moment the article was written)
  2. Establish trust with AD
  3. Update clients to be at least 7.3
  4. Install your clients on client systems using ipa-client-install script
  5. Prepare for smart card authentication on clients
  6. Link your AD users with the smart cards
  7. Test your smart card authentication on those clients
    1. Console login
    2. SSH (locally)
    3. SSH (remotely)

As you can see there is unfortunately no support for trust-based smart card authentication for older 6.x clients. I was asked a question about this the other day at the Defense in Depth conference and gave an answer without checking my notes. The truth is that smart card authentication with older clients is possible only if you use IdM as the source of your users. Supporting trusts would require backporting SSSD changes to 6.x, which would be very hard to do at this stage of Red Hat Enterprise Linux 6 support.

For more information about smart card support in identity management see the following documentation.

For more details about lower level support of the smart cards please see the following knowledge base article.

 

by Dmitri Pal at October 06, 2017 02:11 PM

September 28, 2017

Rich Megginson

How to debug "undefined method for nil:NilClass" in OpenShift Aggregated Logging

In OpenShift Aggregated Logging https://github.com/openshift/origin-aggregated-logging the Fluentd pipeline tries very hard to ensure that the data is correct, because it depends on having clean data in the output section in order to construct the index names for Elasticsearch. If the fields and values are not correct, then the index name construction will fail with an unhelpful error like this:

2017-09-28 13:22:22 -0400 [warn]: temporarily failed to flush the buffer. next_retry=2017-09-28 13:22:23 -0400 error_class="NoMethodError"
error="undefined method `[]' for nil:NilClass" plugin_id="object:1c0bd1c"
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'

There is no context about what field might be missing, what tag is matching, or even which plugin it is, the operations output or the applications output (although you do get the plugin_id, which could be used to look up the actual plugin information, if the Fluentd monitoring is enabled).
One solution is to just edit the logging-fluentd ConfigMap, and add a stdout filter in the right place:
## matches
          <filter **>
            @type stdout
          </filter>
          @include configs.d/openshift/output-pre-*.conf
          ...

and dump the time, tag, and record just before the outputs. The problem with this is that it will cause a feedback loop, since Fluentd is reading from its own pod log. The solution to this is to also throw away Fluentd pod logs.
## filters
          @include configs.d/openshift/filter-pre-*.conf
          @include configs.d/openshift/filter-retag-journal.conf
          <match kubernetes.journal.container.fluentd kubernetes.var.log.containers.fluentd**>
            @type null
          </match>

This must come after the filter-retag-journal.conf which identifies and tags Fluentd pod log records. Then restart Fluentd (oc delete pod $fluentd_pod, oc label node, etc.). The Fluentd pod log will now contain data like this:
2017-09-28 13:44:47 -0400 output_tag: {"type":"response","@timestamp":"2017-09-28T17:44:19.524989+00:00","pid":8,"method":"head","statusCode":200,
"req":{"url":"/","method":"head","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},
"res":{"statusCode":200,"responseTime":2,"contentLength":9},
"message":"HEAD / 200 2ms - 9.0B",
"docker":{"container_id":"e1cc1b22d04683645b00de53c0891e284c492358fd2830142f4523ad29eec060"},
"kubernetes":{"container_name":"kibana","namespace_name":"logging","pod_name":"logging-kibana-1-t9tvv",
"pod_id":"358622d8-a467-11e7-ab9a-0e43285e8fce","labels":{"component":"kibana","deployment":"logging-kibana-1",
"deploymentconfig":"logging-kibana","logging-infra":"kibana","provider":"openshift"},
"host":"ip-172-18-0-133.ec2.internal","master_url":"https://kubernetes.default.svc.cluster.local",
"namespace_id":"9dbd679c-a466-11e7-ab9a-0e43285e8fce"},...

Now, if you see a record that is missing @timestamp, or a record from a pod that is missing kubernetes.namespace_name or kubernetes.namespace_id, you know that the exception is caused by one of these missing fields.
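To apply (and later revert) the ConfigMap changes described above, a rough sketch assuming the default logging project and pod labels:

# Edit the Fluentd configuration
oc edit configmap/logging-fluentd -n logging
# Delete the Fluentd pods so the DaemonSet recreates them with the new configuration
oc delete pod -l component=fluentd -n logging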

September 28, 2017 08:07 PM
