FreeIPA Identity Management planet - technical blogs

March 15, 2018

Fraser Tweedale

DN attribute value encoding in X.509

X.509 certificates use the X.500 Distinguished Name (DN) data type to represent issuer and subject names. X.500 names may contain a variety of fields including CommonName, OrganizationName, Country and so on. This post discusses how these values are encoded and compared, and problematic circumstances that can arise.

ASN.1 string types and encodings

ASN.1 offers a large number of string types, including:

  • NumericString
  • PrintableString
  • IA5String
  • UTF8String
  • BMPString
  • …several others

When serialising an ASN.1 object, each of these string types has a different tag. Some of the types share a representation but differ in which characters they allow. For example, NumericString and PrintableString are both represented in DER using one byte per character, but NumericString only allows digits (0-9) and SPACE, whereas PrintableString admits a larger subset of the printable ASCII characters (letters, digits, space and some punctuation). In contrast, BMPString uses two bytes to represent each character; it is equivalent to UTF-16BE. UTF8String, unsurprisingly, uses UTF-8.
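To make the tags and representations concrete, here is an illustrative sketch (the tag values 0x13, 0x0C and 0x1E are the universal-class tags for PrintableString, UTF8String and BMPString; the helper function is hypothetical, for demonstration only):

```python
# Minimal DER TLV builder (short-form lengths only), for illustration.
def der_tlv(tag, content):
    assert len(content) < 128  # short-form length octet only
    return bytes([tag, len(content)]) + content

# The same ASCII characters under three different string types:
printable = der_tlv(0x13, b"Example")                     # PrintableString: 1 byte/char
utf8      = der_tlv(0x0C, "Example".encode("utf-8"))      # UTF8String
bmp       = der_tlv(0x1E, "Example".encode("utf-16-be"))  # BMPString: 2 bytes/char

assert printable[1:] == utf8[1:]  # identical length and content octets for ASCII...
assert printable != utf8          # ...but different tags, so different encodings
assert len(bmp) == 2 + 2 * 7      # BMPString doubles the content length
```

For ASCII-only values, then, the encodings differ only in the leading tag byte; this detail matters later when DNs are compared byte-for-byte.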

ASN.1 string types for X.509 name attributes

Each of the various X.509 name attribute types uses a specific ASN.1 string type. Some types have a size constraint. For example:

X520countryName      ::= PrintableString (SIZE (2))
DomainComponent      ::= IA5String
X520CommonName       ::= DirectoryString (SIZE (1..64))
X520OrganizationName ::= DirectoryString (SIZE (1..64))

Hold on, what is DirectoryString? It is not a universal ASN.1 type; it is specified as a sum (CHOICE) of string types:

DirectoryString ::= CHOICE {
    teletexString     TeletexString,
    printableString   PrintableString,
    universalString   UniversalString,
    utf8String        UTF8String,
    bmpString         BMPString }

Note that a size constraint on DirectoryString propagates to each of the cases. The constraint gives a maximum length in characters, not bytes.
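To make the characters-versus-bytes point concrete, a quick sketch (hypothetical value, using Python only to count):

```python
# A 64-character string satisfies a SIZE (1..64) constraint...
s = "\u00fc" * 64            # 64 copies of 'ü'
assert len(s) == 64          # 64 characters
# ...even though its UTF8String content octets exceed 64 bytes:
assert len(s.encode("utf-8")) == 128
```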

Most X.509 attribute types use DirectoryString, including common name (CN), organization name (O), organizational unit (OU), locality (L), and state or province name (ST). For these attribute types, which encoding should be used? RFC 5280 provides some guidance:

The DirectoryString type is defined as a choice of PrintableString,
TeletexString, BMPString, UTF8String, and UniversalString.  CAs
conforming to this profile MUST use either the PrintableString or
UTF8String encoding of DirectoryString, with two exceptions.

The current version of X.509 only allows PrintableString and UTF8String. Earlier versions allowed any of the types in DirectoryString. The exceptions mentioned are grandfather clauses that permit the use of the now-prohibited types in environments that were already using them.

So for strings containing non-ASCII code points UTF8String is the only type you can use. But for ASCII-only strings, there is still a choice, and the RFC does not make a recommendation on which to use. Both are common in practice.

This poses an interesting question. Suppose two encoded DNs have the same attributes in the same order, but differ in the string encodings used. Are they the same DN?
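As an illustrative sketch (hand-built DER, short-form lengths only, hypothetical helper), here are two byte-distinct encodings of the single-RDN name CN=Example:

```python
OID_CN = b"\x06\x03\x55\x04\x03"  # AttributeType 2.5.4.3 (commonName)

def name_der(string_tag):
    # Name ::= RDNSequence: one RDN (SET) with one AttributeTypeAndValue (SEQUENCE)
    value = bytes([string_tag, 7]) + b"Example"
    atv = b"\x30" + bytes([len(OID_CN) + len(value)]) + OID_CN + value
    rdn = b"\x31" + bytes([len(atv)]) + atv
    return b"\x30" + bytes([len(rdn)]) + rdn

printable_dn = name_der(0x13)  # value as PrintableString
utf8_dn = name_der(0x0C)       # value as UTF8String

# Same attribute, same value, same order - yet byte-wise different:
assert printable_dn != utf8_dn
```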

Comparing DNs

RFC 5280 §7.1 outlines the procedure for comparing DNs. To compare strings you must convert them to Unicode, translate or drop some special-purpose characters, and perform case folding and normalisation. The resulting strings are then compared case-insensitively. According to this rule, DNs that use different string encodings but are otherwise the same are equal.
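A much-simplified sketch of that procedure (RFC 5280 §7.1 delegates to the LDAP string preparation rules of RFC 4518; this toy version only normalises, case-folds and collapses whitespace, omitting the character mapping steps):

```python
import unicodedata

def prep(value):
    # Toy approximation of RFC 4518 string preparation:
    s = unicodedata.normalize("NFKC", value)  # Unicode normalisation
    s = s.casefold()                          # case-insensitive matching
    s = " ".join(s.split())                   # trim and collapse whitespace
    return s

# Values that differ only in DER string type decode to the same characters,
# so after preparation they compare equal:
assert prep("Marianne  Swanson ") == prep("MARIANNE SWANSON")
```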

But the situation is more complex in practice. Earlier versions of X.509 required only binary comparison of DNs. For example, RFC 3280 states:

Conforming implementations are REQUIRED to implement the following
name comparison rules:

   (a)  attribute values encoded in different types (e.g.,
   PrintableString and BMPString) MAY be assumed to represent
   different strings;

   (b)  attribute values in types other than PrintableString are case
   sensitive (this permits matching of attribute values as binary
   objects);
   (c)  attribute values in PrintableString are not case sensitive
   (e.g., "Marianne Swanson" is the same as "MARIANNE SWANSON"); and

   (d)  attribute values in PrintableString are compared after
   removing leading and trailing white space and converting internal
   substrings of one or more consecutive white space characters to a
   single space.

Furthermore, RFC 5280 and earlier versions of X.509 state:

The X.500 series of specifications defines rules for comparing
distinguished names that require comparison of strings without regard
to case, character set, multi-character white space substring, or
leading and trailing white space.  This specification relaxes these
requirements, requiring support for binary comparison at a minimum.

This is a contradiction. The above states that binary comparison of DNs is acceptable, but other sections require a more sophisticated comparison algorithm. The combination of this contradiction, historical considerations and (no doubt) programmer laziness means that many X.509 implementations only perform binary comparison of DNs.

How CAs should handle DN attribute encoding

To ease certification path construction with clients that only perform binary matching of DNs, RFC 5280 states the following requirement:

When the subject of the certificate is a CA, the subject
field MUST be encoded in the same way as it is encoded in the
issuer field (Section 4.1.2.4) in all certificates issued by
the subject CA.  Thus, if the subject CA encodes attributes
in the issuer fields of certificates that it issues using the
TeletexString, BMPString, or UniversalString encodings, then
the subject field of certificates issued to that CA MUST use
the same encoding.

This is confusing wording, but in practical terms there are two requirements:

  1. The Issuer DN on a certificate must be byte-identical to the Subject DN of the CA that issued it.
  2. The attribute encodings in a CA’s Subject DN must not change (e.g. when the CA certificate gets renewed).

If a CA violates either of these requirements, breakage will ensue. Programs that do binary DN comparison will be unable to construct a certification path to the CA.

For end-entity (or leaf) certificates, the subject DN is not used in any links of the certification path. Changing the subject attribute encoding when renewing an end-entity certificate will not break validation, but it could still confuse programs that only do binary comparison of DNs (e.g. they might display two distinct subjects).

Processing certificate requests

What about when processing certificate requests—should CAs respect the attribute encodings in the CSR? In my experience, CA programs are prone to issuing certificates with the subject encoded differently from how it was encoded in the CSR. CAs may do various kinds of validation, substitution or addition of subject name attributes. Or they may enforce the use of a particular encoding regardless of the encoding in the CSR.

Is this a problem? It depends on the client program. In my experience most programs can handle this situation. Problems mainly arise when the issuer or subject encoding changes upon renewal (for the reasons discussed above).

If a CSR-versus-certificate encoding mismatch does cause a problem for you, you may have to create a new CSR with the attribute encodings you expect the CA to use for the certificate. In many programs this is not straightforward, if it is possible at all. If you control the CA, you might be able to configure it to use particular encodings for string attributes, or to respect the encodings in the CSR. The options available, and how to configure them, vary among CA programs.


Conclusion

X.509 requires the use of either PrintableString or UTF8String for most DN attribute types. Strings consisting of printable 7-bit ASCII characters can be represented using either encoding. This ambiguity can lead to problems in certification path construction.

Formally, two DNs that have the same attributes and values are the same DN, regardless of the string encodings used. But many programs only perform binary matching of DNs. To avoid causing problems for such programs, a CA:

  • must ensure that the Issuer DN field on all certificates it issues is identical to its own Subject DN;
  • must ensure that Subject DN attribute encodings on CA certificates it issues to a given subject do not change upon renewal;
  • should ensure that Subject DN attribute encodings on end-entity certificates it issues to a given subject do not change upon renewal.

CAs will often issue certificates with values encoded differently from how they were presented in the CSR. This usually does not cause problems. But if it does, you might be able to configure the client program to produce a CSR with different attribute encodings. If you control the CA, you may be able to configure its treatment of attribute encodings. How to do these things is beyond the scope of this article.

March 15, 2018 12:00 AM

February 26, 2018

William Brown

Smartcards and You - How To Make Them Work on Fedora/RHEL

Smartcards are a great way to authenticate users. They pair a device (something you have) with a PIN (something you know). They prevent password transmission, use strong crypto, and even come in a variety of formats, from classic “card” shapes to yubikeys.

So why aren’t they used more? It’s the classic issue of usability - the setup for them is undocumented, complex, and hard to discover. Today I hope to change this.

The Goal

To authenticate a user with a smartcard to a physical Linux system, backed by LDAP. The public cert in LDAP is validated, as is the chain to the CA.

You Will Need

I’ll be focusing on the yubikey because that’s what I own.

Preparing the Smartcard

First we need to make the smartcard hold our certificate. Because of a crypto issue in yubikey firmware, it’s best to generate certificates for these externally.

I’ve documented this before in another post, but for accessibility here it is again.

Create an NSS DB, and generate a certificate signing request:

certutil -d . -N -f pwdfile.txt
certutil -d . -R -a -o user.csr -f pwdfile.txt -g 4096 -Z SHA256 -v 24 \
--keyUsage digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment --nsCertType sslClient --extKeyUsage clientAuth \
-s "CN=username,O=Testing,L=example,ST=Queensland,C=AU"

Once the request is signed, and your certificate is in “user.crt”, import this to the database.

certutil -A -d . -f pwdfile.txt -i user.crt -a -n TLS -t ",,"
certutil -A -d . -f pwdfile.txt -i ca.crt -a -n TLS -t "CT,,"

Now export that as a p12 bundle for the yubikey to import.

pk12util -o user.p12 -d . -k pwdfile.txt -n TLS

Now import this to the yubikey - remember to use slot 9a this time! Also make sure you set the touch policy NOW, because you can’t change it later!

yubico-piv-tool -s9a -i user.p12 -K PKCS12 -aimport-key -aimport-certificate -k --touch-policy=always

Setting up your LDAP user

First setup your system to work with LDAP via SSSD. You’ve done that? Good! Now it’s time to get our user ready.

Take our user.crt and convert it to DER:

openssl x509 -inform PEM -outform DER -in user.crt -out user.der

Now you need to transform that into something that LDAP can understand. In the future I’ll be adding a tool to 389-ds to make this “automatic”, but for now you can use python:

>>> import base64
>>> with open('user.der', 'rb') as f:
...     print(base64.b64encode(f.read()).decode())

That should output a long base64 string on one line. Add this to your ldap user with ldapvi:

userCertificate;binary:: <BASE64>

Note that the ‘;binary’ tag has an important meaning here for certificate data, and the ‘::’ tells LDAP that the value is base64 encoded, so it will be decoded on addition.

Setting up the system

Now that you have done that, you need to teach SSSD how to interpret that attribute.

In your various SSSD sections you’ll need to make the following changes:

auth_provider = ldap
ldap_user_certificate = userCertificate;binary

# This controls OCSP checks, you probably want this enabled!
# certificate_verification = no_verification

pam_cert_auth = True

Now the TRICK is letting SSSD know to use certificates. You need to run:

sudo touch /var/lib/sss/pubconf/pam_preauth_available

Without this, SSSD won’t even try to process CCID authentication!

Add your ca.crt to the system trusted CA store for SSSD to verify:

certutil -A -d /etc/pki/nssdb -i ca.crt -n USER_CA -t "CT,,"

Add coolkey to the database so it can find smartcards:

modutil -dbdir /etc/pki/nssdb -add "coolkey" -libfile /usr/lib64/

Check that SSSD can find the certs now:

# sudo /usr/libexec/sssd/p11_child --pre --nssdb=/etc/pki/nssdb
PIN for william
CAC ID Certificate

If you get no output here you are missing something! If this doesn’t work, nothing will!

Finally, you need to tweak PAM to make sure that pam_unix isn’t getting in the way. I use the following configuration.

auth        required      pam_env.so
# This skips pam_unix if the given uid is not local (IE it's from SSSD)
auth        [default=1 ignore=ignore success=ok] pam_localuser.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        sufficient    pam_sss.so prompt_always ignore_unknown_user
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow try_first_pass use_authtok
password    sufficient    pam_sss.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session    optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_sss.so

That’s it! Restart SSSD, and you should be good to go.

Finally, you may find SELinux isn’t allowing authentication. It’s really sad that smartcards don’t work with SELinux out of the box; I have raised a number of bugs about this, but check it just in case.

Happy authentication!

February 26, 2018 02:00 PM

February 22, 2018

Rob Crittenden

Developers should learn to love the IPA lite-server

If you’re trying to debug an issue in a plugin then the lite-server is for you. It has a number of advantages:

  • It runs in-tree which means you don’t need to commit, build code, re-install, etc
  • Nor do you have to resort to directly editing files in /usr/lib/python3.6/*
  • It is very pdb friendly
  • Auto-reloads modified python code
  • It doesn’t run as root

You’ll need two sessions to your IPA master. In one you run the lite-server via:

$ export KRB5CCNAME=~/.ipa/ccache
$ kinit admin
$ make lite-server

In the second we run the client. You’ll need to say which configuration to use:

$ export IPA_CONFDIR=~/.ipa

Now copy the installed configuration there:

$ cp /etc/ipa/default.conf ~/.ipa
$ cp /etc/ipa/ca.crt ~/.ipa

Edit ~/.ipa/default.conf and change the xmlrpc_uri to point at the lite-server.


Now you can run your command locally:

$ kinit admin
$ PYTHONPATH=. python3 ./ipa user-show admin

And if something isn’t working right, stick pdb in ipaserver/plugins/ in the show pre_callback command and re-run (notice that the lite-server picks up the change automatically):

$ PYTHONPATH=. python3 ./ipa user-show admin

And in the lite-server session:

> /home/rcrit/redhat/freeipa/ipaserver/plugins/
-> return dn


by rcritten at February 22, 2018 06:43 PM

February 20, 2018

Rob Crittenden

certmonger D-Bus introspection

I’m looking to do some certificate work related to certmonger and was thinking D-Bus would be a good way to get the data (freeIPA does something similar). The Using D-Bus Introspection blog post was key for me to figure out what certmonger could provide (without digging too much into the code).

I ended up running:

dbus-send --system --dest=org.fedorahosted.certmonger \
--type=method_call --print-reply \
/org/fedorahosted/certmonger \
org.freedesktop.DBus.Introspectable.Introspect
This provided me the list of interfaces I needed. First I started with getting the current requests:

dbus-send --system --dest=org.fedorahosted.certmonger \
--type=method_call --print-reply \
/org/fedorahosted/certmonger \
org.fedorahosted.certmonger.get_requests
Then you can pick or iterate through the requests to get the information you want. Here is how to get the serial number:

dbus-send --system --dest=org.fedorahosted.certmonger \
--type=method_call --print-reply \
/org/fedorahosted/certmonger/requests/Request1 \
org.freedesktop.DBus.Properties.Get \
string:org.fedorahosted.certmonger.request string:serial

You can find a list of possible values in src/tdbus.h

by rcritten at February 20, 2018 11:49 PM

January 29, 2018

Fabiano Fidencio

Fleet Commander!

A really short update!

I've presented a talk about Fleet Commander at DevConf CZ'2018, which basically showcases the current status of the project now that the whole integration with FreeIPA and SSSD is done!

Please, take a look at the presentation and slides.

While preparing this presentation we've found some issues on SSSD side, which already have some PRs opened: #495 and #497.

Also, fc-vagans project has been created to help people to easily test and develop for Fleet Commander.

Hopefully we'll be able to get the SSSD patches merged and backported to Fedora 27. Meanwhile, I'd strongly recommend using fc-vagans, as the patches are present there.

So, give it a try and, please, talk to us (#fleet-commander)!

And ... a similar talk will be given at FOSDEM'2018! Take a look at our DevRoom schedule and join us there!

by (Fabiano Fidêncio) at January 29, 2018 09:11 PM

December 22, 2017

William Brown

Using b43 firmware on Fedora Atomic Workstation

My Macbook Pro has a broadcom b43 wireless chipset. This is notorious for being one of the most annoying wireless adapters on linux. When you first install Fedora you don’t even see “wifi” as an option, and unless you poke around in dmesg, you won’t find out how to enable b43 on your platform.


The b43 driver requires proprietary firmware to be loaded, else the wifi chip will not run. There are a number of steps for this process, found on the linux wireless page. You’ll note that one of the steps is:

export FIRMWARE_INSTALL_DIR="/lib/firmware"
sudo b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" broadcom-wl-5.100.138/linux/wl_apsta.o

So we need to be able to write our extracted firmware to /usr/lib/firmware, and then reboot and our wifi works.

Fedora Atomic Workstation

Atomic WS is similar to atomic server, in that it’s a read-only ostree-based deployment of fedora. This comes with a number of unique challenges and quirks, but for this issue:

sudo touch /usr/lib/firmware/test
/bin/touch: cannot touch '/usr/lib/firmware/test': Read-only file system

So we can’t extract our firmware!

Normally linux also supports reading from /usr/local/lib/firmware (which on atomic IS writeable ...) but for some reason fedora doesn’t allow this path.

Solution: Layered RPMs

Atomic has support for “rpm layering”. On top of the ostree image (which is composed of rpms) you can supply a supplemental list of packages that are “installed” at rpm-ostree update time.

This way you still have an atomic base platform with read-only behaviours, but you gain the ability to customise your system. To achieve this, it must be possible to write to locations in /usr during rpm install.

This means our problem has a simple solution: create a b43 rpm package. Note that you can make this for yourself privately, but you can’t distribute it for legal reasons.

Get setup on atomic to build the packages:

rpm-ostree install rpm-build createrepo

RPM specfile:


%define debug_package %{nil}
Summary: Allow b43 fw to install on ostree installs due to bz1512452
Name: b43-fw
Version: 1.0.0
Release: 1
License: Proprietary, DO NOT DISTRIBUTE BINARY FORMS
URL:
Group: System Environment/Kernel

BuildRequires: b43-fwcutter

%description
Broadcom firmware for b43 chips.

%prep
%setup -q -n broadcom-wl-5.100.138

%build
true

%install
pwd
mkdir -p %{buildroot}/usr/lib/firmware
b43-fwcutter -w %{buildroot}/usr/lib/firmware linux/wl_apsta.o

%files
%defattr(-,root,root,-)
%dir %{_prefix}/lib/firmware/b43
%{_prefix}/lib/firmware/b43/*

%changelog
* Fri Dec 22 2017 William Brown <william at> - 1.0.0
- Initial version

Now you can put this into a folder like so:

mkdir -p ~/rpmbuild/{SPECS,SOURCES}
<editor> ~/rpmbuild/SPECS/b43-fw.spec
wget -O ~/rpmbuild/SOURCES/broadcom-wl-5.100.138.tar.bz2

We are now ready to build!

rpmbuild -bb ~/rpmbuild/SPECS/b43-fw.spec
createrepo ~/rpmbuild/RPMS/x86_64/

Finally, we can install this. Create a yum repos file:

baseurl=file:///home/<YOUR USERNAME HERE>/rpmbuild/RPMS/x86_64
rpm-ostree install b43-fw

Now reboot and enjoy wifi on your Fedora Atomic Macbook Pro!

December 22, 2017 02:00 PM

December 21, 2017

Alexander Bokovoy

FOSDEM 2018 IAM devroom

FOSDEM is one of the largest free software conferences in Europe. It is run by volunteers, for volunteers, and since 2001 it has gathered together more than 8000 people every year. Sure, during the first years there were fewer visitors (I was lucky enough to present at the first FOSDEM and also ran a workshop there) but the atmosphere hasn’t changed and it still has the same classical hacker gathering feeling.

In 2018 FOSDEM will run on the weekend of February 3rd and 4th. Since the event has grown significantly, there are multiple development rooms in addition to the main tracks. Each development room is given a room for Saturday or Sunday (or both). Each development room issues its own call for proposals (CfP), chooses talks for the schedule and runs the event. The FOSDEM crew films and streams all devrooms online for those who couldn’t attend them in real time, but the teams behind the devrooms are what powers the event.

In 2018 there will be 42 devrooms in addition to the main track. Think about it as 43 different conferences happening at the same time; that’s the scale and power of FOSDEM. I’m still impressed by the volunteers who have contributed to FOSDEM’s success long since the original crew of sysadmins of the Free University of Brussels decided to stop working on FOSDEM.

Identity management related topics have always been part of FOSDEM. In 2016 I presented in the main track about our progress with GNOME desktop readiness for enterprise environments, integration with freeIPA and other topics, including a demo of freeIPA and Ipsilon powering authentication for Owncloud and Google Apps. Some of my colleagues ran freeIPA presentations well before that too.

We wanted to have a bit more focused storytelling too. Radovan Semancik tried to organize a devroom in 2016 but it wasn’t accepted. Michael Ströder tried the same in 2017. Getting a devroom proposal accepted always comes with a fair amount of luck, but with FOSDEM 2018 we finally succeeded. I’d like to thank my colleague Fraser Tweedale, who wrote the original proposal draft out of which the Identity and Access Management devroom effort grew.

We tried to keep a balance between the number of talks and the variety of topics presented. We only have 8.5 hours of schedule allocated. With 5-minute intervals between the talks, we were able to accommodate 14 talks out of 25 proposals.

The talks are structured in roughly five categories:

  • Identity and access management for operating systems
  • Application level identity and access management
  • Interoperability issues between POSIX and Active Directory environments
  • Deployment reports for open source identity management solutions
  • Security and cryptography on a system and application level

Admittedly, we’ve got one of the smallest rooms (50 people) allocated, but this is a start. On Saturday, February 3rd, 2018, please come to room UD2.119. And if you can’t be at FOSDEM in person, streaming will be available too.

See you in Brussels!

December 21, 2017 10:35 AM
