FreeIPA Identity Management planet - technical blogs

May 05, 2017

Justin Stephenson

Measuring SSSD performance with SystemTap

This post is intended to provide information about finding SSSD bottlenecks with SystemTap.

One of the most common complaints about SSSD is slowness during login or during NSS commands such as ‘getent’ or ‘id’, especially in large LDAP/Active Directory environments. Log analysis alone can make it difficult to track down the source of the delay, especially with certain configurations (indirect AD integration, for example) where a significant number of backend operations occur during login.

In SSSD 1.14, performance enhancements were made to optimize cache write operations decreasing overall time spent updating the filesystem cache. These bottlenecks were discovered by developers based on userspace probing in certain areas of the SSSD code with SystemTap.

Below are some steps for getting started with SystemTap and SSSD. In this example we will use the recently added high-level Data Provider request probes.

  • First, install the necessary packages mentioned here: Installation and Setup

    • It is not required to install kernel-debuginfo or sssd-debuginfo to run these userspace systemtap scripts.
  • You can now check if the probe markers are available with:

# stap -L 'process("/usr/libexec/sssd/sssd_be").mark("*")'
process("/usr/libexec/sssd/sssd_be").mark("dp_req_done") $arg1:long $arg2:long $arg3:long
process("/usr/libexec/sssd/sssd_be").mark("dp_req_send") $arg1:long $arg2:long

# stap -L 'process("/usr/lib64/sssd/libsss_ldap_common.so").mark("*")' | head
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_acct_req_recv") $arg1:long $arg2:long $arg3:long $arg4:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_acct_req_send") $arg1:long $arg2:long $arg3:long $arg4:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_deref_search_recv") $arg1:long $arg2:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_deref_search_send") $arg1:long $arg2:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_recv") $arg1:long $arg2:long $arg3:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_send") $arg1:long $arg2:long $arg3:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_check_cache_post")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_check_cache_pre")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_deref_process_post")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_deref_process_pre")

  • The existing SystemTap scripts are located in /usr/share/sssd/systemtap. The id_perf.stp script can be used to measure performance specifically with the ‘id’ command, while nested_group_perf.stp generates metrics and useful information about the nested group processing code (a sketch of running id_perf.stp follows the listing below).
# ll /usr/share/sssd/systemtap/
-rw-r--r--. 1 root root 2038 May 4 18:16 dp_request.stp
-rw-r--r--. 1 root root 3854 May 4 13:56 id_perf.stp
-rw-r--r--. 1 root root 8613 May 4 14:44 nested_group_perf.stp
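
For example, a sketch of profiling a single ‘id’ lookup with id_perf.stp, assuming it follows the same run-then-reproduce pattern as dp_request.stp below (the comment header at the top of each script describes its exact usage):

# stap -v /usr/share/sssd/systemtap/id_perf.stp

Then, in a second terminal, reproduce the lookup and interrupt the stap run once it returns:

# id trustuser1@ad.jstephen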

  • Running the dp_request.stp script will track Data Provider requests and report which request took the longest to complete.
# vim /usr/share/sssd/systemtap/dp_request.stp 
/* Start Run with:
* stap -v dp_request.stp
*
* Then reproduce slow login or id/getent in another terminal.
* Ctrl-C running stap once login completes.

# stap -v /usr/share/sssd/systemtap/dp_request.stp
Pass 1: parsed user script and 469 library scripts using 244964virt/45004res/7588shr/37596data kb, in 100usr/20sys/128real ms.
Pass 2: analyzed script: 4 probes, 13 functions, 5 embeds, 11 globals using 246992virt/48356res/8816shr/39624data kb, in 30usr/160sys/396real ms.
Pass 3: using cached /root/.systemtap/cache/d5/stap_d5d7fd869e61741e13b43b7a6932a761_11210.c
Pass 4: using cached /root/.systemtap/cache/d5/stap_d5d7fd869e61741e13b43b7a6932a761_11210.ko
Pass 5: starting run.
*** Beginning run! ***
--> DP Request [Account #1] sent for domain [AD.JSTEPHEN]
DP Request [Account #1] finished with return code [0]: [Success]
Elapsed time [0m8.476s]

--> DP Request [Account #2] sent for domain [idm.jstephen]
DP Request [Account #2] finished with return code [0]: [Success]
Elapsed time [0m0.003s]

--> DP Request [Initgroups #3] sent for domain [AD.JSTEPHEN]
DP Request [Initgroups #3] finished with return code [0]: [Success]
Elapsed time [0m0.115s]

--> DP Request [Account #4] sent for domain [idm.jstephen]
DP Request [Account #4] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #5] sent for domain [idm.jstephen]
DP Request [Account #5] finished with return code [0]: [Success]
Elapsed time [0m0.002s]

--> DP Request [Account #6] sent for domain [idm.jstephen]
DP Request [Account #6] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #7] sent for domain [idm.jstephen]
DP Request [Account #7] finished with return code [0]: [Success]
Elapsed time [0m0.000s]

--> DP Request [Account #8] sent for domain [idm.jstephen]
DP Request [Account #8] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #9] sent for domain [idm.jstephen]
DP Request [Account #9] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

^C
Ending Systemtap Run - Providing Summary
Total Number of DP requests: [9]
Total time in DP requests: [0m8.600s]
Slowest request data:
Request: [Account #1]
Start Time: [Fri May 5 10:47:14 2017 EDT]
End Time: [Fri May 5 10:47:23 2017 EDT]
Duration: [0m8.476s]

Pass 5: run completed in 0usr/40sys/15329real ms.

  • We can see that the Account #1 DP request completed in 8.476 seconds; the Start Time/End Time values provided here can be used to narrow down log analysis, as shown in the log excerpt and the grep sketch that follow.
(Fri May  5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_get_account_info_handler] (0x0200): Got request for [0x1][BE_REQ_USER][name=trustuser1@ad.jstephen]
(Fri May 5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_attach_req] (0x0400): DP Request [Account #1]: New request. Flags [0x0001].
(Fri May 5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_attach_req] (0x0400): Number of active DP request: 1
...
<snip>
...
(Fri May 5 10:47:23 2017) [sssd[be[idm.jstephen]]] [dp_req_done] (0x0400): DP Request [Account #1]: Request handler finished [0]: Success
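
A quick way to pull these request lines out of the domain log is a simple grep (a sketch, assuming the default SSSD log location for the idm.jstephen domain):

# grep 'DP Request \[Account #1\]' /var/log/sssd/sssd_idm.jstephen.log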

  • The existing SystemTap scripts can be modified or new scripts can be created for a certain use-case as long as the existing probes/tapsets in /usr/share/systemtap/tapset/sssd.stp are used.
# LDAP search probes
probe sdap_search_send = process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_send")
{
    base = user_string($arg1);
    scope = $arg2;
    filter = user_string($arg3);

    probestr = sprintf("-> search base [%s] scope [%d] filter [%s]",
                       base, scope, filter);
}

The stap -L command shown previously lists the functions where probes have been added, making these markers available for use in your own scripts.
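
As a minimal sketch, the sdap_search_send alias from the tapset above can be used directly in an ad-hoc one-liner that prints every LDAP search as it is sent (run it while reproducing a lookup in another terminal):

# stap -e 'probe sdap_search_send { println(probestr) }'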

The goal is to add more low-level probes to iterative functions where SSSD spends a lot of time, allowing developers and administrators to analyze performance issues in detail.

by noreply@blogger.com (Justin Stephenson) at May 05, 2017 04:39 PM

April 28, 2017

Alexander Bokovoy

How to debug FreeIPA privilege separation issues

FreeIPA 4.5 has a lot of internal changes. The server side of the FreeIPA framework now runs in a privilege separation mode. This improves the security of FreeIPA management operations but complicates debugging of the server. During the FreeIPA 4.5 development phase Simo Sorce and I spent a lot of time debugging regressions and decided to document how we log events and how to debug server-side operations. As a result, this article details what privilege separation means in the FreeIPA management framework context and how to debug it.

April 28, 2017 07:00 PM

March 21, 2017

Fraser Tweedale

Supporting large key sizes in FreeIPA certificates

A couple of issues around key sizes in FreeIPA certificates have come to my attention this week: how to issue certificates for large key sizes, and how to deploy FreeIPA with a 4096-bit key. In this post I’ll discuss the situation with each of these issues. Though related, they are different issues so I’ll address each separately.

Issuing certificates with large key sizes

While researching the second issue I stumbled across issue #6319: ipa cert-request limits key size to 1024,2048,3072,4096 bits. To wit:

ftweedal% ipa cert-request alice-8192.csr --principal alice
ipa: ERROR: Certificate operation cannot be completed:
  Key Parameters 1024,2048,3072,4096 Not Matched

The solution is straightforward. Each certificate profile configures the key types and sizes that will be accepted by that profile. The default profile is configured to allow up to 4096-bit keys, so the certificate request containing an 8192-bit key fails. The profile configuration parameter involved is:

policyset.<name>.<n>.constraint.params.keyParameters=1024,2048,3072,4096

If you append 8192 to that list and update the profile configuration via ipa certprofile-mod (or create a new profile via ipa certprofile-import), then everything will work!
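
For example, a sketch of adding 8192 to the default profile (the file name here is arbitrary, and it assumes certprofile-mod accepts the updated configuration via --file, just as certprofile-import does):

ftweedal% ipa certprofile-show caIPAserviceCert --out bigkeys.cfg
(edit bigkeys.cfg and append ,8192 to the keyParameters line shown above)
ftweedal% ipa certprofile-mod caIPAserviceCert --file bigkeys.cfg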

Deploying FreeIPA with IPA CA signing key > 2048-bits

When you deploy FreeIPA today, the IPA CA has a 2048-bit RSA key. There is currently no way to change this, but Dogtag does support configuring the key size when spawning a CA instance, so it should not be hard to support this in FreeIPA. I created issue #6790 to track this.

Looking beyond RSA, there is also issue #3951: ECC Support for the CA which concerns supporting an elliptic curve signing key in the FreeIPA CA. Once again, Dogtag supports EC signing algorithms, so supporting this in FreeIPA should be a matter of deciding the ipa-server-install(1) options and mechanically adjusting the pkispawn configuration.

If you have use cases for large signing keys and/or NIST ECC keys or other algorithms, please do not hesitate to leave comments in the issues linked above, or get in touch with the FreeIPA team on the freeipa-users@redhat.com mailing list or #freeipa on Freenode.

by ftweedal at March 21, 2017 12:59 AM

March 15, 2017

Rich Megginson

Elasticsearch Troubleshooting - unassigned_shard and cluster state RED

Problem - unassigned_shards and cluster status RED



Using OpenShift origin-aggregated-logging 1.2 with Elasticsearch 1.5.2, the cluster status is RED.
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  https://localhost:9200/_cluster/health | \
  python -mjson.tool
{
    "active_primary_shards": 12345,
    "active_shards": 12345,
    "cluster_name": "logging-es",
    "initializing_shards": 0,
    "number_of_data_nodes": 3,
    "number_of_nodes": 3,
    "number_of_pending_tasks": 0,
    "relocating_shards": 0,
    "status": "red",
    "timed_out": false,
    "unassigned_shards": 7
}

The problem is the unassigned_shards. We need to identify those shards and
figure out how to deal with them so the cluster can move to yellow or
green.


Solution - identify and delete problematic indices



Use the /_cluster/health?level=indices endpoint to get the status of each index:
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  https://localhost:9200/_cluster/health?level=indices | \
  python -mjson.tool > indices.json

The report will list each index and its state:
"my-index.2017.03.15":{
   "active_primary_shards": 4,
   "active_shards": 4,
   "initializing_shards": 0,
   "number_of_replicas": 0,
   "number_of_shards": 5,
   "relocating_shards": 0,
   "status": "red",
   "unassigned_shards": 1
 },
 ...

Look for records that have "status": "red" and an "unassigned_shards" with
a value of 1 or higher. IF YOU DON’T NEED THE DATA ANYMORE, AND ARE SURE
THAT THIS DATA CAN BE LOST, then it might be easiest to just delete these using
the REST API:
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  -XDELETE https://localhost:9200/my-index.2017.03.15
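
When there are many indices to review, a small helper can list just the red ones from the saved indices.json (a sketch; it assumes the per-index entries appear under the top-level "indices" key, as in the health output above):

python -c 'import json
health = json.load(open("indices.json"))
for name, idx in sorted(health["indices"].items()):
    if idx["status"] == "red":
        print("%s unassigned_shards=%d" % (name, idx["unassigned_shards"]))'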

If you need to recover this data, or deletion is not working, then use the recovery procedure documented at indices recovery.

March 15, 2017 05:45 PM

March 06, 2017

Red Hat Blog

Identity Management Improvements in Red Hat Enterprise Linux 7.3: Part 2

In Part 1 of this series, we looked at core improvements for Identity Management (IdM) in Red Hat Enterprise Linux (RHEL) 7.3, as well as manageability and other improvements. In the second half, we’re going to look at certificate management, interoperability, and Active Directory integration.

Certificate Management

Enriched certificate management is an ongoing theme for several releases.

In the current release we focused on the following use case: assume you issue certificates for different purposes like devices, systems, services, VPNs, switches and so on, using IdM CA. If you have a single CA, all the certificates come from the same trust chain, so administrators have to explicitly limit the scope of the certificates to the environment they are used in to prevent cross pollination and misuse of the certificates issued for one purpose with a different service.

Getting all these access control rules right becomes a really complex task. It would have been much easier if one could just have a dedicated CA for each of the environments. But standing up a separate CA infrastructure is usually an even bigger task. Not anymore! With the SubCA feature one can create a dedicated CA with a couple of commands in seconds.
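
For instance, a dedicated sub-CA for VPN certificates can be created with something like the following (a sketch; the name and subject DN are examples):

# ipa ca-add vpn-ca --subject "CN=VPN CA,O=EXAMPLE.COM" --desc "CA for VPN certificates"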

The second enhancement is the ability to authenticate using smart cards via SSSD and IdM, but this time with added support for Active Directory users coming from a trusted forest. In version 7.2, we introduced the ability to authenticate IdM users using certificates on smart cards when they log into the Linux systems configured with SSSD. This time we added the ability for Active Directory users in trusted forests to use their certificates when they are published to Active Directory or into ID override entries in IdM.

Bringing more and more certificate-related functionality is a multi-release effort. You will see more additions in this area when the next release arrives.

Interoperability

For some time, you have been asking us to make the IdM management API available. We were not confident that it was ready, although we included an API browser in the IdM UI as an experimental feature. Finally, we made a set of changes that enables us to make the API publicly available. We take the commitment to support the IdM management API very seriously and want to make sure there are no issues that would force us to make incompatible changes.

This is why, in the Red Hat Enterprise Linux 7.3 release, we offer a technical preview of the IdM API. We plan to declare full support in one of the future releases. Your feedback and comments will be extremely valuable. To get more information about the IdM management API please read the knowledge base article we published last November on using the technology preview.

Many customers still have legacy UNIX systems and we often get questions about how to integrate these systems into the IdM ecosystem. While IdM can provide authentication via standard LDAP & Kerberos protocols and perform identity lookups via LDAP protocol, the access control capabilities implemented in IdM are not available to those UNIX systems.

To close this gap and provide a single central place for access control management across modern Linux and legacy UNIX systems, a new community project has been launched – pam_hbac. This project offers a PAM module that leverages IdM host-based access control rules. It is currently built for Solaris, FreeBSD and Linux. This module is not included in Red Hat Enterprise Linux as it needs to be installed on other platforms and is not supported by Red Hat but rather by a community of open source developers. If you are interested in this project, please collaborate via GitHub. If you are interested in an AIX version, please contact your IBM representative and open an RFE with IBM.

As I have written some time ago, Red Hat has been working on the identity provider solution to allow federation and SSO for web applications using SAML and OpenID Connect protocols. Earlier this year, Red Hat released a fully supported solution called Red Hat SSO powered by the Keycloak community project. IdM in Red Hat Enterprise Linux 7.3 has been validated as a back end for RH SSO, in parallel to Active Directory and generic LDAP. However, we recommend waiting a bit until the next version of Red Hat SSO is released in several weeks. That version will include a tighter integration between IdP server and IdM/SSSD bringing a better user experience in complex setups.

Active Directory Integration

Clients in AD domains

Some customers that consider deploying IdM with Active Directory trusts face a challenge related to the names of their hosts. If Linux systems are deployed inside the same DNS domain as Active Directory domain controllers, moving to trusts would mean changing hostnames to a different domain when Kerberos SSO is expected to work between all the systems in the environment. In some cases renaming is possible, in some it is really hard. To discuss this issue in detail and suggest some workarounds I put together a separate blog post.

Here I want to mention that while we took a look at what else can be done for this use case, we could not find anything that is in our power to improve in the current situation. Hostnames can remain unchanged, but that would make it impossible to SSH into those hosts leveraging Kerberos-based SSO. If this is acceptable then no name changes would be needed. At that point it becomes a deployment choice and would depend on the constraints and priorities of the specific customer environment.

External trust

The original implementation of IdM to AD trusts implies a full trust between IdM and the whole Active Directory forest. In some cases this is not desirable. Sometimes the users that should be exposed to IdM resources are isolated in a separate domain and it makes sense to have a direct trust with that specific domain rather than with the whole forest. IdM in Red Hat Enterprise Linux 7.3 now has the capability to establish trust with a selected Active Directory domain, rather than with the whole forest.
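
A sketch of establishing such an external trust with a single AD domain (the domain name is an example; check ipa trust-add --help for the exact option names in your version):

# ipa trust-add --type=ad child.ad.example.com --admin Administrator --password --external=true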

UPN Support

Users in Active Directory can have an arbitrary name assigned to them called a User Principal Name (UPN). By default the UPN is constructed automatically from the domain name and user login name, but in some cases it can explicitly be reconfigured. In this case, no assumptions can be made about the UPN – it is just a string that can contain pretty much any value. SSSD was capable of working with arbitrary UPNs in the direct integration scenario but lacked the same flexibility in trust cases. This limitation has been addressed and SSSD can now handle arbitrary UPNs when connected to IdM in a trust setup with AD.

Keytab Renewal

When a system is joined directly to Active Directory as a domain member, it has to adhere to key rotation policies. SSSD is now capable of automatically renewing its Kerberos keys, following the policies defined in Active Directory.

Password Change

In a trust setup, legacy UNIX and Linux systems are connected to a special LDAP compatibility view that exposes merged information between data coming from Active Directory and data stored in IdM. In the current release Active Directory and IdM users authenticating via legacy systems connected to the compatibility tree can change their user password when it expires. Password change via compatibility tree was not possible in the past.

As you can see, in the last release, as in every release before, we have delivered a lot of new identity management capabilities. We would be glad to hear your input on the new and old features as well as your improvement requests. Comments are always welcome! Try it, use it, provide feedback. We are here to listen, build and make your day-to-day life easier.

by Dmitri Pal at March 06, 2017 03:19 PM

March 04, 2017

Fabiano Fidencio

SSSD: {DBus,Socket}-activated responders (2nd try!)

Second time's the charm! :-)


Since the first post about this topic, some improvements have been made in order to fix a bug found and reported by a Debian user (Thanks Stric!).

The fix is part of the SSSD 1.15.1 release and, together with the release, some other robustness improvements have been made! Let's go through the changes ...


Avoid starting the responders before SSSD is up!


I've found out that the NSS responder had been started up before SSSD, which is quite problematic during the boot-up process, as libc does initgroups on pretty much any account (checking all NSS modules, to be precise).

When sss_nss is called, the NSS responder is triggered and tries to talk to the data providers (which are not up yet, as SSSD itself is not up yet ...), causing the boot-up process to hang until libc gives up (which in turn causes timeouts in services like systemd-logind and all the services depending on it).

The fix for this issue looks like:
@@ -1,6 +1,7 @@
[Unit]
Description=SSSD @responder@ Service responder socket
Documentation=man:sssd.conf(5)
+ After=sssd.service
BindsTo=sssd.service

[Socket]

And, as I've been told by systemd developers, "BindsTo=" must always come together with "After=" (although this is not documented yet ...), so this fix has been applied to all responders' unit files.


Avoid starting the responders' sockets before SSSD is up!


We really want (at least for now) to have the responders' sockets completely tied to the SSSD service. We want the responders to be socket-activated only after SSSD is up, and the section just above explains why we want this kind of control.

In order to achieve this, some changes were needed in the socket units, as systemd automatically adds "Before=sockets.target" to any socket unit by default (and sockets.target is started in a really early phase of the boot process).

So I went back to the systemd developers to discuss the best approach to avoid starting the responders' sockets before SSSD is up, and the patch that came out of that discussion looks like:

@@ -3,6 +3,8 @@
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service
+ DefaultDependencies=no
+ Conflicts=shutdown.target

[Socket]
ListenStream=@pipepath@/@responder@

With this change the sockets are no longer started before sockets.target, but only after the SSSD service is started. The downside of this approach is that we have to deal with conflicts on our own, which is the reason "Conflicts=shutdown.target" has been added.


Be more robust against misconfigurations!


Now that we have two completely different ways to manage the services, we have to be robust enough to prevent admins from mixing them up.

So far we have been flexible enough to allow admins to have some of the services started by the monitor, while other services are left to systemd. And that's okay! The problem starts when the monitor has been told to start a responder (by having the responder listed in the services line of sssd.conf) and this very same responder is also supposed to be socket-activated (the admin did systemctl enable sssd-@responder@.socket).

In the situation described above we could end up with two responder processes running (for the very same responder). The best way found to fix this issue is adding a simple program that checks whether the socket-activated responder is also mentioned in the services line of sssd.conf. If it is mentioned there, the socket is simply not started and the whole responsibility is left to the monitor. Otherwise, we take advantage of the systemd machinery!

The change to the socket units looks like:
@@ -7,6 +7,7 @@
Conflicts=shutdown.target

[Socket]
+ ExecStartPre=@libexecdir@/sssd/sssd_check_socket_activated_responders -r @responder@
ListenStream=@pipepath@/@responder@
SocketUser=@SSSD_USER@
SocketGroup=@SSSD_USER@


Also, I've decided to be a little bit stricter and refuse manual start-up of the responders' services; the change for this looks like:
@@ -3,6 +3,7 @@
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service
+ RefuseManualStart=true

[Install]
Also=sssd-@responder@.socket


And how can I start using the socket-activated services?


As we still use the monitor to manage services by default, a small configuration change is needed.

See the example below explaining how to enable the PAM and AutoFS services to be socket-activated.

Considering your /etc/sssd/sssd.conf has something like:

[sssd]
services = nss, pam, autofs
...

Enable PAM and AutoFS responders' sockets:
# systemctl enable sssd-pam.socket
# systemctl enable sssd-autofs.socket

Remove both PAM and AutoFS responders from the services' line, like:

[sssd]
services = nss
...

Restart the SSSD service:
    # systemctl restart sssd.service

    And you're ready to go!
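
    To double-check the result, a quick status query shows whether the sockets are active and whether the responder services themselves have been started yet (a sketch):

    # systemctl is-active sssd-pam.socket sssd-autofs.socket sssd-pam.service sssd-autofs.service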


    Is there any known issue that I should be aware of?


    Yes, there is! You should avoid having the PAC responder, needed by IPA domains, socket-activated for now. The reason is that, due to an ugly hack in the SSSD code, this responder is added to the services list any time an IPA domain is detected.

    Because of this, the service is always started by the monitor and there is nothing that can be done in our socket units to detect this situation and avoid starting up the PAC socket.

    A possible way to fix this issue is patching ipa-client-install to either explicitly add the PAC responder to the services list (in case the admin wants to keep using the monitor) or to enable the PAC responder's socket (in case the admin wants to take advantage of socket-activation).

    Once it's done on IPA side, we would be able to drop the code that enables the PAC responder automatically from SSSD. However, doing this right now would break backwards compatibility!


    Where can I find more info about SSSD?


    More information about SSSD can be found on the project page: https://pagure.io/SSSD/sssd/

    If you want to report us a bug, please, follow this web page and file an issue in the SSSD pagure instance.

    Please, keep in mind that currently we're in the middle of a migration process from FedoraHosted to Pagure and it will take a while to have everything in place, again.

    Even so, you can find more info about SSSD's internals here.

    In case you want to contribute to the project, please, read this webpage and feel free to approach us at #sssd on freenode (irc://irc.freenode.net/sssd).

    by noreply@blogger.com (Fabiano Fidêncio) at March 04, 2017 08:40 PM

    February 20, 2017

    Fraser Tweedale

    Wildcard certificates in FreeIPA

    The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

    In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

    In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

    FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

    Creating a wildcard certificate profile in FreeIPA

    This procedure works on FreeIPA 4.2 (RHEL 7.2) and later.

    First, kinit admin and export an existing service certificate profile configuration to a file:

    ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
    ---------------------------------------------------
    Profile configuration stored in file 'wildcard.cfg'
    ---------------------------------------------------
      Profile ID: caIPAserviceCert
      Profile description: Standard profile for network services
      Store issued certificates: TRUE

    Modify the profile; the minimal diff is:

    --- wildcard.cfg.bak
    +++ wildcard.cfg
    @@ -19 +19 @@
    -policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
    +policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
    @@ -108 +108 @@
    -profileId=caIPAserviceCert
    +profileId=wildcard

    Now import the modified configuration as a new profile called wildcard:

    ftweedal% ipa certprofile-import wildcard \
        --file wildcard.cfg \
        --desc 'Wildcard certificates' \
        --store 1
    ---------------------------
    Imported profile "wildcard"
    ---------------------------
      Profile ID: wildcard
      Profile description: Wildcard certificates
      Store issued certificates: TRUE

    Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

    ftweedal% ipa caacl-add wildcard-hosts
    -----------------------------
    Added CA ACL "wildcard-hosts"
    -----------------------------
      ACL name: wildcard-hosts
      Enabled: TRUE
    
    ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
      Profiles: wildcard
    -------------------------
    Number of members added 1
    -------------------------
    
    ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
      Profiles: wildcard
      Hosts: cloudapps.example.com
    -------------------------
    Number of members added 1
    -------------------------

    An additional step is required in FreeIPA 4.4 (RHEL 7.3) and later (it does not apply to FreeIPA < 4.4):

    ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
    -------------------------
    Number of members added 1
    -------------------------

    Then create a CSR with subject CN=cloudapps.example.com (details omitted), and issue the certificate:

    ftweedal% ipa cert-request my.csr \
        --principal host/cloudapps.example.com \
        --profile wildcard
      Issuing CA: ipa
      Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
      Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
      Issuer: CN=Certificate Authority,O=EXAMPLE.COM
      Not Before: Mon Feb 20 04:21:41 2017 UTC
      Not After: Thu Feb 21 04:21:41 2019 UTC
      Serial number: 11
      Serial number (hex): 0xB

    Alternatively, you can use Certmonger to request the certificate:

    ftweedal% ipa-getcert request \
      -d /etc/httpd/alias -p /etc/httpd/alias/pwdfile.txt \
      -n wildcardCert \
      -T wildcard

    This will request a certificate for the current host. The -T option specifies the profile to use.
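
    To confirm the wildcard ended up in the issued certificate, the certificate can be inspected in the NSS database (a sketch, using the nickname from the request above):

    ftweedal% certutil -L -d /etc/httpd/alias -n wildcardCert | grep "Subject:"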

    Discussion

    Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

    When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

    policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

    When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven’t moved away from wildcard certs).

    In conclusion: you shouldn’t use wildcard certificates, and FreeIPA has no special support for them, but if you really need to, you can do it with a custom certificate profile.

    by ftweedal at February 20, 2017 04:55 AM

    February 06, 2017

    Red Hat Blog

    Identity Management Improvements in Red Hat Enterprise Linux 7.3: Part 1

    Red Hat Enterprise Linux (RHEL) 7.3 has been out for a bit, but have you looked at what we’ve added in the Identity Management area for this release? I’m excited to say, we’ve added quite a bit!

    In the past I have been talking about individual features in Identity Management (IdM) and System Security Services Daemon (SSSD) but this is really not how we prioritize our efforts nowadays. We look at customer requests, community efforts, and market trends and then define themes for the release. So what were these themes for RHEL 7.3?

    Improvements to the Core

    Performance

    As our identity management solution matures, customers start to deploy it in more sophisticated environments with more than fifty thousand systems or users, complex deeply nested group structures, and advanced access control and sudo rules. In such environments, IdM and SSSD were not always meeting performance and scalability expectations. We wanted to correct that. Several efforts in different areas have been launched to make the solution work better for such complex deployments. In our test environment, on a reference VM with 4GB of RAM and 8 cores, we managed to improve:

    • User and group operations with complex group structure – about 3 times faster
    • Kerberos authentication – about 100 times faster
    • Bulk user provisioning – about 20 times faster (relies on disabling memberOf plugin and rebuilding group membership after the bulk operation)

    On the client side, SSSD was slow in processing large objects in the cache, especially big groups with hundreds of members. The problem manifested itself most vividly when users performed the “ls -l” command on a directory with files owned by many different users. SSSD already had a workaround by means of the ignore_group_members option, but that was not enough. The structure of the SSSD cache was significantly reworked, delivering roughly twice the performance compared to the past.

    In addition to that, the underlying directory server includes a new experimental feature called Nunc Stans. The feature solves the problem of thousands of concurrent client connections that have been significantly affecting server performance. The feature is disabled by default. If you are interested in experimenting with this feature please contact your technical account manager to make us aware of your plans.

    There is no limit to perfection so we will continue working on performance and scalability improvements in the follow-up releases.

    DNS Related Enhancements

    One of the limitations that large environments with several datacenters were facing was the inability to limit which subset of servers the clients should prefer to connect to. It was possible to limit the set explicitly by providing the list of preferred servers on the client side, but that required additional configuration steps on every client, which is an administrative overhead.

    A better solution would have been to rely on DNS to identify the servers the client can connect to. But with the original DNS implementation there was no way to associate a set of clients with a set of servers so that clients would not go to the other side of the globe to connect to a server in a remote datacenter.

    The DNS locations feature introduced in the current release solves this problem by allowing administrators to define a set of servers in a datacenter and to affiliate clients with this set of servers. The feature is functionally similar to the Active Directory capability called “sites.” The changes are in the IdM DNS server, so the feature is available in deployments that rely on the DNS server provided by IdM to manage connected Linux clients.
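
    A sketch of the corresponding commands, assuming the FreeIPA 4.4 option names (the location and server names are examples):

    # ipa location-add datacenter-east
    # ipa server-mod server1.example.com --location=datacenter-east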

    Replica Management

    In this release, the replica management area saw multiple significant improvements.

    In the past, managing replicas in IdM was quite a challenge. Each replica only knew about its peers. There was no central place where all topology information was stored. As a result it was really hard to assess the state of the deployment and see which replicas connected to which other replicas. This changed. Now topology information is replicated and every replica in the deployment knows about the whole environment. To see the topology one can use a topology graph. Replication agreements can be added and removed with a mouse click.

    Figure 1: Using Topology Graph to view replica topology

    In addition to topology information, the inventory of the installed components is also available now. In the past it was hard to see which servers have a CA or DNS server deployed. Now with the server roles report in the UI, the administrator can see which servers have which roles in the environment.

    We also changed the replica deployment procedure because it was hard to automate properly. In the past, the expectation was that replicas would be installed by humans who would type the administrative password. When you need to deploy replicas on demand, this does not scale well.

    Efforts to create Puppet scripts or Ansible playbooks for replica deployment also faced the problem of embedding passwords into the body of the module. Keeping in mind that modules and playbooks are usually source controlled and need to be accessed by different people, having highly sensitive passwords in them was an audit nightmare.

    To address this issue, IdM introduced a new replica installation procedure, also called replica promotion. The installer lays out the client bits first. The client registers and gets its identity. The existing master, knowing that a replica is being installed, then elevates the privileges of the client to allow it to convert itself into a replica. This process allows deployment of replicas in a much more dynamic and secure fashion. Existing replication management utilities have been updated in a backward compatible way.

    These replication management improvements are enabled automatically for the new installations. For the existing installations to take advantage of these features one needs to update all participating servers to Red Hat Enterprise Linux 7.3 and then change the domain level setting to 1.
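
    Checking the current domain level and raising it looks like this (a sketch):

    # ipa domainlevel-get
    # ipa domainlevel-set 1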

    Also many customers that are interested in deploying IdM have dozens of remote sites. To accommodate this the limit of supported servers in one deployment was increased from 20 to 60.

    Access Control

    Continuing the trend we started when, together with MIT, we implemented support for two-factor OTP-based authentication over the Kerberos protocol, IdM and SSSD in Red Hat Enterprise Linux 7.3 bring in a new, revolutionary technology. This technology is called “Authentication Indicators.”

    In the past, all tickets created by the Kerberos server were born equal, regardless of what type of authentication was originally used. Now, Authentication Indicators allow tagging the ticket in different ways, depending on whether single-factor or multi-factor authentication was used. This technology enables administrators to control which Kerberized services are available to users depending on the type of authentication. Using Authentication Indicators, one can define a set of hosts and services that require two-factor authentication and let users access other hosts and services with tickets acquired as a result of single-factor authentication.
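
    For example, requiring two-factor authentication for a particular Kerberized service can be expressed by tagging it with the otp indicator (a sketch; the service principal is an example):

    # ipa service-mod HTTP/app.example.com --auth-ind=otp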

    Another improvement that is worth mentioning is the change to how IdM and SSSD communicate SUDO policies. In the past SSSD was able to work only with the traditional SUDO LDAP schema defined by the SUDO project. On the other hand, the schema that IdM uses to store SUDO information is different. It was designed to provide a better user experience and improve manageability. The side effect of this situation was that IdM had to create a special LDAP view to serve SUDO information to the clients including SSSD. This view added performance overhead and complexity to the solution. With the Red Hat Enterprise Linux 7.3 release, SSSD is now capable of working with the internal SUDO schema adopted by IdM. Once the clients are updated to the latest version, the special SUDO view on IdM servers can be disabled, freeing memory and boosting server performance.

    Manageability

    Deploying clients in the cloud requires more flexibility with names that identify a system for Kerberos authentication. In many cases a system has an internal name assigned by a cloud provider and an external name visible outside the cloud. To be able to use multiple names for the same system or service, the Identity Management in Red Hat Enterprise Linux added the ability to define alternative names (Kerberos aliases) via the user interface and command line. With this feature, one can deploy a system in a cloud and use Kerberos to authenticate to the system or service from inside and outside the cloud.
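
    A sketch of adding such an alias to an existing host principal (the names are examples):

    # ipa host-add-principal internal-name.example.com host/external-name.example.com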

    SSSD is growing its responsibilities and it is becoming harder to operate and troubleshoot if something goes wrong. To make administrators’ lives easier, SSSD now comes with a couple of new utilities. One utility allows fine-grained management of the SSSD cache so that the state of the cache can be easily inspected. The tool allows tweaking or removing individual objects and entries in the cache, without removing the cache altogether. Another tool, called sssctl, provides information about SSSD status: whether it is online or not and which servers it is currently communicating with.
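
    For example, checking whether a domain is online and which server it is currently talking to (a sketch; the domain name is an example):

    # sssctl domain-status example.com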

    In addition to the utilities, SSSD’s processing of sssd.conf has been improved. With this enhancement SSSD has a higher chance of automatically detecting typos, missing values and misconfigurations introduced via sssd.conf. The logic is still basic, but the change lays a good foundation for future improvements in this area.

    With better sssd.conf parsing, SSSD also gained the ability to merge several sssd.conf configuration files that augment each other. This is useful when different snippets of the configuration come with different applications that rely on the SSSD service provided by the system. This way applications can augment or extend the main SSSD configuration without explicitly modifying it.

    In Part 2, we’ll look at certificate management, interoperability, and Active Directory integration improvements you’ll find in RHEL 7.3.

    by Dmitri Pal at February 06, 2017 03:00 PM

    Powered by Planet