FreeIPA Identity Management planet - technical blogs

February 20, 2017

Fraser Tweedale

Wildcard certificates in FreeIPA

The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

Creating a wildcard certificate profile in FreeIPA

First, kinit admin and export an existing service certificate profile configuration to a file:

ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
---------------------------------------------------
Profile configuration stored in file 'wildcard.cfg'
---------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Modify the profile; the minimal diff is:

--- wildcard.cfg.bak
+++ wildcard.cfg
@@ -19 +19 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
+policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
@@ -108 +108 @@
-profileId=caIPAserviceCert
+profileId=wildcard

Now import the modified configuration as a new profile called wildcard:

ftweedal% ipa certprofile-import wildcard \
    --file wildcard.cfg \
    --desc 'Wildcard certificates' \
    --store 1
---------------------------
Imported profile "wildcard"
---------------------------
  Profile ID: wildcard
  Profile description: Wildcard certificates
  Store issued certificates: TRUE

Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

ftweedal% ipa caacl-add wildcard-hosts
-----------------------------
Added CA ACL "wildcard-hosts"
-----------------------------
  ACL name: wildcard-hosts
  Enabled: TRUE

ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
  Hosts: cloudapps.example.com
-------------------------
Number of members added 1
-------------------------

Then create a CSR with subject CN=cloudapps.example.com (details omitted), and issue the certificate:

ftweedal% ipa cert-request my.csr \
    --principal host/cloudapps.example.com \
    --profile wildcard
  Issuing CA: ipa
  Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
  Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
  Issuer: CN=Certificate Authority,O=EXAMPLE.COM
  Not Before: Mon Feb 20 04:21:41 2017 UTC
  Not After: Thu Feb 21 04:21:41 2019 UTC
  Serial number: 11
  Serial number (hex): 0xB
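
As an aside, the CSR whose details were omitted above can be produced with any standard tool. A minimal OpenSSL sketch follows; the key size, file names and inclusion of the O=EXAMPLE.COM subject base are illustrative choices, not something the profile mandates:

$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout cloudapps.key \
    -subj '/O=EXAMPLE.COM/CN=cloudapps.example.com' \
    -out my.csr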

Discussion

Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven’t moved away from wildcard certs).

In conclusion: you shouldn’t use wildcard certificates, and FreeIPA has no special support for them, but if you really need to, you can do it with a custom certificate profile.

by ftweedal at February 20, 2017 04:55 AM

February 06, 2017

Red Hat Blog

Identity Management Improvements in Red Hat Enterprise Linux 7.3: Part 1

Red Hat Enterprise Linux (RHEL) 7.3 has been out for a bit, but have you looked at what we’ve added in the Identity Management area for this release? I’m excited to say, we’ve added quite a bit!

In the past I have written about individual features in Identity Management (IdM) and the System Security Services Daemon (SSSD), but this is really not how we prioritize our efforts nowadays. We look at customer requests, community efforts, and market trends and then define themes for the release. So what were these themes for RHEL 7.3?

Improvements to the Core

Performance

As our identity management solution matures, customers start to deploy it in more sophisticated environments with more than fifty thousand systems or users, complex deeply nested group structures, and advanced access control and sudo rules. In such environments, IdM and SSSD were not always meeting performance and scalability expectations. We wanted to correct that. Several efforts in different areas have been launched to make the solution work better for such complex deployments. In our test environment, on a reference VM with 4GB of RAM and 8 cores, we managed to improve:

  • User and group operations with complex group structure – about 3 times faster
  • Kerberos authentication – about 100 times faster
  • Bulk user provisioning – about 20 times faster (relies on disabling memberOf plugin and rebuilding group membership after the bulk operation)

On the client side, SSSD was slow in processing large objects in the cache, especially big groups with hundreds of members. The problem manifested itself most vividly when users ran the “ls -l” command on a directory with files owned by many different users. SSSD already had a workaround by means of the ignore_group_members option, but that was not enough. The structure of the SSSD cache was significantly reworked, roughly doubling performance compared to the past.

In addition to that, the underlying directory server includes a new experimental feature called Nunc Stans. The feature solves the problem of thousands of concurrent client connections that have been significantly affecting server performance. The feature is disabled by default. If you are interested in experimenting with this feature please contact your technical account manager to make us aware of your plans.

There is no limit to perfection so we will continue working on performance and scalability improvements in the follow-up releases.

DNS Related Enhancements

One of the limitations that large environments with several datacenters were facing was the inability to limit which subset of servers the clients should prefer to connect to. It was possible to limit the set explicitly by providing the list of the preferred servers on the client side, but that required additional configuration steps on every client, which is an administrative overhead.

A better solution would have been to rely on DNS to identify the servers the client can connect to. But with the original DNS implementation there was no way to associate a set of clients with a set of servers so that clients would not go to the other side of the globe to connect to a server in a remote datacenter.

The DNS locations feature introduced in the current release solves this problem by allowing administrators to define a set of servers in the datacenter and to affiliate clients with this set of servers. The feature is functionally similar to the Active Directory capability called “sites.” The changes are in the IdM DNS server, so the feature is available in deployments that rely on the DNS server provided by IdM to manage connected Linux clients.

Replica Management

In this release, the replica management area saw multiple significant improvements.

In the past, managing replicas in IdM was quite a challenge. Each replica only knew about its peers. There was no central place where all topology information was stored. As a result it was really hard to assess the state of the deployment and see which replicas connected to which other replicas. This changed. Now topology information is replicated and every replica in the deployment knows about the whole environment. To see the topology one can use a topology graph. Replication agreements can be added and removed with a mouse click.

Figure 1: Using Topology Graph to view replica topology

In addition to topology information, the inventory of the installed components is also available now. In the past it was hard to see which servers have a CA or DNS server deployed. Now with the server roles report in the UI, the administrator can see which servers have which roles in the environment.

We also changed the replica deployment procedure because it was hard to automate properly. In the past, the expectation was that replicas would be installed by humans who would type the administrative password. When you need to deploy replicas on demand this does not scale well.

Efforts to create Puppet scripts or Ansible playbooks for replica deployment also faced the problem of embedding passwords into the body of the module. Keeping in mind that modules and playbooks are usually source controlled and need to be accessed by different people, having highly sensitive passwords in them was an audit nightmare.

To address this issue, IdM introduced a new replica installation procedure, also called replica promotion. The installer lays out the client bits first. The client registers and gets its identity. The existing master, knowing that a replica is being installed, elevates the privileges of the client to allow the client to convert itself to a replica. This process allows deployment of replicas in a much more dynamic and secure fashion. Existing replication management utilities have been updated in a backward compatible way.

These replication management improvements are enabled automatically for the new installations. For the existing installations to take advantage of these features one needs to update all participating servers to Red Hat Enterprise Linux 7.3 and then change the domain level setting to 1.
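
For example, once all masters run Red Hat Enterprise Linux 7.3, raising the domain level is a single command (a sketch; the output formatting may differ slightly between versions):

$ kinit admin
$ ipa domainlevel-set 1
-----------------------
Current domain level: 1
-----------------------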

Also, many customers that are interested in deploying IdM have dozens of remote sites. To accommodate this, the limit of supported servers in one deployment was increased from 20 to 60.

Access Control

Continuing the trend that started when, together with MIT, we implemented support for two-factor OTP-based authentication over the Kerberos protocol, IdM and SSSD in Red Hat Enterprise Linux 7.3 bring in a new, revolutionary technology. This technology is called “Authentication Indicators.”

In the past, all tickets created by the Kerberos server were born equal, regardless of what type of authentication was originally used. Now, Authentication Indicators allow tagging the ticket in different ways, depending on whether single-factor or multi-factor authentication is used. This technology enables administrators to control which kerberized services are available to users depending on the type of the authentication. Using Authentication Indicators, one can define a set of hosts and services that require two-factor authentication and let users access other hosts and services with tickets acquired as a result of single-factor authentication.
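
As a sketch of what this looks like in practice (the service name here is illustrative), an administrator could require OTP-based two-factor authentication for a specific kerberized service with the --auth-ind option:

$ ipa service-mod HTTP/secure.example.com --auth-ind=otp

The KDC will then refuse to issue tickets for that service unless the user's ticket carries the otp indicator.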

Another improvement that is worth mentioning is the change to how IdM and SSSD communicate SUDO policies. In the past SSSD was able to work only with the traditional SUDO LDAP schema defined by the SUDO project. On the other hand, the schema that IdM uses to store SUDO information is different. It was designed to provide a better user experience and improve manageability. The side effect of this situation was that IdM had to create a special LDAP view to serve SUDO information to the clients including SSSD. This view added performance overhead and complexity to the solution. With the Red Hat Enterprise Linux 7.3 release, SSSD is now capable of working with the internal SUDO schema adopted by IdM. Once the clients are updated to the latest version, the special SUDO view on IdM servers can be disabled, freeing memory and boosting server performance.
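
On the client side this is a matter of SSSD configuration; a minimal sketch of an sssd.conf fragment that uses the native IPA sudo provider might look like this (the domain name is illustrative):

[sssd]
services = nss, pam, sudo
domains = example.com

[domain/example.com]
id_provider = ipa
# Read sudo rules directly using the native IPA schema
sudo_provider = ipa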

Manageability

Deploying clients in the cloud requires more flexibility with names that identify a system for Kerberos authentication. In many cases a system has an internal name assigned by a cloud provider and an external name visible outside the cloud. To be able to use multiple names for the same system or service, Identity Management in Red Hat Enterprise Linux added the ability to define alternative names (Kerberos aliases) via the user interface and command line. With this feature, one can deploy a system in a cloud and use Kerberos to authenticate to the system or service from inside and outside the cloud.
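
For instance, a host enrolled under its internal cloud name could be given an additional principal for its externally visible name (the names here are illustrative):

$ ipa host-add-principal web1.internal.example.com host/www.example.com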

SSSD is growing its responsibilities, and it is becoming harder to operate and troubleshoot if something goes wrong. To make administrators’ lives easier, SSSD is now accompanied by a couple of new utilities. One utility allows fine-grained management of the SSSD cache so that the state of the cache can be easily inspected. The tool allows tweaking or removing individual objects and entries in the cache, without removing the cache altogether. Another tool, called sssctl, provides information about SSSD status: whether it is online or not and what servers it is currently communicating with.
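
For example (a sketch; the exact subcommands and output vary by SSSD version), sssctl can report whether a domain is online:

$ sssctl domain-list
example.com
$ sssctl domain-status example.com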

In addition to the utilities, SSSD’s processing of sssd.conf has been improved. With this enhancement, SSSD has a higher chance of automatically detecting typos, missing values, and misconfigurations introduced via sssd.conf. The logic is still basic, but the change lays a good foundation for future improvements in this area.

With better sssd.conf parsing, SSSD also gained the ability to merge several sssd.conf configuration files that augment each other. This is useful when different snippets of the configuration come with different applications that rely on the SSSD service provided by the system. This way applications can augment or extend the main SSSD configuration without explicitly modifying it.
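
As a hypothetical example, an application could drop a snippet into /etc/sssd/conf.d/ that augments the main configuration without touching /etc/sssd/sssd.conf (the file name and option values here are illustrative; snippet files must end in .conf and carry the same strict permissions as sssd.conf):

# /etc/sssd/conf.d/10-myapp.conf -- shipped by a hypothetical application
[ifp]
# Allow the application's service user to query the infopipe responder
allowed_uids = root, myapp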

In Part 2, we’ll look at certificate management, interoperability, and Active Directory integration improvements you’ll find in RHEL 7.3.

by Dmitri Pal at February 06, 2017 03:00 PM

January 31, 2017

Fabiano Fidencio

SSSD: {DBus,Socket}-activated responders!

Since its 1.15.0 release, SSSD takes advantage of systemd machinery and introduces a new way to deal with the responders.

Previously, in order to have a responder initialized, the admin had to add the specific responder to the "services" line in the sssd.conf file, which makes sense for the responders that are often used but not for those that are rarely used (such as the infopipe and PAC responders).

This old way is still preserved (at least for now) and this new release is fully backwards-compatible with the old config file.

For this new release, however, adding responders to the "services" line isn't needed anymore, as the admin can easily enable any of the responders' sockets, and those will be {dbus,socket}-activated on demand and will stay up while they are still being used. In case a responder becomes idle, it will automatically shut itself down after a configurable amount of time.

The sockets we've created are: sssd-autofs.socket, sssd-nss.socket, sssd-pac.socket, sssd-pam.socket (and sssd-pam-priv.socket, but you don't have to worry about this one), sssd-ssh.socket and sssd-sudo.socket. As an example, considering the admins want to enable the sockets for both NSS and PAM responders, they should do: `systemctl enable sssd-pam.socket sssd-nss.socket` and voilà!

In some cases the admins may also want to set the "responder_idle_timeout" option added for each of the responders in order to tweak how long the responder will keep running after it becomes idle. Setting this option to 0 (zero) disables the responder_idle_timeout. For more details, please check the sssd.conf man page.
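
For instance, to have the NSS responder shut itself down after ten minutes of inactivity, one could add to sssd.conf (the value is illustrative):

[nss]
responder_idle_timeout = 600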

For this release we've taken a more conservative path and are leaving it up to the admins to enable the sockets for the services they want, in case they would like to try using {dbus,socket}-activated responders.

It's also important to note that, until the SELinux policies in your distro are updated, you may need to have SELinux in permissive mode in order to test/use the {dbus,socket}-activated responders. A bug for this is already filed for Fedora and hopefully will be fixed before the new package is included in the distro.

And the changes in the code were (a high-level explanation) ...

Before this work the monitor was the piece of code responsible for handling the responders listed in the services' line of sssd.conf file. And by handling I mean:

  • Gets the list of services to be started (and, consequently, the total number of services);
  • For each service:
    • Gets the service configuration;
    • Starts the service;
    • Adds the service to the services' list;
    • Once the service is up, a dbus message is sent to the monitor, which ...
      • Sets up the sbus* connection to communicate with the service;
      • Marks the service as started;

Now, the monitor does (considering an empty services' line):

  • Once the service is up, a dbus message is sent to the monitor;
    • The number of services is increased;
    • Gets the service configuration;
    • Adds the service to the services' list
    • Sets up the sbus connection to communicate with the service;
    • Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed;
    • Marks the service as started;

By looking at those two different processes done by the monitor, some of you may have noticed an extra step when the service has been {dbus,socket}-activated that wasn't needed at all before. Yep, "Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed" is a completely new thing: previously, the services were only shut down when SSSD was shut down, whereas now the services are shut down when they become idle.

So, what's basically done now is:
 - Once there's no communication to the service, its (sbus) connection with the monitor is closed;
 - Closing the (sbus) connection triggers the following actions:
    - The number of services is decreased;
    - The connection destructor is unset (otherwise it would be called again after the service has been freed);
    - Service is shut down.

*sbus: SSSD uses dbus protocol over a private socket to handle its internal communication, so the services do not talk over system bus.

And what do the unit files look like?

SSSD has 7 services: autofs, ifp, nss, pac, pam, ssh and sudo. Of those 7 services, 4 have pretty much these unit files:

AutoFS, PAC, SSH and Sudo unit files:


sssd-$responder.service:
[Unit]
Description=SSSD $(responder) Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-$responder.socket

[Service]
ExecStartPre=-/bin/chown $sssd_user:$sssd_user /var/log/sssd/sssd_$responder.log
ExecStart=/usr/libexec/sssd/sssd_$responder --debug-to-files --socket-activated
Restart=on-failure
User=$sssd_user
Group=$sssd_user
PermissionsStartOnly=true

sssd-$responder.socket:
[Unit]
Description=SSSD $(responder) Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service

[Socket]
ListenStream=/var/lib/sss/pipes/$responder
SocketUser=$sssd_user
SocketGroup=$sssd_user

[Install]
WantedBy=sssd.service


And about the different ones? We will get there ... and also explain why they are different.

The infopipe (ifp) unit file:

As the infopipe won't be socket-activated, it doesn't have a respective .socket unit.
Also, unlike the other responders, the infopipe responder can currently only be run as root.
In the end, its .service unit looks like:

sssd-ifp.service:
[Unit]
Description=SSSD IFP Service responder
Documentation=man:sssd-ifp(5)
After=sssd.service
BindsTo=sssd.service

[Service]
Type=dbus
BusName=org.freedesktop.sssd.infopipe
ExecStart=/usr/libexec/sssd/sssd_ifp --uid 0 --gid 0 --debug-to-files --dbus-activated
Restart=on-failure

The PAM unit files:

The main difference between the PAM responder and the others is that PAM has two sockets that can end up socket-activating its service. Also, these sockets have special permissions.
In the end, its unit files look like:

sssd-pam.service:
[Unit]
Description=SSSD PAM Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-pam.socket sssd-pam-priv.socket

[Service]
ExecStartPre=-/bin/chown $sssd_user:$sssd_user @logpath@/sssd_pam.log
ExecStart=@libexecdir@/sssd/sssd_pam --debug-to-files --socket-activated
Restart=on-failure
User=$sssd_user
Group=$sssd_user
PermissionsStartOnly=true

sssd-pam.socket:
[Unit]
Description=SSSD PAM Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service
BindsTo=sssd-pam-priv.socket

[Socket]
ListenStream=@pipepath@/pam
SocketUser=root
SocketGroup=root

[Install]
WantedBy=sssd.service

sssd-pam-priv.socket:
[Unit]
Description=SSSD PAM Service responder private socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service
BindsTo=sssd-pam.socket

[Socket]
Service=sssd-pam.service
ListenStream=@pipepath@/private/pam
SocketUser=root
SocketGroup=root
SocketMode=0600

[Install]
WantedBy=sssd.service

The NSS unit files:

The NSS responder was the trickiest one to get working properly, mainly because when socket-activated it has to run as root.
The reason behind this is that systemd calls getpwnam() and getgrnam() when "User="/"Group=" are set to something other than root. By doing this, libc ends up querying for $sssd_user, trying to talk to the NSS responder, which is not up yet, and then the clients would end up hanging for a few minutes (due to our default_client_timeout), which is something we really want to avoid.

In the end, its unit files look like:

sssd-nss.service:
[Unit]
Description=SSSD NSS Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-nss.socket

[Service]
ExecStartPre=-/bin/chown root:root @logpath@/sssd_nss.log
ExecStart=@libexecdir@/sssd/sssd_nss --debug-to-files --socket-activated
Restart=on-failure

sssd-nss.socket:
[Unit]
Description=SSSD NSS Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service

[Socket]
ListenStream=@pipepath@/nss
SocketUser=$sssd_user
SocketGroup=$sssd_user

All the services' units have a "BindsTo=sssd.service" in order to ensure that the service will be stopped when sssd.service is stopped, so if SSSD is shut down or restarted, those actions will be propagated to the responders as well.

Similarly to "BindsTo=ssssd.service" there's "WantedBy=sssd.service" in every socket unit and it's there to ensure that, once the socket is enabled it will be automatically started by SSSD when SSSD is started.

And those are pretty much all the changes covered by this work.

I really have to say a big thank you to ...

  • Lukas Nykryn and Michal Sekletar, who patiently reviewed the unit files we're using and gave me a lot of good tips while doing this work;
  • Sumit Bose who helped me to find out the issue with the NSS responder when trying to run it as a non-privileged user;
  • Jakub Hrozek, Lukas Slebodnik and Pavel Brezina for reviewing and helping me to find bugs, crashes, regressions that fortunately were avoided.

And what's next?

There's already a patch making the {dbus,socket}-activatable responders automatically enabled when SSSD starts, which changes our approach from having to explicitly enable the sockets in order to take advantage of this work to explicitly disabling (actually, masking) the sockets of the processes that shouldn't be {dbus,socket}-activated.

Also, a bigger work item for the future is to have the providers be socket-activated as well, but that is material for a different blog post. ;-)

Nice, nice. But I'm having issues with what you've described!

In case it happens to you, please keep in mind that the recommended way to diagnose any issues would be:

  • Inspecting sssd.conf in order to check which are the explicitly activated responders in the services' line;
  • `systemctl status sssd.service`;
  • `systemctl status sssd-$responder.service` (for the {dbus,socket}-activated ones);
  • `journalctl -u sssd.service`;
  • `journalctl -u sssd-$responder.service` (for the {dbus,socket}-activated ones);
  • `journalctl -br`;
  • Checking the SSSD debug logs in order to see whether the SSSD sockets were communicated with.

by noreply@blogger.com (Fabiano Fidêncio) at January 31, 2017 04:27 PM

January 24, 2017

Red Hat Blog

PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data

This is my last post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement ten (i.e. the requirement to track and monitor all access to network resources and cardholder data). The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Requirement ten focuses on audit and monitoring. Many components of an IdM-based solution, including client components like SSSD and certmonger, generate a detailed audit trail about authentication and user activity. Linux systems have an audit subsystem and all critical authentication and access related events are sent there. One can then use different technologies (or third party software) to collect and centralize these audit trails. Red Hat is working to provide a log collection, aggregation, and correlation solution across different components and products in the Red Hat portfolio. This is an ongoing effort and I plan to write about it (in the future) when there is more to show. This solution is expected to become a foundation for another offering that allows for capturing, centralizing, and correlating recorded user sessions. A demo of this session recording technology is available here. The working plan is to allow for not only the recording and playback of captured sessions but also correlation with an audit trail from the same system – enabling full introspection into the user activity on the system.

Questions about how Identity Management relates to requirement ten? Did you enjoy this series and/or find it to be useful?  I encourage you to reach out using the comments section (below).

by Dmitri Pal at January 24, 2017 06:15 PM

December 09, 2016

Jakub Hrozek

Restrict the set of groups the user is a member of with SSSD

Written by Jakub Hrozek and Sumit Bose

One of the frequent requests we receive for a future enhancement of the SSSD project is one where admins would like to restrict the list of groups users on a host are a member of. We currently don’t offer this functionality, and in this blog post I would like to explain why it’s not a completely trivial thing to implement correctly (and therefore why it’s not implemented already) and offer a workaround that can be used for many environments, as long as the administrator understands its limitations.

The administrator typically wants to restrict the list of groups the users are a member of on the host for either (or both) of these reasons:

  • to limit access to resources beyond what the users are normally privileged to access
  • to provide a performance boost where sssd wouldn’t even attempt to resolve groups that are not listed in the configured group list

The vast majority of administrators are actually asking to improve performance, especially in High Performance Computing environments where users are part of a huge domain, but the client machine is only interested in a subset of the groups that give access to some resource. At the same time, avoiding the group resolution completely is significantly less trivial than the first case where SSSD would resolve all the groups and then only present a subset to the system.

Resolving the list of groups the user is a member of is a complex task, which can query different resources or interfaces depending on how the user logs in or what kind of command the administrator invokes. In the simplest case, where SSSD is connected to a generic LDAP server and the admin calls the “id” utility, SSSD would search the LDAP directory for groups the user is a member of. This scenario is actually possible to restrict already (and we’ll show how later in the post), but there are more ways to resolve a user’s group memberships. For example, in an Active Directory domain, the group membership might come from the PAC data blob attached to the Kerberos ticket acquired during authentication, in the form of a list of SIDs which must be resolved into either a POSIX ID or a name before they can be filtered later – and this SID-to-name/SID-to-ID resolution would negate any performance benefit in most cases.

Similarly, when an IPA client is resolving groups for an AD user from a trusted directory, it would ask one of the IPA masters for a list of groups the user belongs to. These two examples hopefully illustrate that there are many code paths to consider to make this feature work as expected in all possible scenarios. While implementing this feature is something we have on our roadmap, it will take time to get there.

With that said, SSSD already provides a way to only include certain groups as long as only the generic LDAP lookups are used to resolve group memberships. As an example, we’ll illustrate how to restrict the client to only search and return two groups. We’ll start with a setup where SSSD is connected to an Active Directory domain and returns the full list of group memberships, for example:

$ id sssduser@win.trust.test
uid=679801116(sssduser@win.trust.test) gid=679800513(domain users@win.trust.test) groups=679800513(domain users@win.trust.test),679801117(global_group@win.trust.test),679801105(sudogroup@win.trust.test),679801118(universal_group@win.trust.test)

The client uses a fairly default configuration where the domain section of sssd.conf looks like this:

[domain/win.trust.test]
id_provider = ad
access_provider = ad
ad_domain = win.trust.test
krb5_realm = WIN.TRUST.TEST
realmd_tags = manages-system joined-with-adcli

Let’s say that in our environment, we only care about the “sudogroup” so that the user can elevate her privileges with sudo and the “Domain Users” group to access shared resources.

The first step is to make the client look up the groups the user is a member of using plain LDAP lookups instead of looking up the AD-specific tokenGroups attribute. Normally, if all groups are to be returned, using the tokenGroups attribute provides a significant performance benefit, because the list of all groups the user is a member of can be returned with a single BASE-scoped search of the user entry. However, the tokenGroups attribute is a multi-valued list of SIDs the user is a member of and, as said earlier, all the SIDs would have to be resolved into group names anyway. Therefore we disable the tokengroups support in the [domain] section of the sssd.conf config file by adding this parameter:

ldap_use_tokengroups = false

Then, we need to instruct SSSD to only look for the two groups we care about on the client. We include the names of the two groups as an extension of the LDAP group search base, which can optionally also include the scope and the filter to search with:

ldap_group_search_base = CN=Users,DC=win,DC=trust,DC=test?sub?(|(cn=domain users)(cn=sudogroup))

Please note that the group search base is not required to be set in the typical case, because SSSD infers its value from the domain name of the AD domain it is joined to.

Finally, we restart the SSSD service:

# systemctl restart sssd

Make sure we pull data from AD and not the cache on the next lookup:

# sss_cache -E

And resolve the user entry again:

$ id sssduser@win.trust.test
uid=679801116(sssduser@win.trust.test) gid=679800513(domain users@win.trust.test) groups=679800513(domain users@win.trust.test),679801105(sudogroup@win.trust.test)

This lookup only includes the domain users group and the sudogroup. Inspecting the SSSD debug logs would show that the client only attempted to fetch group names that include the “domain users” or the “sudogroup”.

This configuration workaround works only if the client looks up the entries in an LDAP tree directly – so it wouldn’t work in cases where an IPA client is looking up groups of a user who comes from a trusted AD domain, or in the case where the user logs in without a password using her Kerberos ticket from a Windows machine and expects SSSD to read the group memberships from the PAC blob attached to the Kerberos ticket. Similarly, this workaround only works for the joined domain, because at the moment (at least until the trusted domains can be made configurable), the trusted domains only use the default search base. These cases will be covered when the upstream ticket https://fedorahosted.org/sssd/ticket/3249 is implemented in one of the future SSSD releases.

We also believe that, since this feature is often asked for performance reasons, we should focus on improving SSSD performance in the general case instead of providing workarounds.


by jhrozek at December 09, 2016 12:12 PM

December 07, 2016

Rich Megginson

Monitoring Fluentd and the Elasticsearch output plugin

Fluentd has a monitor input plugin: http://docs.fluentd.org/articles/monitoring

Unfortunately, the documentation is pretty scant, and some of the useful, interesting endpoints and options are not documented. I've captured some of that missing information below, and shown how it can be used to monitor the Elasticsearch output plugin.
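
For context, the monitor endpoint queried below is provided by a monitor_agent source in the fluentd configuration; the bind address and port here match the config echoed back in the /api/plugins.json output later in this post:

<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>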

Endpoints

/api/plugins

Provides information about each plugin in a text based columnar format:
$ curl -s http://localhost:24220/api/plugins
plugin_id:object:1dce4b0        plugin_category:input   type:monitor_agent      output_plugin:false     retry_count:
plugin_id:object:11b4120        plugin_category:input   type:systemd    output_plugin:false     retry_count:
plugin_id:object:19fb914        plugin_category:output  type:rewrite_tag_filter output_plugin:true      retry_count:
...

/api/plugins.json

Same as /api/plugins except in JSON format:
$ curl -s http://localhost:24220/api/plugins.json | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        },
...

/api/config

Provides basic fluentd configuration information in text format:
$ curl -s http://localhost:24220/api/config
pid:19  ppid:1  config_path:/etc/fluent/fluent.conf     pid_file:       plugin_dirs:["/etc/fluent/plugin"]      log_path:

/api/config.json

Provides basic fluentd configuration information in JSON format:
$ curl -s http://localhost:24220/api/config.json | python -mjson.tool
{
    "config_path": "/etc/fluent/fluent.conf",
    "log_path": null,
    "pid": 19,
    "pid_file": null,
    "plugin_dirs": [
        "/etc/fluent/plugin"
    ],
    "ppid": 1
}

Query String Options

debug

For plugins, this will print all of the instance variables:
$ curl -s http://localhost:24220/api/plugins.json\?debug=1 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "instance_variables": {
                "bind": "0.0.0.0",
                "emit_config": false,
                "emit_interval": 60,
...

@type

Search for plugin by @type:
$ curl -s http://localhost:24220/api/plugins.json\?@type=monitor_agent | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

@id

Search for plugin by @id. For example, in the above output, there is "plugin_id": "object:1dce4b0". Once you have identified the id, you can use that to display only the information for that particular id:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1dce4b0 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

tag

Match the tag and get the info from the matched output plugin. Only works on output plugins. I unfortunately don't have an example, but I suppose you could use something like this to find the output plugins which have a match block which has a match for **_sendtoforwarder_**:
$ curl -s http://localhost:24220/api/plugins.json\?tag=prefix_sendtoforwarder_suffix | python -mjson.tool
{
    "plugins": [
        {
...

Debugging the Fluentd Elasticsearch plugin


First, identify the output plugin in question to get the plugin id:
$ curl -s http://localhost:24220/api/plugins.json\?@type=elasticsearch_dynamic | python -mjson.tool
{
    "plugins": [
        {
            "buffer_queue_length": 0,
            "buffer_total_queued_size": 0,
            "config": {
                "@type": "elasticsearch_dynamic",
...
                "index_name": ".operations.${record['@timestamp'].nil? ? Time.at
(time).getutc.strftime(@logstash_dateformat) : Time.parse(record['@timestamp']).
getutc.strftime(@logstash_dateformat)}",
...
            "plugin_id": "object:1b4cc64",
...

This is the one I'm looking for, which has a plugin id of object:1b4cc64. Next, I can use the @id parameter in conjunction with the debug one to get some interesting statistics:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
  python -mjson.tool | \
  egrep 'buffer_total_queued_size|emit_count'
            "buffer_total_queued_size": 0,
                "emit_count": 3164,

I can even put this in a simple loop to see how the queue size and emit count change over time:
$ while true ; do
  date
  curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
    python -mjson.tool | egrep 'buffer_total_queued_size|emit_count'
  sleep 1
done
Wed Dec  7 23:56:18 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3318,
Wed Dec  7 23:56:21 UTC 2016
            "buffer_total_queued_size": 1654,
                "emit_count": 3322,
Wed Dec  7 23:56:23 UTC 2016
            "buffer_total_queued_size": 2146,
                "emit_count": 3324,
Wed Dec  7 23:56:25 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3326,

This tells me that the plugin is working, the queues are being flushed regularly, and the emit count (roughly, the number of times fluentd flushes the queued outputs, the number of times a request is made to Elasticsearch) is steadily increasing.

December 07, 2016 11:58 PM

December 06, 2016

Red Hat Blog

PCI Series: Requirement 8 – Identify and Authenticate Access to System Components

This post continues my series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS).  This specific post is related to requirement eight (i.e. the requirement to identify and authenticate access to system components). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Requirement eight is directly related to IdM. IdM can be used to address most of the requirements in this section. IdM stores user accounts, provides user account life-cycle management (from creation to termination), and controls the different types of credentials that users can use to authenticate (e.g. passwords, certificates, and one-time-password tokens); it also defines policies related to a number of associated credentials (e.g. password complexity, strength, and expiration policies or account lockout and retry policies). The details about these capabilities can be found in different chapters of the Linux Domain Identity, Authentication, and Policy Guide.

Requirement 8.3 explicitly calls for multi-factor authentication. IdM has integrated support for open standard OTP tokens (e.g. Yubikey, FreeOTP, and Google Authenticator) and can also leverage existing authentication systems like, for example, RSA Authentication Manager. IdM can even be used as a back-end for RADIUS/TACACS or for a VPN server – allowing 2FA for remote access into a given network.

Questions about how Identity Management relates to requirement eight? Reach out using the comments section (below).

by Dmitri Pal at December 06, 2016 11:00 PM

Florence Blanc-Renaud

Using Certmonger to track certificates

When FreeIPA is installed with an integrated IdM CA, it uses certmonger to track and renew its certificates. But what exactly does this mean?

When the certificates are reaching their expiration date, certmonger detects that it needs to renew them and takes care of the renewal (requesting a renewed certificate, installing the new certificate in the right location, and finally restarting the service so that it picks up the new certificate). It means that the system administrator does not need to bother with renewals anymore!

Well… when everything works well, it really is a great feature. But sometimes a small problem can prevent the renewal, and FreeIPA ends up with expired certificates and HTTP or LDAP services refusing to start. In this case, it is really difficult to understand what has gone wrong and how to fix the issue.

In this post, I will explain what is happening behind the scene with certmonger, so that you understand where to look for if you need to troubleshoot.

Certmonger concepts

Certmonger daemon and CLI

Certmonger provides 2 main components:

  • the certmonger daemon that is the “engine” tracking the list of certificates and launching renewal commands
  • the command-line interface, getcert, that allows sending commands to the certmonger daemon (for instance requesting a new certificate, listing the tracked certificates, starting or stopping the tracking of a certificate, renewing a certificate…)

Certificate Authority

Certmonger provides a generic interface allowing it to communicate with various certificate systems, such as Dogtag and FreeIPA. A simple definition of a Certificate System would be a software solution able to deliver certificates. This makes it possible to use the same certmonger command independently of the Certificate System that will actually handle the request. The getcert command just reads the additional argument -c to know which Certificate Authority to interface with.

Then certmonger needs to know how to interface with each type of Certificate System. This is done by defining Certificate Authorities that can be listed with:

$ getcert list-cas
CA 'SelfSign':
 is-default: no
 ca-type: INTERNAL:SELF
 next-serial-number: 01
CA 'IPA':
 is-default: no
 ca-type: EXTERNAL
 helper-location: /usr/libexec/certmonger/ipa-submit
[...]

Each section starting with ‘CA’ defines a type of Certificate Authority that certmonger knows how to handle. The output of the command also shows a helper-location, which is the command that certmonger will call to communicate with the Certificate Authority. For instance:

$ getcert list-cas -c IPA
CA 'IPA':
 is-default: no
 ca-type: EXTERNAL
 helper-location: /usr/libexec/certmonger/ipa-submit

shows that certmonger will run the command “/usr/libexec/certmonger/ipa-submit” when interfacing with IPA certificate authority.

Each helper command follows an interface imposed by certmonger. For instance, environment variables are set by certmonger to provide the operation to execute, the CSR, etc.

Certificate tracking

List of tracked certificates

In order to know the list of certificates currently tracked by certmonger, the command getcert list can be used. It shows a lot of information:

  • the certificate location (for instance HTTP server cert is stored in the NSS database /etc/httpd/alias)
  • the certificate nickname
  • the file storing the pin
  • the Certificate Authority that will be used to renew the certificate
  • the expiration date
  • the status of the certificate (MONITORING when it is tracked and not expired)

For instance, to list all the tracking requests for certificates with a nickname “Server-Cert” stored in the NSS db /etc/httpd/alias:

$ getcert list -n Server-Cert -d /etc/httpd/alias/
Number of certificates and requests being tracked: 8.
Request ID '20161122101308':
 status: MONITORING
 stuck: no
 key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
 certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
 CA: IPA
 issuer: CN=Certificate Authority,O=DOMAIN.COM
 subject: CN=ipaserver.domain.com,O=DOMAIN.COM
 expires: 2018-11-23 10:09:34 UTC
 key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
 eku: id-kp-serverAuth,id-kp-clientAuth
 pre-save command: 
 post-save command: /usr/lib64/ipa/certmonger/restart_httpd
 track: yes
 auto-renew: yes

Certificate renewal

When a certificate is near its expiration date, the certmonger daemon will automatically issue a renewal command using the CA helper, obtain a renewed certificate, and replace the previous cert with the new one.

It is also possible to manually renew a certificate in advance by using the command getcert resubmit -i <id>, where <id> is the Request ID displayed by getcert list for the targeted certificate. This command will renew the certificate using the right helper command.
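
For example, using the Request ID from the listing above:

$ getcert resubmit -i 20161122101308

The status shown by getcert list should then cycle from MONITORING through SUBMITTING and back to MONITORING (see the Troubleshooting section below).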

Start/Stop tracking a certificate

The commands getcert start-tracking and getcert stop-tracking enable or disable the monitoring of a certificate. It is important to understand that they do not manipulate the certificate (stop-tracking does not delete it or remove it from the NSS database) but simply add/remove the certificate to/from the list of monitored certificates.
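
As a sketch (the Request ID and locations match the earlier listing), stopping the tracking and then tracking the same NSS database certificate again could look like:

$ getcert stop-tracking -i 20161122101308
$ getcert start-tracking -d /etc/httpd/alias -n Server-Cert -p /etc/httpd/alias/pwdfile.txt -c IPA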

Pre and post-save commands

When a certificate is tracked by certmonger, it can be useful to define pre-save and post-save commands that certmonger will call during the renewal process. For instance:

$ getcert list -n Server-Cert -d /etc/httpd/alias/
Number of certificates and requests being tracked: 8.
Request ID '20161122101308':
 status: MONITORING
 stuck: no
 key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
 certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
[...]
 pre-save command: 
 post-save command: /usr/lib64/ipa/certmonger/restart_httpd
 track: yes
 auto-renew: yes

shows that the renewal of HTTPd Server Cert:

  • will be handled by IPA Certificate Authority. Remember, we can find the associated helper using getcert list-cas -c IPA
  • will also launch the command restart_httpd

This is useful when a service needs to be restarted in order to pick up the new certificate.

Troubleshooting

Certmonger logs

Certmonger uses the journal log. For instance, when a certificate is near its expiration date, the journal will show:

$ sudo journalctl -xe -t certmonger | more
Nov 05 11:35:47 ipaserver.domain.com certmonger[59223]: Certificate named "auditSigningCert cert-pki-ca" in token "NSS Certificate DB" in database "/etc/pki/pki-tomcat/alias" will not be valid after 20161115150822.

And when the certificate has been automatically renewed, the journal will show:

$ journalctl -t certmonger | more
Nov 24 12:23:15 ipaserver.domain.com certmonger[36674]: Certificate named "ipaCert" in token "NSS Certificate DB" in database "/etc/httpd/alias" issued by CA and saved.

Output of getcert list

It is possible to check the status for each certificate using getcert list:

  • when the certificate is still valid, the status should be MONITORING.
  • when the certificate is near its expiration date, certmonger will request its renewal and the status will change from MONITORING to SUBMITTING and finally back to MONITORING (you may also see intermediate status PRE_SAVE_CERT and POST_SAVE_CERT).

When the renewal fails, getcert list will also show an error message. It will help determine which phase failed, and from there you will need to check the logs specific to the CA helper or to the pre-save or post-save commands.

In the next post, I will detail the errors that can arise with the helpers used with FreeIPA.


by floblanc at December 06, 2016 01:17 PM

November 28, 2016

Red Hat Blog

PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know

This is my sixth post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS).  This specific post is related to requirement seven (i.e. the requirement to restrict access to cardholder data by business need to know).  The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section 7 of the PCI DSS standard talks about access control and limiting the privileges of administrative accounts. IdM can play a big role in addressing these requirements. IdM provides several key features that are related to access control and privileged account management. The first one is host-based access control (HBAC). With HBAC, one can centrally define which groups of users can access which groups of systems using which login services. Another feature is the ability to centrally define sudo rules that control which users can run which commands on which systems as other users (usually as root). Yet another capability worth mentioning is the ability to define how user accounts are mapped to SELinux users. Using this feature one can, for example, prevent developer accounts from touching executables on production machines while still allowing them read access to parts of the application data and logs for better troubleshooting of potential bugs or misconfigurations.

Questions about how Identity Management relates to requirement seven? Reach out using the comments section (below).

by Dmitri Pal at November 28, 2016 03:30 PM
