FreeIPA Identity Management planet - technical blogs

March 21, 2017

Fraser Tweedale

Supporting large key sizes in FreeIPA certificates

A couple of issues around key sizes in FreeIPA certificates have come to my attention this week: how to issue certificates for large key sizes, and how to deploy FreeIPA with a 4096-bit key. In this post I’ll discuss the situation with each of these issues. Though related, they are different issues so I’ll address each separately.

Issuing certificates with large key sizes

While researching the second issue I stumbled across issue #6319: ipa cert-request limits key size to 1024,2048,3072,4096 bits. To wit:

ftweedal% ipa cert-request alice-8192.csr --principal alice
ipa: ERROR: Certificate operation cannot be completed:
  Key Parameters 1024,2048,3072,4096 Not Matched

The solution is straightforward. Each certificate profile configures the key types and sizes that will be accepted by that profile. The default profile is configured to allow up to 4096-bit keys, so the certificate request containing an 8192-bit key fails. The profile configuration parameter involved is:

policyset.<name>.<n>.constraint.params.keyParameters=1024,2048,3072,4096

If you append 8192 to that list and update the profile configuration via ipa certprofile-mod (or create a new profile via ipa certprofile-import), then everything will work!
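
For example, a rough workflow for allowing 8192-bit keys in the default profile might look like the sketch below (it assumes the keyParameters line appears exactly as shown above, and that ipa certprofile-mod accepts an updated configuration via --file):

ftweedal% ipa certprofile-show caIPAserviceCert --out bigkeys.cfg
ftweedal% sed -i 's/keyParameters=1024,2048,3072,4096/&,8192/' bigkeys.cfg
ftweedal% ipa certprofile-mod caIPAserviceCert --file bigkeys.cfg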

Deploying FreeIPA with IPA CA signing key > 2048-bits

When you deploy FreeIPA today, the IPA CA has a 2048-bit RSA key. There is currently no way to change this, but Dogtag does support configuring the key size when spawning a CA instance, so it should not be hard to support this in FreeIPA. I created issue #6790 to track this.
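
For the curious, the Dogtag side of this is a handful of pkispawn parameters. A sketch of what an override might look like (parameter names are from my memory of Dogtag's default.cfg and should be verified against pkispawn(8)):

[CA]
pki_ca_signing_key_type=rsa
pki_ca_signing_key_size=4096
pki_ca_signing_key_algorithm=SHA256withRSA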

Looking beyond RSA, there is also issue #3951: ECC Support for the CA, which concerns supporting an elliptic curve signing key in the FreeIPA CA. Once again, Dogtag supports EC signing algorithms, so supporting this in FreeIPA should be a matter of deciding on the ipa-server-install(1) options and mechanically adjusting the pkispawn configuration.

If you have use cases for large signing keys and/or NIST ECC keys or other algorithms, please do not hesitate to leave comments in the issues linked above, or get in touch with the FreeIPA team on the freeipa-users@redhat.com mailing list or #freeipa on Freenode.

by ftweedal at March 21, 2017 12:59 AM

March 15, 2017

Rich Megginson

Elasticsearch Troubleshooting - unassigned_shard and cluster state RED

Problem - unassigned_shards and cluster status RED



With OpenShift origin-aggregated-logging 1.2 and Elasticsearch 1.5.2, the cluster status is RED:
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  https://localhost:9200/_cluster/health | \
  python -mjson.tool
{
    "active_primary_shards": 12345,
    "active_shards": 12345,
    "cluster_name": "logging-es",
    "initializing_shards": 0,
    "number_of_data_nodes": 3,
    "number_of_nodes": 3,
    "number_of_pending_tasks": 0,
    "relocating_shards": 0,
    "status": "red",
    "timed_out": false,
    "unassigned_shards": 7
}

The problem is the unassigned_shards. We need to identify those shards and
figure out how to deal with them so the cluster can move to yellow or
green.


Solution - identify and delete problematic indices



Use the /_cluster/health?level=indices endpoint to get the status of each index:
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  https://localhost:9200/_cluster/health?level=indices | \
  python -mjson.tool > indices.json

The report will list each index and its state:
"my-index.2017.03.15":{
   "active_primary_shards": 4,
   "active_shards": 4,
   "initializing_shards": 0,
   "number_of_replicas": 0,
   "number_of_shards": 5,
   "relocating_shards": 0,
   "status": "red",
   "unassigned_shards": 1
 },
 ...

Look for records that have "status": "red" and an "unassigned_shards" value of 1 or higher. IF YOU DON’T NEED THE DATA ANYMORE, AND ARE SURE
THAT THIS DATA CAN BE LOST, then it is easiest to delete these indices using
the REST API:
oc exec logging-es-xxx-N-yyy -n logging -- curl -s \
  --key /etc/elasticsearch/keys/admin-key \
  --cert /etc/elasticsearch/keys/admin-cert \
  --cacert /etc/elasticsearch/keys/admin-ca \
  -XDELETE https://localhost:9200/my-index.2017.03.15
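
To list all of the red indices from the saved report in one pass, you can filter indices.json with the same python interpreter used above (a quick sketch; it assumes the "indices" key produced by level=indices):

python -c 'import json; h = json.load(open("indices.json")); print("\n".join(k for k, v in h["indices"].items() if v["status"] == "red"))'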

If you need to recover this data, or deletion is not working, then use the
procedure documented in the Elasticsearch indices recovery documentation.

March 15, 2017 05:45 PM

March 06, 2017

Red Hat Blog

Identity Management Improvements in Red Hat Enterprise Linux 7.3: Part 2

In Part 1 of this series, we looked at core improvements for Identity Management (IdM) in Red Hat Enterprise Linux (RHEL) 7.3, as well as manageability and other improvements. In the second half, we’re going to look at certificate management, interoperability, and Active Directory integration.

Certificate Management

Enriched certificate management is an ongoing theme for several releases.

In the current release we focused on the following use case: assume you issue certificates for different purposes (devices, systems, services, VPNs, switches and so on) using the IdM CA. If you have a single CA, all the certificates come from the same trust chain, so administrators have to explicitly limit the scope of each certificate to the environment it is used in, to prevent cross-pollination and misuse of certificates issued for one purpose with a different service.

Getting all these access control rules right becomes a really complex task. It would have been much easier if one could just have a dedicated CA for each of the environments. But standing up a separate CA infrastructure is usually an even bigger task. Not anymore! With the SubCA feature one can create a dedicated CA with a couple of commands in seconds.
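
For example, creating a dedicated CA for VPN certificates is roughly the following (a sketch; the CA name and subject DN are made up, and the exact ipa ca-add options should be checked against your version):

# ipa ca-add vpn --subject "CN=VPN CA,O=EXAMPLE.COM" --desc "Dedicated CA for VPN certificates"

Once created, the new CA can be referenced from CA ACLs and selected at request time with the --ca option of ipa cert-request.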

The second enhancement is the ability to authenticate using smart cards via SSSD and IdM, but this time with added support for Active Directory users coming from a trusted forest. In version 7.2, we introduced the ability to authenticate IdM users using certificates on smart cards when they log into the Linux systems configured with SSSD. This time we added the ability for Active Directory users in trusted forests to use their certificates when they are published to Active Directory or into ID override entries in IdM.

Certificate-related enhancements are a multi-release effort; you will see more additions in this area when the next release arrives.

Interoperability

For some time, you have asked about making the IdM management API available. We were not confident it was ready, although we included an API browser in the IdM UI as an experimental feature. Finally, we have made a set of changes that enables us to make the API publicly available. We take the commitment to support the IdM management API very seriously and want to make sure there are no issues that would force us to make incompatible changes.

This is why, in the Red Hat Enterprise Linux 7.3 release, we offer a technical preview of the IdM API. We plan to declare full support in one of the future releases. Your feedback and comments will be extremely valuable. To get more information about the IdM management API please read the knowledge base article we published last November on using the technology preview.
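
As a rough illustration of what a call against the API looks like (the hostname is a placeholder, and the endpoint path and JSON-RPC payload shape are from memory; the knowledge base article has the authoritative details), a Kerberos-authenticated request might resemble:

kinit admin
curl -s --negotiate -u : --cacert /etc/ipa/ca.crt \
  -H 'Content-Type: application/json' -H 'Accept: application/json' \
  -H 'Referer: https://ipa.example.com/ipa' \
  -d '{"method": "user_find", "params": [[""], {}], "id": 0}' \
  -X POST https://ipa.example.com/ipa/session/json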

Many customers still have legacy UNIX systems and we often get questions about how to integrate these systems into the IdM ecosystem. While IdM can provide authentication via standard LDAP & Kerberos protocols and perform identity lookups via LDAP protocol, the access control capabilities implemented in IdM are not available to those UNIX systems.

To close this gap and provide a single central place for access control management across modern Linux and legacy UNIX systems, a new community project has been launched – pam_hbac. This project offers a PAM module that leverages IdM host-based-access-control rules. It is currently built for Solaris, FreeBSD and Linux. This module is not included in Red Hat Enterprise Linux because it needs to be installed on other platforms; it is not supported by Red Hat but rather by a community of open source developers. If you are interested in this project, please collaborate via GitHub. If you are interested in an AIX version, please contact your IBM representative and open an RFE with IBM.

As I have written some time ago, Red Hat has been working on the identity provider solution to allow federation and SSO for web applications using SAML and OpenID Connect protocols. Earlier this year, Red Hat released a fully supported solution called Red Hat SSO powered by the Keycloak community project. IdM in Red Hat Enterprise Linux 7.3 has been validated as a back end for RH SSO, in parallel to Active Directory and generic LDAP. However, we recommend waiting a bit until the next version of Red Hat SSO is released in several weeks. That version will include a tighter integration between IdP server and IdM/SSSD bringing a better user experience in complex setups.

Active Directory Integration

Clients in AD domains

Some customers that consider deploying IdM with Active Directory trusts face a challenge related to the names of the hosts. If Linux systems are deployed inside the same DNS domain as the Active Directory domain controllers, moving to trusts would mean changing hostnames to a different domain if Kerberos SSO is expected to work between all the systems in the environment. In some cases renaming is possible, in some it is really hard. To discuss this issue in detail and suggest some workarounds I put together a separate blog post.

Here I want to mention that while we took a look at what else could be done for this use case, we could not find anything in our power to improve the current situation. Host names can remain unchanged, but that would make it impossible to SSH into those hosts using Kerberos-based SSO. If this is acceptable, then no name changes are needed. At that point it becomes a deployment choice and depends on the constraints and priorities of the specific customer environment.

External trust

The original implementation of IdM to AD trusts implies a full trust between IdM and the whole Active Directory forest. In some cases this is not desirable. Sometimes the users that should be exposed to IdM resources are isolated in a separate domain and it makes sense to have a direct trust with that specific domain rather than with the whole forest. IdM in Red Hat Enterprise Linux 7.3 now has the capability to establish trust with a selected Active Directory domain, rather than with the whole forest.
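
Establishing such a trust uses the same ipa trust-add command as before; to the best of my recollection the relevant switch is --external (treat this as an assumption and check the ipa trust-add help on your system), so the command would look something like:

# ipa trust-add --type=ad users.ad.example.com --admin Administrator --password --external=true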

UPN Support

Users in Active Directory can have an arbitrary name assigned to them called a User Principal Name (UPN). By default the UPN is constructed automatically from the domain name and user login name, but in some cases it can be explicitly reconfigured. In that case, no assumptions can be made about the UPN; it is just a string that can contain pretty much any value. SSSD was capable of working with arbitrary UPNs in the direct integration scenario but lacked the same flexibility in trust cases. This limitation has been addressed and SSSD can now handle arbitrary UPNs when connected to IdM in a trust setup with AD.

Keytab Renewal

When a system is joined directly to Active Directory as a domain member, it has to adhere to key rotation policies. SSSD is now capable of automatically renewing its Kerberos keys, following the policies defined in Active Directory.
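
The renewal follows the machine account password age. In SSSD this is governed by the ad_maximum_machine_account_password_age option of the AD provider (my recollection of the sssd-ad man page; the domain name and value below are just an illustration):

[domain/ad.example.com]
id_provider = ad
# renew the keytab once the machine account password is older than 30 days; 0 disables renewal
ad_maximum_machine_account_password_age = 30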

Password Change

In a trust setup, legacy UNIX and Linux systems are connected to a special LDAP compatibility view that exposes merged information from Active Directory and IdM. In the current release, Active Directory and IdM users authenticating via legacy systems connected to the compatibility tree can change their password when it expires. Password change via the compatibility tree was not possible in the past.

As you can see, in this release, as in every release before, we have delivered a lot of new identity management capabilities. We would be glad to hear your input on the new and old features as well as your improvement requests. Comments are always welcome! Try it, use it, provide feedback. We are here to listen, build and make your day-to-day life easier.

by Dmitri Pal at March 06, 2017 03:19 PM

March 04, 2017

Fabiano Fidencio

SSSD: {DBus,Socket}-activated responders (2nd try!)

Second time's the charm! :-)


Since the first post about this topic, some improvements have been made in order to fix a bug found and reported by a Debian user (thanks, Stric!).

The fix is part of the SSSD 1.15.1 release and, along with the release, some other robustness improvements have been made! Let's go through the changes ...


Avoid starting the responders before SSSD is up!


I found out that the NSS responder could be started up before SSSD itself, which is quite problematic during the boot process, as libc does initgroups on pretty much any account and, to be precise, checks all configured NSS modules while doing so.

When sss_nss is called, the NSS responder is triggered and tries to talk to the data providers (which are not up yet, as SSSD itself is not up yet), causing the boot process to hang until libc gives up (and causing a timeout on services like systemd-logind and all the services depending on it).

The fix for this issue looks like:
@@ -1,6 +1,7 @@
 [Unit]
 Description=SSSD @responder@ Service responder socket
 Documentation=man:sssd.conf(5)
+After=sssd.service
 BindsTo=sssd.service

 [Socket]

And, as I've been told by systemd developers, "BindsTo=" must always come together with "After=" (although this is not documented yet), so this fix has been applied to all responders' unit files.


Avoid starting the responders' sockets before SSSD is up!


We really want (at least for now) to have the responders' sockets completely tied to the SSSD service. We want the responders to be socket-activated only after SSSD is up, and the section right above this one explains why we want this kind of control.

In order to achieve this, some changes were needed in the socket units, as systemd automatically adds "Before=sockets.target" to any socket unit by default (and sockets.target is started in a really early phase of the boot process).

So I went again to talk to the systemd developers about the best approach to avoid starting the responders' sockets before SSSD is up, and the patch that came out of the discussion looks like:

@@ -3,6 +3,8 @@
 Documentation=man:sssd.conf(5)
 After=sssd.service
 BindsTo=sssd.service
+DefaultDependencies=no
+Conflicts=shutdown.target

 [Socket]
 ListenStream=@pipepath@/@responder@

With this change the sockets are no longer started before sockets.target, but only after the SSSD service is started. The downside of this approach is that we have to deal with conflicts on our own, which is why "Conflicts=shutdown.target" has been added.


Be more robust against misconfigurations!


Now that we have two completely different ways to manage the services, we really have to be robust in order to prevent admins from mixing them up.

So far we have been flexible enough to allow admins to have some services started by the monitor while others are left to systemd. And that's okay! The problem starts when the monitor has been told to start a responder (by having it listed in the services line of sssd.conf) and the very same responder is also supposed to be socket-activated (the admin ran systemctl enable sssd-@responder@.socket).

In the situation described above we could end up with two services running for the very same responder. The best way we found to fix this is adding a simple program that checks whether the socket-activated responder is also mentioned in the sssd.conf services line. If it is mentioned there, the socket is simply not started and the whole responsibility is left to the monitor. Otherwise, take advantage of the systemd machinery!

The change on the sockets' unit looks like:
@@ -7,6 +7,7 @@
 Conflicts=shutdown.target

 [Socket]
+ExecStartPre=@libexecdir@/sssd/sssd_check_socket_activated_responders -r @responder@
 ListenStream=@pipepath@/@responder@
 SocketUser=@SSSD_USER@
 SocketGroup=@SSSD_USER@


Also, I've decided to be a little bit stricter on our side and refuse manual start-up of the responders' services; the change for this looks like:
@@ -3,6 +3,7 @@
 Documentation=man:sssd.conf(5)
 After=sssd.service
 BindsTo=sssd.service
+RefuseManualStart=true

 [Install]
 Also=sssd-@responder@.socket


And how can I start using the socket-activated services?


As we still use the monitor to manage services by default, a small configuration change is needed.

See the example below explaining how to enable the PAM and AutoFS services to be socket-activated.

Considering your /etc/sssd/sssd.conf has something like:

[sssd]
services = nss, pam, autofs
...

Enable PAM and AutoFS responders' sockets:
# systemctl enable sssd-pam.socket
# systemctl enable sssd-autofs.socket

Remove both PAM and AutoFS responders from the services' line, like:

[sssd]
services = nss
...

Restart SSSD service
    # systemctl restart sssd.service

    And you're ready to go!


    Is there any known issue that I should be aware of?


    Yes, there is! You should avoid having the PAC responder, needed by IPA domains, socket-activated for now. The reason is that, due to an ugly hack in the SSSD code, this responder is added to the services list any time an IPA domain is detected.

    Because of this, the service is always started by the monitor and there is nothing that can be done in our socket units to detect this situation and avoid starting up the PAC socket.

    A possible way to fix this issue is patching ipa-client-install to either explicitly add the PAC responder to the services list (in case the admin wants to keep using the monitor) or to enable the PAC responder's socket (in case the admin wants to take advantage of socket-activation).

    Once that is done on the IPA side, we will be able to drop the code that enables the PAC responder automatically from SSSD. However, doing it right now would break backwards compatibility!


    Where can I find more info about SSSD?


    More information about SSSD can be found on the project page: https://pagure.io/SSSD/sssd/

    If you want to report a bug, please follow this web page and file an issue in the SSSD Pagure instance.

    Please keep in mind that we're currently in the middle of a migration from FedoraHosted to Pagure, and it will take a while to have everything in place again.

    Even so, you can find more info about SSSD's internals here.

    In case you want to contribute to the project, please, read this webpage and feel free to approach us at #sssd on freenode (irc://irc.freenode.net/sssd).

    by noreply@blogger.com (Fabiano Fidêncio) at March 04, 2017 08:40 PM

    February 20, 2017

    Fraser Tweedale

    Wildcard certificates in FreeIPA

    The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

    In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

    In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

    FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

    Creating a wildcard certificate profile in FreeIPA

    This procedure works on FreeIPA 4.2 (RHEL 7.2) and later.

    First, kinit admin and export an existing service certificate profile configuration to a file:

    ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
    ---------------------------------------------------
    Profile configuration stored in file 'wildcard.cfg'
    ---------------------------------------------------
      Profile ID: caIPAserviceCert
      Profile description: Standard profile for network services
      Store issued certificates: TRUE

    Modify the profile; the minimal diff is:

    --- wildcard.cfg.bak
    +++ wildcard.cfg
    @@ -19 +19 @@
    -policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
    +policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
    @@ -108 +108 @@
    -profileId=caIPAserviceCert
    +profileId=wildcard

    Now import the modified configuration as a new profile called wildcard:

    ftweedal% ipa certprofile-import wildcard \
        --file wildcard.cfg \
        --desc 'Wildcard certificates' \
        --store 1
    ---------------------------
    Imported profile "wildcard"
    ---------------------------
      Profile ID: wildcard
      Profile description: Wildcard certificates
      Store issued certificates: TRUE

    Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

    ftweedal% ipa caacl-add wildcard-hosts
    -----------------------------
    Added CA ACL "wildcard-hosts"
    -----------------------------
      ACL name: wildcard-hosts
      Enabled: TRUE
    
    ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
      Profiles: wildcard
    -------------------------
    Number of members added 1
    -------------------------
    
    ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
      Profiles: wildcard
      Hosts: cloudapps.example.com
    -------------------------
    Number of members added 1
    -------------------------

    An additional step is required in FreeIPA 4.4 (RHEL 7.3) and later (it does not apply to FreeIPA < 4.4):

    ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
      ACL name: wildcard-hosts
      Enabled: TRUE
      CAs: ipa
    -------------------------
    Number of members added 1
    -------------------------

    Then create a CSR with subject CN=cloudapps.example.com (details omitted), and issue the certificate:

    ftweedal% ipa cert-request my.csr \
        --principal host/cloudapps.example.com \
        --profile wildcard
      Issuing CA: ipa
      Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
      Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
      Issuer: CN=Certificate Authority,O=EXAMPLE.COM
      Not Before: Mon Feb 20 04:21:41 2017 UTC
      Not After: Thu Feb 21 04:21:41 2019 UTC
      Serial number: 11
      Serial number (hex): 0xB

    Alternatively, you can use Certmonger to request the certificate:

    ftweedal% ipa-getcert request \
      -d /etc/httpd/alias -p /etc/httpd/alias/pwdfile.txt \
      -n wildcardCert \
      -T wildcard

    This will request a certificate for the current host. The -T option specifies the profile to use.

    Discussion

    Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

    When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

    policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

    When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven’t moved away from wildcard certs).

    In conclusion: you shouldn’t use wildcard certificates, and FreeIPA has no special support for them, but if you really need to, you can do it with a custom certificate profile.

    by ftweedal at February 20, 2017 04:55 AM

    February 06, 2017

    Red Hat Blog

    Identity Management Improvements in Red Hat Enterprise Linux 7.3: Part 1

    Red Hat Enterprise Linux (RHEL) 7.3 has been out for a bit, but have you looked at what we’ve added in the Identity Management area for this release? I’m excited to say, we’ve added quite a bit!

    In the past I have been talking about individual features in Identity Management (IdM) and System Security Services Daemon (SSSD) but this is really not how we prioritize our efforts nowadays. We look at customer requests, community efforts, and market trends and then define themes for the release. So what were these themes for RHEL 7.3?

    Improvements to the Core

    Performance

    As our identity management solution matures, customers start to deploy it in more sophisticated environments with more than fifty thousand systems or users, complex deeply nested group structures, and advanced access control and sudo rules. In such environments, IdM and SSSD were not always meeting performance and scalability expectations. We wanted to correct that. Several efforts in different areas have been launched to make the solution work better for such complex deployments. In our test environment, on a reference VM with 4GB of RAM and 8 cores, we managed to improve:

    • User and group operations with complex group structure – about 3 times faster
    • Kerberos authentication – about 100 times faster
    • Bulk user provisioning – about 20 times faster (relies on disabling memberOf plugin and rebuilding group membership after the bulk operation)

    On the client side, SSSD was slow in processing large objects in the cache, especially big groups with hundreds of members. The problem manifested itself most vividly when users ran "ls -l" on a directory with files owned by many different users. SSSD already had a workaround by means of the ignore_group_members option, but that was not enough. The structure of the SSSD cache was significantly reworked, roughly doubling performance compared to the past.
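
    For reference, the workaround mentioned above is a per-domain switch in sssd.conf, roughly:

    [domain/example.com]
    # return groups without their member lists; speeds up "ls -l" style lookups
    ignore_group_members = True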

    In addition to that, the underlying directory server includes a new experimental feature called Nunc Stans. The feature solves the problem of thousands of concurrent client connections that have been significantly affecting server performance. The feature is disabled by default. If you are interested in experimenting with this feature please contact your technical account manager to make us aware of your plans.

    There is no limit to perfection so we will continue working on performance and scalability improvements in the follow-up releases.

    DNS Related Enhancements

    One of the limitations that large environments with several datacenters were facing was the inability to control which subset of servers the clients should prefer to connect to. It was possible to limit the set explicitly by providing a list of preferred servers on the client side, but that required additional configuration steps on every client, which is an administrative overhead.

    A better solution would have been to rely on DNS to identify the servers the client can connect to. But with the original DNS implementation there was no way to associate a set of clients with a set of servers so that clients would not go to the other side of the globe to connect to a server in a remote datacenter.

    The DNS locations feature introduced in the current release solves this problem by allowing the administrator to define a set of servers in a datacenter and to affiliate clients with this set of servers. The feature is functionally similar to the Active Directory capability called “sites.” The changes are in the IdM DNS server, so the feature is available in deployments that rely on the DNS server provided by IdM to manage connected Linux clients.
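
    Assigning servers to a location is a short exercise on the command line; as far as I recall the relevant commands are the ipa location-* family and the --location option of ipa server-mod (a sketch with made-up names):

    # ipa location-add datacenter-east
    # ipa server-mod idm1.example.com --location datacenter-east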

    Replica Management

    In this release, the replica management area saw multiple significant improvements.

    In the past, managing replicas in IdM was quite a challenge. Each replica only knew about its peers. There was no central place where all topology information was stored. As a result it was really hard to assess the state of the deployment and see which replicas were connected to which other replicas. This has changed. Now topology information is replicated, and every replica in the deployment knows about the whole environment. To see the topology one can use a topology graph. Replication agreements can be added and removed with a mouse click.

    Figure 1: Using Topology Graph to view replica topology

    In addition to topology information, the inventory of the installed components is also available now. In the past it was hard to see which servers have a CA or DNS server deployed. Now with the server roles report in the UI, the administrator can see which servers have which roles in the environment.

    We also changed the replica deployment procedure because it was hard to automate properly. In the past the expectation was that replicas would be installed by humans who would type the administrative password. When you need to deploy replicas on demand, this does not scale well.

    Efforts to create Puppet scripts or Ansible playbooks for replica deployment also faced the problem of embedding passwords into the body of the module. Keeping in mind that modules and playbooks are usually source controlled and need to be accessed by different people, having highly sensitive passwords in them was an audit nightmare.

    To address this issue, IdM introduced a new replica installation procedure, also called replica promotion. The installer lays out the client bits first. The client registers and gets its identity. The existing master, knowing that a replica is being installed, elevates the privileges of the client to allow it to convert itself into a replica. This process allows deployment of replicas in a much more dynamic and secure fashion. The existing replication management utilities have been updated in a backward-compatible way.

    These replication management improvements are enabled automatically for the new installations. For the existing installations to take advantage of these features one needs to update all participating servers to Red Hat Enterprise Linux 7.3 and then change the domain level setting to 1.
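
    Checking and raising the domain level is a quick operation (the command names below are from my recollection of the domainlevel plugin; verify them on your installation):

    # ipa domainlevel-get
    # ipa domainlevel-set 1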

    Also many customers that are interested in deploying IdM have dozens of remote sites. To accommodate this the limit of supported servers in one deployment was increased from 20 to 60.

    Access Control

    Continuing the trend that we started by implementing, together with MIT, support for two-factor OTP-based authentication over the Kerberos protocol, IdM and SSSD in Red Hat Enterprise Linux 7.3 bring in a new, revolutionary technology. This technology is called “Authentication Indicators.”

    In the past all tickets created by the Kerberos server were born equal, regardless of what type of authentication was originally used. Now, Authentication Indicators allow tagging the ticket in different ways, depending on whether single-factor or multi-factor authentication was used. This technology enables administrators to control which Kerberized services are available to users depending on the type of authentication. Using Authentication Indicators, one can define a set of hosts and services that require two-factor authentication and let users access other hosts and services with tickets acquired as a result of single-factor authentication.
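
    In practice this boils down to tagging a service with an indicator; with the current CLI (option name per my recollection, and the principal is made up) that is approximately:

    # require tickets obtained through OTP (two-factor) authentication for this service
    # ipa service-mod HTTP/secure.example.com --auth-ind=otp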

    Another improvement that is worth mentioning is the change to how IdM and SSSD communicate SUDO policies. In the past SSSD was able to work only with the traditional SUDO LDAP schema defined by the SUDO project. On the other hand, the schema that IdM uses to store SUDO information is different. It was designed to provide a better user experience and improve manageability. The side effect of this situation was that IdM had to create a special LDAP view to serve SUDO information to the clients including SSSD. This view added performance overhead and complexity to the solution. With the Red Hat Enterprise Linux 7.3 release, SSSD is now capable of working with the internal SUDO schema adopted by IdM. Once the clients are updated to the latest version, the special SUDO view on IdM servers can be disabled, freeing memory and boosting server performance.

    Manageability

    Deploying clients in the cloud requires more flexibility with the names that identify a system for Kerberos authentication. In many cases a system has an internal name assigned by a cloud provider and an external name visible outside the cloud. To be able to use multiple names for the same system or service, Identity Management in Red Hat Enterprise Linux 7.3 added the ability to define alternative names (Kerberos aliases) via the user interface and command line. With this feature, one can deploy a system in a cloud and use Kerberos to authenticate to the system or service from both inside and outside the cloud.
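
    Adding such an alias is a single command per name; for hosts it is, as far as I recall, ipa host-add-principal (the names below are placeholders):

    # ipa host-add-principal internal-name.example.com host/external-name.example.com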

    SSSD is growing its responsibilities, and it is becoming harder to operate and troubleshoot when something goes wrong. To make administrators’ lives easier, SSSD is now accompanied by a couple of new utilities. One utility allows fine-grained management of the SSSD cache so that the state of the cache can be easily inspected; it allows tweaking or removing individual objects and entries in the cache without removing the cache altogether. Another tool, called sssctl, provides information about SSSD status: whether it is online and which servers it is currently communicating with.
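
    For example, a quick health check with the new tool might look like this (subcommand names per my recollection of sssctl; check sssctl --help on your system):

    # sssctl domain-list
    # sssctl domain-status example.com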

    In addition to the utilities, SSSD’s processing of sssd.conf has been improved. With this enhancement, SSSD has a higher chance of automatically detecting typos, missing values, and misconfigurations introduced via sssd.conf. The logic is still basic, but the change lays a good foundation for future improvements in this area.

    With better sssd.conf parsing, SSSD also gained the ability to merge several sssd.conf configuration files that augment each other. This is useful when different snippets of the configuration come with different applications that rely on the SSSD service provided by the system. This way applications can augment or extend the main SSSD configuration without explicitly modifying it.
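
    The merge works by reading drop-in files next to the main configuration. Assuming the usual conf.d location and permissions (an assumption on my part; check the sssd.conf man page), an application could ship something like:

    # /etc/sssd/conf.d/50-myapp.conf (root-owned, mode 0600, just like sssd.conf)
    [ifp]
    allowed_uids = apache, root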

    In Part 2, we’ll look at certificate management, interoperability, and Active Directory integration improvements you’ll find in RHEL 7.3.

    by Dmitri Pal at February 06, 2017 03:00 PM

    January 31, 2017

    Fabiano Fidencio

    SSSD: {DBus,Socket}-activated responders!

    Since its 1.15.0 release, SSSD takes advantage of systemd machinery and introduces a new way to deal with the responders.

    Previously, in order to have a responder initialized, the admin would have to add the specific responder to the "services" line in the sssd.conf file, which makes sense for responders that are often used but not for those rarely used (such as the infopipe and PAC responders).

    This old way is still preserved (at least for now) and this new release is fully backwards-compatible with the old config file.

    For this new release, however, adding responders to the "services" line isn't needed anymore, as the admin can simply enable any of the responders' sockets and those will be {dbus,socket}-activated on demand and stay up while they are still being used. If a responder becomes idle, it will automatically shut itself down after a configurable amount of time.

    The sockets we've created are: sssd-autofs.socket, sssd-nss.socket, sssd-pac.socket, sssd-pam.socket (and sssd-pam-priv.socket, but you don't have to worry about this one), sssd-ssh.socket and sssd-sudo.socket. For example, if admins want to enable the sockets for both the NSS and PAM responders, they should run `systemctl enable sssd-pam.socket sssd-nss.socket` and voilà!

    In some cases the admins may also want to set the "responder_idle_timeout" option, added for each of the responders, in order to tweak how long the responder keeps running once it becomes idle. Setting this option to 0 (zero) disables responder_idle_timeout. For more details, please check the sssd.conf man page.
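
    For example, to shut the NSS responder down after ten minutes of inactivity, something like this in sssd.conf should do (a sketch; the option goes in the responder's own section):

    [nss]
    responder_idle_timeout = 600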

    For this release we've taken a more conservative path and are leaving it up to the admins to enable the sockets they want in case they would like to try using {dbus,socket}-activated responders.

    It's also important to note that, until the SELinux policies are updated in your distro, you may need to have SELinux in permissive mode in order to test/use the {dbus,socket}-activated responders. A bug for this has already been filed for Fedora and hopefully will be fixed before the new package is included in the distro.

    And the changes in the code were (a high-level explanation) ...

    Before this work the monitor was the piece of code responsible for handling the responders listed in the services' line of sssd.conf file. And by handling I mean:

    • Gets the list of services to be started (and, consequently, the total number of services);
    • For each service:
      • Gets the service configuration;
      • Starts the service;
      • Adds the service to the services' list;
      • Once the service is up, a dbus message is sent to the monitor, which ...
        • Sets up the sbus* connection to communicate with the service;
        • Marks the service as started;

    Now, the monitor does (considering an empty services' line):

    • Once the service is up, a dbus message is sent to the monitor;
      • The number of services is increased;
      • Gets the service configuration;
      • Adds the service to the services' list
      • Sets up the sbus connection to communicate with the service;
      • Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed;
      • Marks the service as started;

    By looking at those two different processes done by the monitor, some of you may have noticed an extra step needed when the service has been {dbus,socket}-activated that was not needed at all before. Yep, "Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed" is a completely new thing as, previously, the services were only shut down when SSSD was shut down, and now the services are shut down when they become idle.

    So, what's basically done now is:
     - Once there's no communication with the service, its (sbus) connection with the monitor is closed;
     - Closing the (sbus) connection triggers the following actions:
        - The number of services is decreased;
        - The connection destructor is unset (otherwise it would be called again when the service is freed);
        - The service is shut down.

    *sbus: SSSD uses dbus protocol over a private socket to handle its internal communication, so the services do not talk over system bus.

    And what do the unit files look like?

    SSSD has 7 services: autofs, ifp, nss, pac, pam, ssh and sudo. Of those 7 services, 4 have pretty much these unit files:

    AutoFS, PAC, SSH and Sudo unit files:


    sssd-$responder.service:
    [Unit]
    Description=SSSD $(responder) Service responder
    Documentation=man:sssd.conf(5)
    After=sssd.service
    BindsTo=sssd.service

    [Install]
    Also=sssd-$responder.socket

    [Service]
    ExecStartPre=-/bin/chown $sssd_user:$sssd_user /var/log/sssd/sssd_autofs.log
    ExecStart=/usr/libexec/sssd/sssd_$responder --debug-to-files --socket-activated
    Restart=on-failure
    User=$sssd_user
    Group=$sssd_user
    PermissionsStartOnly=true

    sssd-$responder.socket:
    [Unit]
    Description=SSSD $(responder) Service responder socket
    Documentation=man:sssd.conf(5)
    BindsTo=sssd.service

    [Socket]
    ListenStream=/var/lib/sss/pipes/$responder
    SocketUser=$sssd_user
    SocketGroup=$sssd_user

    [Install]
    WantedBy=sssd.service


    And what about the different ones? We will get there ... and also explain why they are different.

    The infopipe (ifp) unit file:

    As the infopipe won't be socket-activated, it doesn't have its respective .socket unit.
    Also, unlike the other responders, the infopipe responder can currently only be run as root.
    In the end, its .service unit looks like:

    sssd-ifp.service:
    [Unit]
    Description=SSSD IFP Service responder
    Documentation=man:sssd-ifp(5)
    After=sssd.service
    BindsTo=sssd.service

    [Service]
    Type=dbus
    BusName=org.freedesktop.sssd.infopipe
    ExecStart=/usr/libexec/sssd/sssd_ifp --uid 0 --gid 0 --debug-to-files --dbus-activated
    Restart=on-failure

    The PAM unit files:

    The main difference between the PAM responder and the others is that PAM has two sockets that can end up socket-activating its service. Also, these sockets have special permissions.
    In the end, its unit files look like:

    sssd-pam.service:
    [Unit]
    Description=SSSD PAM Service responder
    Documentation=man:sssd.conf(5)
    After=sssd.service
    BindsTo=sssd.service

    [Install]
    Also=sssd-pam.socket sssd-pam-priv.socket

    [Service]
    ExecStartPre=-/bin/chown $sssd_user:$sssd_user @logpath@/sssd_pam.log
    ExecStart=@libexecdir@/sssd/sssd_pam --debug-to-files --socket-activated
    Restart=on-failure
    User=$sssd_user
    Group=$sssd_user
    PermissionsStartOnly=true

    sssd-pam.socket:
    [Unit]
    Description=SSSD PAM Service responder socket
    Documentation=man:sssd.conf(5)
    BindsTo=sssd.service
    BindsTo=sssd-pam-priv.socket

    [Socket]
    ListenStream=@pipepath@/pam
    SocketUser=root
    SocketGroup=root

    [Install]
    WantedBy=sssd.service

    sssd-pam-priv.socket:
    [Unit]
    Description=SSSD PAM Service responder private socket
    Documentation=man:sssd.conf(5)
    BindsTo=sssd.service
    BindsTo=sssd-pam.socket

    [Socket]
    Service=sssd-pam.service
    ListenStream=@pipepath@/private/pam
    SocketUser=root
    SocketGroup=root
    SocketMode=0600

    [Install]
    WantedBy=sssd.service

    The NSS unit files:

    The NSS responder was the trickiest one to get working properly, mainly because when socket-activated it has to run as root.
    The reason behind this is that systemd calls getpwnam() and getgrnam() when "User="/"Group=" are set to something other than root. By doing this, libc ends up querying for $sssd_user and trying to talk to the NSS responder, which is not up yet, and then the clients would end up hanging for a few minutes (due to our default_client_timeout), which is something we really want to avoid.

    In the end, its unit files look like:

    sssd-nss.service:
    [Unit]
    Description=SSSD NSS Service responder
    Documentation=man:sssd.conf(5)
    After=sssd.service
    BindsTo=sssd.service

    [Install]
    Also=sssd-nss.socket

    [Service]
    ExecStartPre=-/bin/chown root:root @logpath@/sssd_nss.log
    ExecStart=@libexecdir@/sssd/sssd_nss --debug-to-files --socket-activated
    Restart=on-failure

    sssd-nss.socket:
    [Unit]
    Description=SSSD NSS Service responder socket
    Documentation=man:sssd.conf(5)
    BindsTo=sssd.service

    [Socket]
    ListenStream=@pipepath@/nss
    SocketUser=$sssd_user
    SocketGroup=$sssd_user

    All the services' units have "BindsTo=sssd.service" in order to ensure that the service will be stopped when sssd.service is stopped, so if SSSD is shut down or restarted those actions will be propagated to the responders as well.

    Similarly to "BindsTo=ssssd.service" there's "WantedBy=sssd.service" in every socket unit and it's there to ensure that, once the socket is enabled it will be automatically started by SSSD when SSSD is started.

    And that's pretty much all changes that I've covered with this work.

    I really have to say a big thank you to ...

    • Lukas Nykryn and Michal Sekletar, who patiently reviewed the unit files we're using and gave me a lot of good tips while doing this work;
    • Sumit Bose who helped me to find out the issue with the NSS responder when trying to run it as a non-privileged user;
    • Jakub Hrozek, Lukas Slebodnik and Pavel Brezina for reviewing and helping me to find bugs, crashes, regressions that fortunately were avoided.

    And what's next?

    There's already a patch that makes the {dbus,socket}-activated sockets automatically enabled when SSSD starts, which changes our approach from having to explicitly enable the sockets in order to take advantage of this work to explicitly disabling (actually, masking) the sockets of the responders that shouldn't be {dbus,socket}-activated.

    Also, a bigger piece of work for the future is to have the providers be socket-activated as well, but that is material for a different blog post. ;-)

    Nice, nice. But I'm having issues with what you've described!

    In case this happens to you, please keep in mind that the preferred way to diagnose any issues would be:

    • Inspecting sssd.conf in order to check which responders are explicitly listed in the services line;
    • `systemctl status sssd.service`;
    • `systemctl status sssd-$responder.service` (for the {dbus,socket}-activated ones);
    • `journalctl -u sssd.service`;
    • `journalctl -u sssd-$responder.service` (for the {dbus,socket}-activated ones);
    • `journalctl -br`;
    • Checking the SSSD debug logs in order to see whether the SSSD sockets were communicated with.

    by noreply@blogger.com (Fabiano Fidêncio) at January 31, 2017 04:27 PM

    January 24, 2017

    Red Hat Blog

    PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data

    This is my last post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement ten (i.e. the requirement to track and monitor all access to network resources and cardholder data). The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

    Requirement ten focuses on audit and monitoring. Many components of an IdM-based solution, including client components like SSSD and certmonger, generate a detailed audit trail about authentication and user activity. Linux systems have an audit subsystem and all critical authentication and access related events are sent there. One can then use different technologies (or third party software) to collect and centralize these audit trails. Red Hat is working to provide a log collection, aggregation, and correlation solution across different components and products in the Red Hat portfolio. This is an ongoing effort and I plan to write about it (in the future) when there is more to show. This solution is expected to become a foundation for another offering that allows for capturing, centralizing, and correlating recorded user sessions. A demo of this session recording technology is available here. The working plan is to allow for not only the recording and playback of captured sessions but also correlation with an audit trail from the same system – enabling full introspection into the user activity on the system.
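
    As a small illustration of the audit trail that is already there today, authentication-related events on a single system can be pulled out of the audit subsystem with ausearch (record types and flags per my recollection of the tool):

    # ausearch -m USER_AUTH,USER_LOGIN -ts today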

    Questions about how Identity Management relates to requirement ten? Did you enjoy this series and/or find it to be useful?  I encourage you to reach out using the comments section (below).

    by Dmitri Pal at January 24, 2017 06:15 PM

    Powered by Planet