FreeIPA Identity Management planet - technical blogs

June 26, 2017

Fraser Tweedale

Wildcard SAN certificates in FreeIPA

In an earlier post I discussed how to make a certificate profile for wildcard certificates in FreeIPA, where the wildcard name appeared in the Subject Common Name (CN) (but not the Subject Alternative Name (SAN) extension). Apart from the technical details that post also explained that wildcard certificates are deprecated, why they are deprecated, and therefore why I was not particularly interested in pursuing a way to get wildcard DNS names into the SAN extension.

But, as was portended long ago (more than 15 years ago, when RFC 2818 was published), DNS name assertions via the CN field are deprecated, and some client software has finally removed support for CN name processing. The Chrome browser is first off the rank, but it won’t be the last!

Unfortunately, programs that have typically used wildcard certificates (hosting services/platforms, PaaS, and sites with many subdomains) are mostly still using wildcard certificates, and FreeIPA still needs to support these programs. As much as I would like to say "just use Let’s Encrypt / ACME!", it is not realistic for all of these programs to update in so short a time. Some may never be updated. So for now, wildcard DNS names in SAN is more than a "nice to have" – it is a requirement for a handful of valid use cases.

Configuration

Here is how to do it in FreeIPA. Most of the steps are the same as in the earlier post so I will not repeat them here. The only substantive difference is in the Dogtag profile configuration.

In the profile configuration, set the following directives (note that the key serverCertSet and the index 12 are indicative only; the index does not matter as long as it is different from the other profile policy components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=subjectAltNameExtDefaultImpl
policyset.serverCertSet.12.default.name=Subject Alternative Name Extension Default
policyset.serverCertSet.12.default.params.subjAltNameNumGNs=2
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_0=true
policyset.serverCertSet.12.default.params.subjAltExtType_0=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_0=*.$request.req_subject_name.cn$
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_1=true
policyset.serverCertSet.12.default.params.subjAltExtType_1=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_1=$request.req_subject_name.cn$

Also be sure to add the index to the directive containing the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12

This configuration will cause two SAN DNSName values to be added to the certificate – one using the CN from the CSR, and the other using the CN from the CSR preceded by a wildcard label.
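Outside of Dogtag, the intended result can be previewed with a quick OpenSSL sketch. This is only an illustration: app.example.com is a placeholder name, and the -addext/-ext options require OpenSSL 1.1.1 or later.

```shell
# Generate a self-signed certificate with the same SAN shape the profile
# produces: the CN from the request plus a wildcard of the CN.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/wc.key -out /tmp/wc.pem -days 1 \
  -subj "/CN=app.example.com" \
  -addext "subjectAltName=DNS:*.app.example.com,DNS:app.example.com"

# Inspect the resulting SAN extension.
openssl x509 -in /tmp/wc.pem -noout -ext subjectAltName
```

The second command should list both DNSName values, which is exactly what the profile configuration above makes Dogtag emit.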

Finally, be aware that because the subjectAltNameExtDefaultImpl component adds the SAN extension to a certificate, it conflicts with the userExtensionDefaultImpl component when that component is configured to copy the SAN extension from a CSR to the new certificate. That profile component will have a configuration like the following:

policyset.serverCertSet.11.constraint.class_id=noConstraintImpl
policyset.serverCertSet.11.constraint.name=No Constraint
policyset.serverCertSet.11.default.class_id=userExtensionDefaultImpl
policyset.serverCertSet.11.default.name=User Supplied Extension Default
policyset.serverCertSet.11.default.params.userExtOID=2.5.29.17

Again the numerical index is indicative only, but the OID is not; 2.5.29.17 is the OID for the SAN extension. If your starting profile configuration contains the same directives, remove them from the configuration, and remove the index from the policy list too:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,12

Discussion

The profile containing the configuration outlined above will issue certificates with a wildcard DNS name in the SAN extension, alongside the DNS name from the CN. Mission accomplished; but note the following caveats.

This configuration cannot contain the userExtensionDefaultImpl component, which copies the SAN extension from the CSR to the final certificate when present, because any CSR that contains a SAN extension would cause Dogtag to attempt to add a second SAN extension to the certificate (this is an error). It would be better if the conflicting profile components somehow "merged" the SAN values, but that is not their current behaviour.

Because we are not copying the SAN extension from the CSR, any SAN extension in the CSR gets ignored by Dogtag – but not by FreeIPA; the FreeIPA CSR validation machinery always fully validates the subject alternative names it sees in a CSR, regardless of the Dogtag profile configuration.

If you work on software or services that currently use wildcard certificates please start planning to move away from this. CN validation was deprecated for a long time and is finally being phased out; wildcard certificates are also deprecated (RFC 6125) and they too may eventually be phased out. Look at services and technologies like Let’s Encrypt (a free, automated, publicly trusted CA) and ACME (the protocol that powers it) for acquiring all the certificates you need without administrator or operator intervention.

by ftweedal at June 26, 2017 12:48 PM

June 21, 2017

Rob Crittenden

IPA configuration files and context

There are times when you may want more information out of the IPA server logs. I’ve seen people suggest adding debug = True to /etc/ipa/default.conf. This is fine (and it works), but it enables debugging in both the client and the server, which can be annoying for command-line users.

What I do instead is create /etc/ipa/server.conf containing:

[global]
debug = True

The context that is set during initialization determines which configuration files are loaded, so only the server loads this file and the client remains quiet by default.

When the context is set during api.initialize it sets api.env.context. The original idea is this could drive different code paths depending on the context but in reality it hasn’t been used all that often. Being able to load context-specific configuration files is pretty neat though.
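The layering idea can be sketched in a few lines of shell. This is a hypothetical re-implementation of the mechanism for illustration, not FreeIPA's actual parser: default.conf is always read, then the per-context file overrides it when present.

```shell
# Build a throwaway config directory mimicking /etc/ipa.
confdir=$(mktemp -d)
printf '[global]\ndebug = False\n' > "$confdir/default.conf"
printf '[global]\ndebug = True\n'  > "$confdir/server.conf"

# Return the effective "debug" value for a given context:
# later files in the list override earlier ones.
debug_for() {
  val=
  for f in "$confdir/default.conf" "$confdir/$1.conf"; do
    if [ -f "$f" ]; then
      v=$(awk -F' *= *' '$1 == "debug" {print $2}' "$f")
      [ -n "$v" ] && val=$v
    fi
  done
  echo "$val"
}

debug_for server   # -> True  (server.conf overrides default.conf)
debug_for cli      # -> False (no cli.conf, so default.conf wins)
```

The server context sees debug enabled while any other context falls back to the default, which is exactly why the client stays quiet.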

by rcritten at June 21, 2017 07:06 PM

June 12, 2017

Red Hat Blog

Migrating from third party Active Directory integration solutions

As predicted in one of my earlier posts, more and more customers are starting to seriously evaluate and move off of third party Active Directory integration solutions. They want to use, or at least consider leveraging, the identity management technologies available in Red Hat Enterprise Linux.

In calls and face-to-face meetings, as well as during customer presentations at Red Hat Customer Convergence events, Red Hat Summit, Defence in Depth, and other conferences, I get a lot of questions about such migrations. As it is becoming a common theme, I decided to consolidate some of the thoughts, ideas, and best practices on the matter in a single blog post.

Why do organizations consider migration?

There are several crucial factors that lead people to the path of migration. They fall into the following categories:

Technical

One of the main technical reasons customers start to consider a different solution is that the solution they use stops meeting their technical needs. This usually happens when customers outgrow the solution they use. They inadvertently start to push the technical limits of their deployment. The stress reveals architectural limitations and puts extra pressure on other aspects of the deployment, such as performance, scalability, and manageability.

In general this is a natural progression: more modern solutions take into account the limitations and pitfalls of earlier attempts and make things temporarily better, until they are in turn replaced by the next generation of technologies.

Organizational

Every customer’s identity management landscape is unique. However, we see some patterns again and again.

In some cases the whole identity management space is significantly controlled and influenced by the Active Directory side of the house. In other cases there is a clear organizational divide between Active Directory folks and the Linux team. One of the main attractions of the identity management technologies in Red Hat Enterprise Linux is the ability to provide a clear separation of responsibilities between the two parts of the organization.

This works really well in the cases where there are two teams, but it does not fully resonate with Active Directory-centric deployments where one team, mostly from the Windows side, drives the whole deployment. I talked about this in some of my earlier blogs. Here I would just mention that migration from a third party solution is a good time to consider how you structure the organization to enable growth of your business for the next decade.

Economical

Identity Management technologies are included with Red Hat Enterprise Linux and are covered by the platform subscription. There is no extra cost.

Third party solutions are priced per client. If you have ten thousand clients and each costs at least $50 per year, that is a half-million-dollar cost right there. This money can be better spent elsewhere, for example on building better support for line-of-business requirements and moving to a more agile (develop-deploy) IT model.

Business

In many cases, business drives the need for a change. The current infrastructure just does not scale enough to meet the needs of the modern enterprise. If every payload you create in the cloud requires a separate entitlement from a third party, it is quite cumbersome.

How long would you be able to deal with such an arrangement before your IT processes reach their limits and start to fall behind the needs of ever-evolving business models? Keeping your environment lean and removing obstacles allows it to keep up with the challenging requirements of today and paves the way into tomorrow.

So the move is imminent but slow. Identity management is part of the core fabric that can’t be disrupted without a major impact on the whole organization. The move must be made with a lot of precautions, factoring in all sorts of different considerations; it needs to be well planned. It might be that you are not yet ready, but it might very well be that we are not yet ready for you with the solutions we offer. Well, let us work together to make sure what we offer better suits your needs. Comments and suggestions are always welcome!

What options do I have?

Red Hat Enterprise Linux offers two main types of Active Directory integration: direct, where Linux systems are connected straight to Active Directory, and indirect, where we recommend deploying an Identity Management server as a gateway between your Active Directory and your Linux/UNIX environment. These solutions are well covered in other blogs in this series. Here it is important to mention that the indirect solution is the one we recommend for migration from third party solutions, because it much better addresses the technical, organizational, economic, and business drivers described above. But, as I mentioned, it does not fit every case, and direct integration is still a good option for some situations.

How would I get there?

Let’s take a look at strategies to migrate from third party to Red Hat Enterprise Linux’s Identity Management (IdM).

Getting from a third party solution to a direct integration solution

The main component of Linux that provides direct integration is the System Security Services Daemon (SSSD). The strategy would be to start deploying SSSD for your new payloads and, over time, cycle out the old payloads that leverage the third party solution.

SSSD has a lot of capabilities but there are several challenges that you might face in this migration. If you only manage authentication, access control, and identity information — and your POSIX data is in Active Directory — SSSD will be a sufficient option.

However, if you also manage sudo or SSH, or need sophisticated mapping of POSIX identities for different subsets of clients, the direct integration solution is not sufficient. Also, neither Red Hat nor Microsoft provides a means to manage POSIX data in Active Directory using the Microsoft Management Console (MMC).

There is a web interface in Active Directory where, after manual configuration, POSIX attributes can be exposed, but it is unclear whether this would meet your management workflows and requirements. The bottom line is that direct integration is not a one-to-one replacement for what you have with the third party solution. This is why indirect integration using the Identity Management server in Red Hat Enterprise Linux is better and is the recommended option.

Getting from a third party solution to an indirect integration solution

In this case we are talking about deploying IdM in Red Hat Enterprise Linux as a replacement for your third party solution. It is an architectural change, and several important differences come with it.

  • The Identity Management server is a domain controller for Linux/UNIX environments. It is a combination of Kerberos, LDAP, PKI and DNS components, some of which are optional (PKI, DNS). The solution assumes that you transition your Linux infrastructure into this domain and connect the Linux domain to Active Directory via a cross-forest trust. With the trust in place, users from Active Directory are able to access resources and systems managed in the Linux domain, but all the policies for such users are managed in IdM. POSIX data can come from Active Directory, but it can also be overridden in IdM via a feature called ID Views.
  • With the introduction of domains, challenges related to DNS names come into play. This has been discussed in detail in one of my earlier blogs.
  • Another important difference is that with the introduction of domains and trusts you need to start using fully qualified names for users, meaning that instead of user “foo” you will have to type “foo@myaddomain.com”. In some cases this is quite a challenge. It is a known hassle, and there are some ideas about how it can be addressed, so hopefully you will see improvements in this area down the road.

With the variety of vendors you might migrate from, the variety of features you may want to implement, and the different constraints of your current environment, it is so far very hard to identify specific patterns and provide effective tools to help with migration.

If you want to perform migration yourself the following outline would help you with your effort:

  • Understand the current environment, including its architecture, paying attention to the numbers of users and systems. Note operating systems and versions. See how they are distributed across different datacenters. Record the workflows your users follow when they interact with your Linux systems, as well as the resources and applications running on them.
  • Formulate your goals, and make sure you clearly understand the reasons why you want to migrate. I hope that some information in this article will be helpful. Spell out the criteria that a final solution must satisfy. Make sure to factor in your growth: projects take time and requirements change. You want to avoid the situation where, by the time you finish the migration, your architecture does not scale to the needs of the business.
  • Become familiar with the solution you are considering migrating to. If you are interested in a Red Hat solution, please contact your Technical Account Manager or sales representative. If you do not have either, open a support case with Red Hat support requesting some consulting about the Red Hat IdM solution. If you are not a Red Hat customer yet, you might start by dropping a comment here or sending an email to the community mailing list freeipa-users@redhat.com. IdM is based on the FreeIPA community project, so resources available on the FreeIPA web site might be of value to you. Finally, there are a lot of shows and conferences where this technology is presented and promoted. You might want to consider attending one of those and getting your questions answered firsthand.
  • Once you know where you are and where you want to be, and you are familiar with the offering, you can decide whether the solution Red Hat offers would meet your short- and long-term needs. If it does, the next step is to design an architecture using the solution.
  • Defining the architecture includes, but is not limited to, understanding your DNS setup, firewalls, the layout of datacenters and the mapping of servers to those datacenters, as well as defining the replication topology and which components should run on which servers. This requires a lot of knowledge and understanding of the details of the solution. The best way to gain this knowledge is to run a POC deployment in your lab, mimicking the real-world environment as much as possible. Once you are comfortable with it you will be able to draft an implementation plan.
  • The implementation plan is a set of step-by-step actions that move you from where you are to where you want to be. The important part of such a plan is that each action should be concise and atomic. That reduces the disruption to your current environment, which needs to continue functioning while you prepare for the switch.

As you can see, migration is not a walk in the park. It is complex and hard. It requires good coordination of internal teams and deep knowledge of the tools. In some cases it is hard to do on your own. In these situations Red Hat can help. Again, talk to your TAM or sales representative about how Red Hat can help with your effort.

Such migration projects produce tools and scripts that could benefit others. If you are interested in sharing your work, you can contribute your tools and ideas to the common repository.

The best way to start contributing is to get engaged on the freeipa-users@redhat.com list, where people will help you with the next steps. By sharing your solutions you not only help others, but also help the FreeIPA project to better understand your challenges, and help Red Hat to work with the FreeIPA developer community to provide a better set of tools that improve manageability and thus make your life easier.

Let us work together and make this migration path as smooth as possible.


by Dmitri Pal at June 12, 2017 03:49 PM

June 02, 2017

Florence Blanc-Renaud

Troubleshooting: mapping between a SmartCard certificate and an IdM user

Authentication with a SmartCard may fail when the SmartCard certificate is not linked to any IdM user, or is linked to a user different from the one specified on the console.

In order to find which user is associated to a given SmartCard certificate, you can run the following command:

ipaclient$ ipa certmap-match cert.pem
--------------
1 user matched
--------------
 Domain: IPADOMAIN.COM
 User logins: demosc1
----------------------------
Number of entries returned 1
----------------------------

If the result is not what you were expecting, you first need to check how certificates are mapped to users.

By default, a certificate is associated with a user when the user entry contains the full certificate in its usercertificate attribute. This behavior can be modified by defining certificate mapping rules:

ipaclient$ ipa certmaprule-find
-------------------------------------------
1 Certificate Identity Mapping Rule matched
-------------------------------------------
 Rule name: rulesmartcard
 Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn}<S>{subject_dn})
 Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG
 Enabled: FALSE
----------------------------
Number of entries returned 1
----------------------------


Mapping with full certificate content

When the mapping is based on the full certificate content, you can check if the user entry contains the certificate:

root@ipaclient$ ipa user-show demosc1
 User login: demosc1
[...]
Certificate: MIIC...

If it does not, associate the certificate with the user entry using:

ipaclient$ CERT=`cat cert.pem | tail -n +2 | head -n -1 | tr -d '\r\n'`
ipaclient$ ipa user-add-cert demosc1 --certificate $CERT
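The extraction pipeline above (strip the PEM header and footer, then drop the newlines) can be tried in isolation. Here is a sketch with a dummy PEM-shaped file; the base64 payload is fake:

```shell
# Build a dummy PEM-shaped file.
cat > /tmp/dummy.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIC
AAAA
-----END CERTIFICATE-----
EOF

# Strip the first and last lines, then remove newlines, exactly as above.
CERT=$(tail -n +2 /tmp/dummy.pem | head -n -1 | tr -d '\r\n')
echo "$CERT"   # -> MIICAAAA
```

The result is the bare base64 blob in the form that ipa user-add-cert expects.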

Once this is done, you may need to clear the sssd cache to force SSSD to reload the entries before retrying ipa certmap-match:

ipaclient$ sudo sss_cache -E


Flexible mapping with certificate identity mapping rule

When the mapping is based on certificate mapping rules, the same tool, ipa certmap-match, can be used to check which user entry is associated with a certificate. When the result is not what you expect, you can enable sssd domain logs by adding the following in /etc/sssd/sssd.conf on the IdM master:

[domain/ipadomain.com]
...
debug_level = 9

then restart sssd with

root@ipaserver$ systemctl restart sssd

The logs will be located in /var/log/sssd/sssd_ipadomain.log.


Check that the certificate identity mapping rules are properly loaded

When sssd is restarted, it reads the mapping rules and should print the following in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[ipadomain.com]]] [sss_certmap_init] (0x0040): sss_certmap initialized.
[sssd[be[ipadomain.com]]] [ipa_certmap_parse_results] (0x4000): Trying to add rule [rulesmartcard][-1][<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG][(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})].

If the rule has an invalid syntax, you will see instead:

[sssd[be[ipadomain.com]]] [sss_certmap_init] (0x0040): sss_certmap initialized.
[sssd[be[ipadomain.com]]] [ipa_certmap_parse_results] (0x4000): Trying to add rule [rulesmartcard][-1][<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG][(ipacertmapdata=X509:<I>{issuer_dn!x509}<S>{subject_dn})].
[sssd[be[ipadomain.com]]] [parse_template] (0x0040): Parse template invalid.
[sssd[be[ipadomain.com]]] [parse_ldap_mapping_rule] (0x0040): Failed to add template.
[sssd[be[ipadomain.com]]] [parse_mapping_rule] (0x0040): Failed to parse LDAP mapping rule.
[sssd[be[ipadomain.com]]] [ipa_certmap_parse_results] (0x0020): sss_certmap_add_rule failed for rule [rulesmartcard], skipping. Please check for typos and if rule syntax is supported.
[sssd[be[ipadomain.com]]] [ipa_subdomains_certmap_done] (0x0040): Unable to parse certmap results [22]: Invalid argument
[sssd[be[ipadomain.com]]] [ipa_subdomains_refresh_certmap_done] (0x0020): Failed to read certificate mapping rules [22]: Invalid argument

The log shows that the rule named rulesmartcard is invalid. Check the rule (see the sss-certmap man page for the supported syntax) and fix it if needed:

ipaclient$ ipa certmaprule-show rulesmartcard
 Rule name: rulesmartcard
 Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!x509}<S>{subject_dn})
 Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG
 Enabled: TRUE
ipaclient$ ipa certmaprule-mod rulesmartcard --maprule '(ipacertmapdata=X509:<I>{issuer_dn}<S>{subject_dn})'
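To spot such failures quickly, you can grep the domain log for the parse errors. A sketch using two of the error lines shown earlier, written to a temporary file that stands in for /var/log/sssd/sssd_ipadomain.log:

```shell
# Write sample error lines into a stand-in log file.
log=$(mktemp)
cat > "$log" <<'EOF'
[sssd[be[ipadomain.com]]] [parse_template] (0x0040): Parse template invalid.
[sssd[be[ipadomain.com]]] [ipa_certmap_parse_results] (0x0020): sss_certmap_add_rule failed for rule [rulesmartcard], skipping.
EOF

# Extract which rules failed to parse.
grep -oE 'failed for rule \[[^]]+\]' "$log"   # -> failed for rule [rulesmartcard]
```

On a real master you would point the same grep at /var/log/sssd/sssd_ipadomain.log.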


Check that the matching rule corresponds to the certificate

When SSSD tries to associate the certificate with a user, it starts by finding which rule applies based on the matching rule (for instance, rulesmartcard applies to all certificates issued by CN=Smart Card CA,O=EXAMPLE.ORG because its matching rule is <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG).

If no matching rule applies to the certificate, SSSD will not be able to associate the certificate with a user, and will display the following in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[ipadomain.com]]] [sss_cert_derb64_to_ldap_filter] (0x0040): Certificate does not match matching-rules.

In this case, you need to create or modify an identity mapping rule so that the matching rule applies to your certificate. See the sss-certmap man page for the supported syntax of the --matchrule option of the ipa certmaprule-add command.

Check that the expected certificate identity mapping rule is used

When SSSD tries to find the user associated to the certificate, you will see the following logs in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[ipadomain.com]]] [dp_get_account_info_handler] (0x0200): Got request for [0x14][BE_REQ_BY_CERT][cert=MII..]
...
[sssd[be[ipadomain.com]]] [sdap_search_user_next_base] (0x0400): Searching for users with base [cn=accounts,dc=ipadomain,dc=com]
[sssd[be[ipadomain.com]]] [sdap_print_server] (0x2000): Searching 10.34.58.20:389
[sssd[be[ipadomain.com]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(ipacertmapdata=X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG)(objectclass=posixAccount)(uid=*)(&(uidNumber=*)(!(uidNumber=0))))][cn=accounts,dc=ipadomain,dc=com].
...
[sssd[be[ipadomain.com]]] [sdap_search_user_process] (0x0400): Search for users, returned 0 results.

The logs show the LDAP search performed by SSSD: the IP address of the LDAP server, the search base, and the search filter. Carefully review this information and compare it with what you would expect.
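The shape of that search filter follows directly from the mapping rule template: the {issuer_dn} and {subject_dn} placeholders are replaced with the certificate's DNs. A sketch, using the issuer and subject values from the log lines above:

```shell
# Expand the mapping rule template the way SSSD does when it builds
# the LDAP search filter for a certificate.
issuer='O=EXAMPLE.ORG,CN=Smart Card CA'
subject='CN=test,O=EXAMPLE.ORG'
filter=$(printf '(ipacertmapdata=X509:<I>%s<S>%s)' "$issuer" "$subject")
echo "$filter"
# -> (ipacertmapdata=X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG)
```

If the expanded filter does not match the ipacertmapdata values stored on the user entries, the search returns 0 results, as in the log above.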

Check that the mapping rule defines a valid search filter

If the rule cannot be transformed into a valid search filter, you will see the following in /var/log/sssd/sssd_ipadomain.log:

[sssd[be[ipadomain.com]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(ipacertmapdata=X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG(objectclass=posixAccount)(uid=*)(&(uidNumber=*)(!(uidNumber=0))))][cn=accounts,dc=ipadomain,dc=com].
[...]
[sssd[be[ipadomain.com]]] [sdap_get_generic_ext_step] (0x0080): ldap_search_ext failed: Bad search filter

If this is the case, you need to fix the certmap rule using

ipaclient$ ipa certmaprule-mod rulesmartcard --maprule …



by floblanc at June 02, 2017 02:59 PM

Troubleshooting: authentication to the system console or Gnome Desktop Manager of an IdM host with a SmartCard

IdM allows you to authenticate to an IdM-enrolled host by providing a SmartCard certificate instead of a username/password. The steps below are based on system console authentication, but the process is similar for Gnome desktop login authentication.

When the authentication fails, the issue usually comes from an incorrect SmartCard configuration on the IdM system, or from PKINIT.


Configuration of the IdM host for SmartCard authentication

If the console does not even prompt for the SmartCard PIN, chances are high that the system was not properly configured for SmartCard authentication.

SSSD configuration for smart card

Check that /etc/sssd/sssd.conf contains

[pam]
pam_cert_auth = True

If you need to update the file, do not forget to restart sssd with

root@ipaclient$ systemctl restart sssd


SmartCard CA must be trusted

Check that the SmartCard CA is trusted in the /etc/pki/nssdb database:

root@ipaclient$ certutil -L -d /etc/pki/nssdb/

Certificate Nickname                               Trust Attributes
                                                   SSL,S/MIME,JAR/XPI

SmartCardCA                                        CT,C,C

If the CA is not present, add it using:

root@ipaclient$ certutil -A -d /etc/pki/nssdb -n SmartCardCA -t CT,C,C -i ca.pem


IdM host PKCS#11 module

Check that the IdM host is properly configured for Smart Cards. The opensc package must be installed, the SmartCard daemon must be running, and the PKCS#11 module must be loaded:

root@ipaclient$ dnf install opensc
root@ipaclient$ systemctl start pcscd.service pcscd.socket
root@ipaclient$ modutil -dbdir /etc/pki/nssdb -add "OpenSC" -libfile /usr/lib64/opensc-pkcs11.so


Configuration for PKINIT

If the console prompts for the SmartCard PIN but displays

ipaclient login: demosc1
Pin for PIV Card:
Login incorrect

it is possible that the authentication fails while trying to acquire a Kerberos ticket with PKINIT. In this case, log in to the IdM host with username/password and try to perform kinit manually in order to get more information:

root@ipaclient$ kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' demosc1


If the command outputs the following:

kinit: Pre-authentication failed: Failed to verify own certificate (depth 1): self signed certificate in certificate chain while getting initial credentials

then check the content of /etc/krb5.conf on the IdM host. The realms section must contain a configuration for ipadomain with pkinit_anchors:

[realms]
 IPADOMAIN.COM = {
 pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem
 pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem

}

The file defined in pkinit_anchors must exist, be readable, and contain the certificate of the CA that signed the SmartCard certificate. If that is not the case, run the following commands on any IPA server:

root@ipaserver$ ipa-cacert-manage install -n SmartCardCA -t CT,C,C -p $DM_PWD ca.pem
root@ipaserver$ ipa-certupdate

And run the ipa-certupdate command on all IdM hosts in order to download the certificate.

If the kinit command outputs the following:

kinit: Certificate mismatch while getting initial credentials

check that the SmartCard certificate is associated with the username provided on the console (see mapping between a SmartCard certificate and an IdM user).


by floblanc at June 02, 2017 01:55 PM

Troubleshooting: ssh to an IdM host with a SmartCard

IdM allows you to ssh from a non-enrolled host into an IdM-enrolled host using Smart Card authentication instead of ssh authorized keys. The following ssh command logs in as demosc1 on the host ipaclient.ipadomain.com:

localuser@localhost$ ssh -I /usr/lib64/opensc-pkcs11.so -l demosc1 ipaclient.ipadomain.com
Enter PIN for 'PIV_II (PIV Card Holder pin)':

The -I option specifies a PKCS#11 shared library, and -l the username on the remote host.


Configuration of the local host

First check that the local host is properly configured for Smart Cards. The opensc package must be installed, and the SmartCard daemon must be running.

localuser@localhost$ sudo dnf install opensc
localuser@localhost$ sudo systemctl start pcscd.service pcscd.socket


Configuration of the remote (IdM) host

When IdM is properly configured, ssh prompts for the SmartCard PIN and authenticates the user. If there is an issue with the certificate, ssh falls back to another authentication method (private keys or username/password).

In this case, enable debug logs for ssh authentication on the IdM host. Edit /etc/sssd/sssd.conf and add the following line in the [ssh] section:

[ssh]
debug_level = 9

then restart sssd using

root@ipaclient$ systemctl restart sssd

The logs will be located on the IdM host in /var/log/sssd/sssd_ssh.log.


The Smart Card CA is not trusted by SSSD

If you see the following in /var/log/sssd/sssd_ssh.log:

[sssd[ssh]] [cert_to_ssh_key] (0x0020): CERT_VerifyCertificateNow failed [-8179].
[sssd[ssh]] [get_valid_certs_keys] (0x0040): cert_to_ssh_key failed, ignoring.

then it means that the CA that signed the Smart Card certificate is not trusted. The trusted certificates are stored in /etc/pki/nssdb and can be listed using:

root@ipaclient$ certutil -L -d /etc/pki/nssdb

Certificate Nickname                               Trust Attributes
                                                   SSL,S/MIME,JAR/XPI

SmartCardCA                                        CT,C,C


If the CA cert is missing, add it using

root@ipaclient$ certutil -A -d /etc/pki/nssdb -n SmartCardCA -t CT,C,C -i ca.pem


The user is not an IdM user

If the ssh operation does not log any lines in /var/log/sssd/sssd_ssh.log, it probably means that the supplied user name is not defined in IdM. You can check with:

root@ipaclient$ ipa user-find demosc1
---------------
0 users matched
---------------
----------------------------
Number of entries returned 0
----------------------------

Check that you provided the right user name, or define an IdM user and associate the SmartCard certificate with this user.


The certificate is not mapped to the IdM user

If you see the following in /var/log/sssd/sssd_ssh.log:

Found 1 entries in domain ipadomain.com

but the authentication fails, check that the SmartCard certificate is associated with the provided username (refer to mapping between a SmartCard certificate and an IdM user).


by floblanc at June 02, 2017 12:59 PM

FreeIPA: troubleshooting SmartCard authentication

RHEL 7.4 beta is now available, delivering a new version of IdM that supports Smart Card authentication. This feature allows you to use a certificate stored on a Smart Card to log in to the IdM WebUI, to ssh to an IdM-enrolled host, or to log in to the console or the GNOME Display Manager of an IdM-enrolled host.

This feature is really powerful but may also seem difficult to troubleshoot. I will explain where to look for additional information when authentication fails, and how to fix the most common issues.

The information is split into posts specific to each authentication method.

by floblanc at June 02, 2017 12:12 PM

May 30, 2017

Striker Leggette

Deploy Windows 2016 AD and Fedora 25 IPA with a One-way Trust

Introduction

For the purpose of this post, the two machines I used for these instructions are VMs running atop a Fedora 25 hypervisor which was configured as outlined here:

Configuring Fedora 25 as a Hypervisor using Virtual Machine Manager

Note: When deploying IPA and AD, make sure you do so on separate DNS domains. Otherwise, IPA clients will query the AD server directly when they look up the domain's SRV records (such as the LDAP ones).

INCORRECT: IPA Server: ipa01.example.com | AD Server: ad01.example.com
CORRECT: IPA Server: ipa01.linux.example.com | AD Server: ad01.example.com
CORRECT: IPA Server: ipa01.example.com | AD Server: ad01.win.example.com
CORRECT: IPA Server: ipa01.linux.example.com | AD Server: ad01.win.example.com
CORRECT: IPA Server: ipa01.somedomain.com | AD Server: ad01.otherdomain.com
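The reason the domains must differ is that IPA clients locate their servers through DNS SRV records such as `_ldap._tcp.<domain>`; if AD owns that domain, AD answers the query instead of IPA. A small sketch of the record name involved (the domain name here is illustrative):

```shell
IPA_DOMAIN=linux.example.com

# IPA clients resolve an SRV record like this one to find LDAP servers:
SRV_NAME="_ldap._tcp.${IPA_DOMAIN}"
echo "$SRV_NAME"

# Check which server answers it (requires DNS reachability):
#   dig +short SRV "$SRV_NAME"
```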

Deploying Windows 2016 AD

  1. On my first VM, I booted using a Trial ISO of Windows Server 2016:
  2. Begin the installation with Windows Server 2016 Standard Evaluation (Desktop Experience).
  3. After the machine boots from installation, configure the Hostname:
    1. Server Manager – Local Server.
    2. Click on the machine’s current hostname.
    3. Click Change and change the hostname to your preference.
      • Example: win16ad01
  4. Configure Active Directory and DNS:
    1. Server Manager – Dashboard.
    2. Add Roles and Features.
    3. For Installation Type, choose Role-based or feature-based installation.
    4. For Server Roles, click Active Directory Domain Services and DNS Server.
    5. Within Server Manager, go to AD DS and click on More.
    6. Click on Promote this server to a domain….
    7. In the next window, choose Add a new Forest.
      • Here, set the root domain name of your Forest.
        • Example: win.terranforge.com

Deploying Fedora 25 IPA

  1. For the second VM, I booted using the HTTP link to Fedora 25 Server:
  2. During pre-installation:
    1. Choose Minimal at Software Selection.
    2. In Network & Host Name, set the full hostname of the machine.
      • Example: f25ipa01.linux.terranforge.com
    3. Make sure to give /var a large amount of space, as this is where the IPA Database and Logs will be stored.
  3. After installation and reaching a root prompt:
    1. Install the IPA packages and the RNG package:
      • dnf install ipa-server ipa-server-dns ipa-server-trust-ad rng-tools -y
      • The RNG daemon feeds additional entropy to the kernel pool for use during certificate database creation; otherwise that process can take a very long time to complete.
    2. Open the correct ports that IPA will use:
      • firewall-cmd --add-service=freeipa-ldap --add-service=freeipa-ldaps --add-service=freeipa-trust --permanent
      • firewall-cmd --reload
    3. Start the RNG Daemon:
      • systemctl start rngd
    4. Configure the IPA instance:
      • ipa-server-install --setup-dns
        1. For Server host name, press Enter (Hostname was set during pre-install).
        2. For Domain name and Realm name, press Enter.
        3. Press Enter when prompted for DNS forwarders.
        4. For Enter an IP address for a DNS forwarder, enter the IP Address of your Windows 2016 AD.
        5. Type yes and press Enter to finalize the pre-configuration and begin installation.
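Once the installer finishes, a quick smoke test confirms the server is answering (this assumes the default admin account created during installation, and my illustrative domain from above):

```shell
# Obtain a Kerberos ticket for the IdM admin, then ping the server API:
kinit admin
ipa ping

# DNS served by IPA should resolve its own service records:
dig +short SRV _ldap._tcp.linux.terranforge.com
```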

Configure the One-way Trust

  1. From the Fedora root prompt, prepare IPA for the trust:
    • ipa-adtrust-install
    • All options should be default.
  2. Configure and Verify the trust:
    1. ipa trust-add --type=ad <ad_domain> --admin Administrator --password
      • Example: ipa trust-add --type=ad win.terranforge.com --admin Administrator --password
    2. id administrator@win.terranforge.com
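Beyond `id`, a couple of further checks with the IPA CLI and NSS confirm the trust is functional (domain names match the example above):

```shell
# Show the trust object and the trusted domains IPA discovered:
ipa trust-show win.terranforge.com
ipa trustdomain-find win.terranforge.com

# Resolve an AD user through SSSD on the IPA server:
getent passwd administrator@win.terranforge.com
```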

Get involved and ask questions

You can get in touch with the IPA community by joining the #freeipa and #sssd channels on Freenode and the freeipa-users and sssd-users mailing lists.


by Striker at May 30, 2017 05:38 AM

May 05, 2017

Justin Stephenson

Measuring SSSD performance with SystemTap

This post is intended to provide information about finding SSSD bottlenecks with SystemTap.

One of the most common complaints about SSSD is slowness during login or NSS commands such as ‘getent’ or ‘id’, especially in large LDAP/Active Directory environments. Tracking down the source of the delay through log analysis alone can be difficult, especially with certain configurations (such as indirect AD integration) where a significant number of backend operations occur during login.

In SSSD 1.14, performance enhancements were made to optimize cache write operations, decreasing the overall time spent updating the filesystem cache. These bottlenecks were discovered by developers using SystemTap userspace probes placed in certain areas of the SSSD code.

Below are some steps for getting started with SystemTap and SSSD. In this example we will use the recently added high-level Data Provider request probes.

  • First, install the necessary packages mentioned here: Installation and Setup

    • It is not required to install kernel-debuginfo or sssd-debuginfo to run these userspace systemtap scripts.
  • You can now check if the probe markers are available with:

# stap -L 'process("/usr/libexec/sssd/sssd_be").mark("*")'
process("/usr/libexec/sssd/sssd_be").mark("dp_req_done") $arg1:long $arg2:long $arg3:long
process("/usr/libexec/sssd/sssd_be").mark("dp_req_send") $arg1:long $arg2:long

# stap -L 'process("/usr/lib64/sssd/libsss_ldap_common.so").mark("*")' | head
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_acct_req_recv") $arg1:long $arg2:long $arg3:long $arg4:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_acct_req_send") $arg1:long $arg2:long $arg3:long $arg4:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_deref_search_recv") $arg1:long $arg2:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_deref_search_send") $arg1:long $arg2:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_recv") $arg1:long $arg2:long $arg3:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_send") $arg1:long $arg2:long $arg3:long
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_check_cache_post")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_check_cache_pre")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_deref_process_post")
process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_nested_group_deref_process_pre")

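With the markers confirmed, even a one-liner is useful. This sketch counts Data Provider requests until you press Ctrl-C; it relies only on the dp_req_send marker listed above, not on its argument layout:

```shell
stap -e 'global n;
  probe process("/usr/libexec/sssd/sssd_be").mark("dp_req_send") { n++ }
  probe end { printf("DP requests observed: %d\n", n) }'
```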
  • The existing SystemTap scripts are located in /usr/share/sssd/systemtap. The id_perf.stp script measures performance of the ‘id’ command specifically, while nested_group_perf.stp generates metrics and useful information about the nested group processing code.
# ll /usr/share/sssd/systemtap/
-rw-r--r--. 1 root root 2038 May 4 18:16 dp_request.stp
-rw-r--r--. 1 root root 3854 May 4 13:56 id_perf.stp
-rw-r--r--. 1 root root 8613 May 4 14:44 nested_group_perf.stp

  • Running the dp_request.stp script will track Data Provider requests and report on the request that took the longest to complete.
# vim /usr/share/sssd/systemtap/dp_request.stp
/* Start Run with:
 * stap -v dp_request.stp
 *
 * Then reproduce slow login or id/getent in another terminal.
 * Ctrl-C running stap once login completes.
 */
# stap -v /usr/share/sssd/systemtap/dp_request.stp
Pass 1: parsed user script and 469 library scripts using 244964virt/45004res/7588shr/37596data kb, in 100usr/20sys/128real ms.
Pass 2: analyzed script: 4 probes, 13 functions, 5 embeds, 11 globals using 246992virt/48356res/8816shr/39624data kb, in 30usr/160sys/396real ms.
Pass 3: using cached /root/.systemtap/cache/d5/stap_d5d7fd869e61741e13b43b7a6932a761_11210.c
Pass 4: using cached /root/.systemtap/cache/d5/stap_d5d7fd869e61741e13b43b7a6932a761_11210.ko
Pass 5: starting run.
*** Beginning run! ***
--> DP Request [Account #1] sent for domain [AD.JSTEPHEN]
DP Request [Account #1] finished with return code [0]: [Success]
Elapsed time [0m8.476s]

--> DP Request [Account #2] sent for domain [idm.jstephen]
DP Request [Account #2] finished with return code [0]: [Success]
Elapsed time [0m0.003s]

--> DP Request [Initgroups #3] sent for domain [AD.JSTEPHEN]
DP Request [Initgroups #3] finished with return code [0]: [Success]
Elapsed time [0m0.115s]

--> DP Request [Account #4] sent for domain [idm.jstephen]
DP Request [Account #4] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #5] sent for domain [idm.jstephen]
DP Request [Account #5] finished with return code [0]: [Success]
Elapsed time [0m0.002s]

--> DP Request [Account #6] sent for domain [idm.jstephen]
DP Request [Account #6] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #7] sent for domain [idm.jstephen]
DP Request [Account #7] finished with return code [0]: [Success]
Elapsed time [0m0.000s]

--> DP Request [Account #8] sent for domain [idm.jstephen]
DP Request [Account #8] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

--> DP Request [Account #9] sent for domain [idm.jstephen]
DP Request [Account #9] finished with return code [0]: [Success]
Elapsed time [0m0.001s]

^C
Ending Systemtap Run - Providing Summary
Total Number of DP requests: [9]
Total time in DP requests: [0m8.600s]
Slowest request data:
Request: [Account #1]
Start Time: [Fri May 5 10:47:14 2017 EDT]
End Time: [Fri May 5 10:47:23 2017 EDT]
Duration: [0m8.476s]

Pass 5: run completed in 0usr/40sys/15329real ms.

  • We can see that the Account #1 DP request completed in 8.476 seconds; the Start Time/End Time provided here can be used to narrow down log analysis.
(Fri May  5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_get_account_info_handler] (0x0200): Got request for [0x1][BE_REQ_USER][name=trustuser1@ad.jstephen]
(Fri May 5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_attach_req] (0x0400): DP Request [Account #1]: New request. Flags [0x0001].
(Fri May 5 10:47:14 2017) [sssd[be[idm.jstephen]]] [dp_attach_req] (0x0400): Number of active DP request: 1
...
<snip>
...
(Fri May 5 10:47:23 2017) [sssd[be[idm.jstephen]]] [dp_req_done] (0x0400): DP Request [Account #1]: Request handler finished [0]: Success

  • The existing SystemTap scripts can be modified, or new scripts can be created for a particular use case, as long as the existing probes/tapsets in /usr/share/systemtap/tapset/sssd.stp are used.
# LDAP search probes
probe sdap_search_send = process("/usr/lib64/sssd/libsss_ldap_common.so").mark("sdap_get_generic_ext_send")
{
    base = user_string($arg1);
    scope = $arg2;
    filter = user_string($arg3);

    probestr = sprintf("-> search base [%s] scope [%d] filter [%s]",
                       base, scope, filter);
}
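Because the tapset defines the alias and fills in probestr, a custom script only needs to reference it. A minimal sketch (the script file name is illustrative) that prints every LDAP search SSSD issues:

```shell
# ldap_search_trace.stp uses the sdap_search_send alias from the
# sssd tapset; probestr is assembled there as shown above.
cat > ldap_search_trace.stp <<'EOF'
probe sdap_search_send
{
    printf("%s\n", probestr);
}
EOF

stap -v ldap_search_trace.stp
```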

The stap -L command shown previously lists the functions where probes were added, making those markers available for writing scripts.

The goal will be to add more low-level probes to iterative functions where SSSD spends a lot of time. This will allow developers and administrators to analyze performance issues in detail.

by noreply@blogger.com (Justin Stephenson) at May 05, 2017 04:39 PM

April 28, 2017

Alexander Bokovoy

How to debug FreeIPA privilege separation issues

FreeIPA 4.5 contains a lot of internal changes. The server side of the FreeIPA framework now runs in privilege separation mode. This improves the security of FreeIPA management operations but complicates debugging of the server. During the FreeIPA 4.5 development phase, Simo Sorce and I spent a lot of time debugging regressions and decided to document how we log events and how to debug server-side operations. As a result, this article details what privilege separation means in the context of the FreeIPA management framework and how to debug it.

April 28, 2017 07:00 PM

Powered by Planet