FreeIPA Identity Management planet - technical blogs

September 22, 2016

Red Hat Blog

PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications

This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help in multiple ways to address requirements in this part of the PCI DSS standard. First of all, IdM includes a set of Apache modules for different methods of authentication. These modules externalize authentication logic from a web application so that the application does not need to re-implement different authentication methods itself. Such an approach significantly reduces the effort that developers need to invest in building different authentication capabilities into their applications – allowing them to focus on the business logic of the application itself and to deliver results faster. Externalized authentication based on Apache modules is just one of the best practices currently being adopted in the industry. There are a number of modules that provide different authentication methods, including:

  • A forms-based password or one-time-password (OTP) authentication module (a module that integrates with a given application’s login page and authenticates through the PAM stack and SSSD in particular).
  • A Kerberos-based single sign-on (GSSAPI) module that allows login to an application without prompting the user for credentials if he or she is already authenticated against a Kerberos server and holds proof of that authentication.
  • Certificate-based authentication modules, built on either the NSS or OpenSSL crypto libraries, that enable certificate-based authentication to an application.
  • A SAML module that connects an application to an identity provider (IdP); IdP-based federation redirects the application login to the IdP and then accepts the authentication assertion issued by the IdP.
  • An OpenID Connect module (similar to the SAML module) that allows an application to accept an OpenID Connect token from an authentication server.

The modules and details on how to integrate them are described on the following wiki page. Of note: all of the aforementioned modules are available in the current shipping version of Red Hat Enterprise Linux except for the OpenID Connect one.
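To give a sense of how little the application itself has to do once authentication is externalized, below is a minimal sketch of an Apache configuration that protects an application’s login location with the Kerberos (GSSAPI) module. The file name, location, and keytab path are assumptions for illustration only:

# /etc/httpd/conf.d/myapp-auth.conf (hypothetical file name)
<Location "/myapp/login">
    # mod_auth_gssapi performs the Kerberos negotiation on behalf of the application
    AuthType GSSAPI
    AuthName "Kerberos Login"
    # Keytab holding the HTTP/ service principal obtained from IdM (path is an assumption)
    GssapiCredStore keytab:/etc/httpd/conf/httpd.keytab
    Require valid-user
</Location>

With something like this in place, the application only needs to read the authenticated user name from the REMOTE_USER variable set by the web server.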

As mentioned (above), externalizing authentication saves a lot of effort and is a good practice. To make developer life even easier we have been working on a container-based developer environment that would provide an application container, Apache web server (with pre-configured modules), an authentication server based on IdM (FreeIPA), and a client that allows for the testing of an application via browser. A prototype of this setup can be found here and the following video demonstrates how it can be used for development.

There is also an existing feature of the IdM server that allows for the management of SSH keys for different environments. Imagine you have an application with an administrative account. There are some operations that are done using this account, including SSH-ing into the system the application is running on. If you are developing this application, testing it, or (perhaps) deploying it – you would (likely) want to have different credentials for the administrative account in each environment. IdM allows for the creation of ID views. Loading different SSH keys into different views enables use of the same administrative account across different environments with different SSH keys. Together with different credentials, IdM also allows for defining access control rules that differ between environments and thus (for example) addresses requirement 6.4.1 (…or, to some extent, requirement 6.5.8).
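As a rough sketch of how this can look on the IdM command line (the view, account, key, and host names below are hypothetical):

# Create an ID view for the test environment and override the admin account's SSH key in it
$ ipa idview-add test-env --desc="Overrides for the test environment"
$ ipa idoverrideuser-add test-env appadmin --sshpubkey="ssh-rsa AAAA... appadmin@test"
# Apply the view to the hosts that belong to the test environment
$ ipa idview-apply test-env --hosts=test1.example.com

The same appadmin account then presents a different key, and can be subject to different access control rules, in each environment.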

Finally, it’s worth mentioning that it’s generally not a good idea to store passwords in configuration files. That said, some applications were indeed built this way in the past. To help developers deal with secrets that an application needs to use, there are plans to provide a secrets API that would allow applications to fetch or store secrets in a more secure way, without putting them in clear text in configuration files. You can read more about this capability here. A Technology Preview of the API is included as a part of SSSD (System Security Services Daemon) in the beta release of Red Hat Enterprise Linux 7.3. Please reach out if you are interested in using this feature – our Technical Account Managers and Solution Architects would love to speak with you.
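The secrets responder exposes a simple REST-style interface over a local UNIX socket, so an application can store and fetch a secret with plain HTTP calls instead of reading a clear-text configuration file. The sketch below is an illustration only; the socket path and payload format are assumptions and may differ in the shipped Technology Preview, so consult the sssd-secrets documentation:

# Store a secret under a name chosen by the application (socket path is an assumption)
$ curl --unix-socket /var/lib/sss/secrets/secrets.socket \
       -H "Content-Type: application/json" \
       -X PUT -d '{"type": "simple", "value": "MyDbPassword"}' \
       http://localhost/secrets/db-password
# Fetch it back at run time
$ curl --unix-socket /var/lib/sss/secrets/secrets.socket http://localhost/secrets/db-password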

Questions about how Identity Management relates to requirement six?  Reach out using the comments section (below).

by Dmitri Pal at September 22, 2016 06:06 PM

September 20, 2016

Adam Young

Mirroring Keystone Delegations in FreeIPA/389DS

This is more musing than a practical design.

Most application servers have a means to query LDAP for the authorization information for a user.  This is separate from, and follows after, authentication, which may use one of multiple mechanisms, possibly not even querying LDAP (although that would be strange).

And there are other mechanisms (SAML2, SSSD+mod_lookup_identity) that can also provide the authorization attributes.

Separating mechanism from meaning, however, we are left with the fact that applications need a way to query attributes to make authorization decisions.  In Keystone, the general pattern is this:

A project is a group of resources.

A user is assigned a role on a project.

A user requests a token for a project. That token references the user’s roles.

The user passes the token to the server when accessing an API. Access control is based on the roles carried in the associated token.

The key point here is that it is the roles associated with the token in question that matter.  From that point on, we have the ability to inject layers of indirection.

Here is where things fall down today. If we take an app like WordPress and try to make it query Red Hat’s LDAP server for the groups to use, there is no mapping between the groups assigned and the permissions that the user should have.  As the WordPress instance might be run by any one of several organizations within Red Hat, there is no direct mapping possible.

If we map this problem domain to IPA, we see where things fall down.

WordPress, here, is a service.  If the host it is running on is owned by a particular organization (say, EMEA-Sales), it should be the EMEA-Sales group that determines who gets what permissions on WordPress.

Aside: WordPress, by the way, makes a great example to use, as it has very clear, well-defined roles, each with a clear scope of authorization for operations.

Subscriber < Contributor < Author < Editor < Administrator

Back to our regular article:

If we define an actor as either a user or a group of users, a role assignment is a tuple: (actor, organization, application, role)


[Diagram: role-assignment-1]

Now, a user should not have to go to IPA, get a token, and hand that to WordPress.  When a user connects to WordPress and attempts to do any non-public action, they are prompted for credentials and are authenticated.  At this point, WordPress can do the LDAP query. And here is the question:

“what should an application query for in LDAP”

If we use groups, then we have a nasty naming scheme: EMEA-sales_wordpress_admin versus LATAM-sales_wordpress_admin.  This encodes both the query (organization, application) and the result (role) in the group name.

Ideally, we would tag the role on the service.  The service already reflects organization and application.

In the RFC-based schemas, there is an organizationalRole objectclass which almost mirrors what we want.  But I think the most important thing is to return an object that looks like a group, most specifically groupOfNames.  Fortunately, I think this is just the ‘cn’.

Can we put a group of names under a service?  It’s not a container.

'ipaService' DESC 'IPA service objectclass' AUXILIARY MAY ( memberOf $ managedBy $ ipaKrbAuthzData ) X-ORIGIN 'IPA v2' )

objectClass: ipaobject
objectClass: top
objectClass: ipaservice
objectClass: pkiuser
objectClass: ipakrbprincipal
objectClass: krbprincipal
objectClass: krbprincipalaux
objectClass: krbTicketPolicyAux

It probably would make more sense to have a separate subtree service-roles,  with each service-name a container, and each role a group-of-names under that container. The application would  filter on (service-name) to get the set of roles.  For a specific user, the service would add an additional filter for memberof.
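A purely hypothetical sketch of what such a subtree and the corresponding query could look like (the DNs, service name, and role names are illustrative and not an existing IPA schema):

# Roles for the wordpress service kept under a dedicated service-roles subtree:
#   dn: cn=editor,cn=wordpress,cn=service-roles,dc=example,dc=com
#   objectClass: groupOfNames
#   cn: editor
#   member: uid=ayoung,cn=users,cn=accounts,dc=example,dc=com

# The application lists the roles a given user holds for this service
$ ldapsearch -LLL -b "cn=wordpress,cn=service-roles,dc=example,dc=com" \
    "(&(objectClass=groupOfNames)(member=uid=ayoung,cn=users,cn=accounts,dc=example,dc=com))" cn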

Now, that is a lot of embedded knowledge in the application, and it does not provide any way to do additional business logic in the IPA server or to hide that complexity from the end user.  Ideally, we would have something like automember to populate these role assignments, or, even better, a light-weight way for a user with a role assignment to re-delegate it to another user or principal.

That is where this really gets valuable:  user self-service for delegation.  We want to make it such that you do not need to be an admin to create a role assignment, but rather (with exceptions) you can delegate to others any role that you yourself have been assigned.  This is a question of scale.

However, more than just scale, we want to be able to track responsibility;  who assigned a user the role that they have, and how did they have the authority to assign it?  When a user no longer has authority, should the people they have delegated to also lose it, or does that delegation get transferred?  Both patterns are required for some uses.

I think this quickly gets beyond what can be represented easily in an LDAP schema.  Probably the right step is to use something like automember to place users into role assignments.  Expanding nested groups, while nice, might be too complicated.

by Adam Young at September 20, 2016 03:37 AM

September 19, 2016

Alexander Bokovoy

Samba and identity tales

Samba is built to bridge the Windows and POSIX worlds. Apart from file system semantics, there are many other differences. The story I’m about to tell concerns users and groups. They have different meanings and representations in the two worlds, so translation is required, much like in real life, where translators often have to take into account cultural differences and sometimes the lack of certain concepts in the language they are translating to.

The protocol communications which Samba implements end up bringing in objects which have a certain meaning in one world but no real one-to-one counterpart on the other side. One of the tasks Samba undertakes is translating these concepts between Windows and POSIX. It does this translation with the help of mapping databases.

Security identifiers

In Windows, access controls are built around the concepts of security identifiers and security descriptors. A security identifier (SID) is associated with the object it represents. Internal processes in Windows refer to the security identifiers of objects rather than their names. A security descriptor is used to list which security identifiers can have access to a certain resource and what kind of access that can be. An important part of the story is that security identifiers have the same structure regardless of the object they represent. When a security identifier is expressed in textual form, in general we cannot say what object it represents – a user, a group, or a machine account – apart from so-called ‘well-known’ SIDs. A nice property of a SID is that it is a global identifier: for two different domains their SIDs are guaranteed to be different, even for ‘well-known’ objects within the domains.

POSIX identifiers

In the POSIX world, access controls are built around a simple model of rights for the resource owner, rights for the resource’s group, and rights for everyone else. The model is further extended with POSIX Access Control Lists (ACLs), which allow associating multiple simple-model descriptors with a single resource, but the resulting access descriptor is still far from its Windows counterpart.

In the kernel of a POSIX-compatible operating system, access checks are done using numbers which represent users and groups. The kernel application interfaces don’t deal with user or group names; they deal with integer-based identifiers. The standard C library is supposed to translate user or group names to their numeric identifiers when talking to the kernel.

When operating on files and directories, Samba needs to translate NTFS-like semantics to POSIX file semantics. This includes translating the security identifiers of SMB clients to POSIX identifiers of the users and their group memberships. There are no SID-like structures in the kernel of a POSIX operating system that Samba could directly map to; instead, it has to maintain such a mapping in user space.

However, a POSIX operating system already has its own databases for users and groups which all POSIX applications use. In their most primitive form these databases are stored as textual files, /etc/passwd and /etc/group, with a well-defined format. On Linux systems there are other ways to store information about POSIX users and groups, with the help of so-called ‘name service switch’ (NSS) modules. How multiple modules are stacked up to deliver information about users, groups, and other resources is defined in the /etc/nsswitch.conf configuration file. The standard C library reads this configuration file at application start and loads the modules responsible for each resource. The standard application interfaces then call the modules as defined in /etc/nsswitch.conf to retrieve the required information.
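As an illustration, a typical /etc/nsswitch.conf on a system that uses SSSD in addition to the local files could contain lines like the following (the exact module list depends on the distribution and setup):

# /etc/nsswitch.conf (excerpt)
passwd:     files sss
group:      files sss
shadow:     files sss
hosts:      files dns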

Identity mapping

The information NSS modules provide includes nothing related to the SMB protocol. Applications can query by user or group name, but that’s all: they cannot query by SID value. Also, the interface functions differentiate between user and group information. When Samba gets a SID, it does not know whether it corresponds to a user or to a group, so it cannot choose which interface function to call.

Let’s step aside at this point. Samba needs to deal with the system-level databases for users and groups. Samba needs to deal with SIDs that could be mapped to users, groups, and machine accounts. When a user is referenced in SMB protocol communication, it can be in the form of a user name or a SID associated with the user object. When a group is referenced in SMB protocol communication, it can also be in the form of a group name or a SID associated with the group object. Finally, the same applies to machine accounts, but here Samba (and Windows) cheat and represent machine accounts as a special type of user object.

The fact that Samba sits in the middle between the SMB protocol communication and the system-level databases for users and groups means Samba has to maintain its own mapping between information relevant to the SMB protocol and the system-level references to users and groups. In Windows, the system-level interface and database for users, groups, and machine accounts is called the Security Account Manager, SAM. Samba implements an abstraction layer that allows it to handle SAM-like requests. In fact, it implements two of those layers, not one.

IDMAP layer

To map a security identifier to a POSIX identifier Samba uses the identity mapping (IDMAP) interface. The IDMAP interface is very simple; it has only three functions:

  • map SID to a POSIX ID
  • map POSIX ID to a SID
  • allocate POSIX ID for a SID

A mapping between a SID and a POSIX ID is handled by an IDMAP module. The SID name space is larger than the POSIX ID name spaces (combined for users and groups). The relative identifier part of the SID, the RID, is 32 bits long and identifies resources within a single domain, but there could be multiple domains involved. Samba potentially has to map all of those RIDs from all domains into a single 32-bit user and a single 32-bit group name space. Such a mapping is most likely a compression scheme with collision potential when done algorithmically. There could be limiting factors on which particular 32-bit values for user and group identifiers can be chosen. Finally, manual assignment is something that could also be done. Thus, there are many IDMAP modules in Samba to cater to different needs.

The default IDMAP module in Samba is idmap_tdb. This module stores SID to POSIX ID mappings in Samba’s native database format, the so-called ‘trivial database’, TDB. When Samba requests a lookup by SID, the idmap_tdb module may allocate a new POSIX ID if this SID is not mapped yet and there are enough POSIX IDs left in the range defined for the domain. As a result, when the range is big enough to cover all users and groups from the domain, all SIDs will be mapped. However, there is no guarantee that SIDs will be mapped to the same POSIX IDs on all Samba servers in the domain. The order in which SID mapping requests arrive influences which POSIX ID is allocated for a SID. If different Samba servers get requests in a different order, they will assign different POSIX IDs to the same SIDs. This is, of course, a problem when accessing files on a distributed file system.

To solve this problem, other IDMAP modules were created. The idmap_rid module algorithmically maps the relative identifier of the SID into the range associated with the domain. idmap_ad looks up POSIX IDs at a domain controller of the Active Directory domain. In a similar approach, idmap_ldap looks up POSIX IDs at an LDAP server defined in the configuration.
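For example, a domain member configured to use idmap_rid could have smb.conf lines like these (the domain name and ranges are only an illustration):

# Deterministic, algorithmic SID-to-POSIX-ID mapping for the EXAMPLE domain
idmap config EXAMPLE : backend = rid
idmap config EXAMPLE : range = 100000-999999
# Default backend for anything outside the known domains
idmap config * : backend = tdb
idmap config * : range = 10000-99999

Because the mapping is computed from the RID, every server using the same range assigns the same POSIX ID to a given SID.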

For configurations where users and groups are maintained in the system-level databases, Samba allows using the idmap_nss module. The module queries the system-level databases when it is known whether a SID maps to a user or to a group. When that is unknown, the IDMAP module queries the primary domain controller of the domain to convert the SID to a name. The primary domain controller should know all users and groups of the domain, so it should be able to answer what the SID maps to, or fail the request. In the latter case idmap_nss will also fail the request and Samba will consider the SID unmapped.

PASSDB layer

Users and groups need to be known to Samba before they can be used. The very same users and groups must be known to the operating system, because Samba processes change identity when performing operations as a particular user. The second layer Samba uses for identity mapping also allows managing users and groups: creating new ones, deleting existing ones, modifying information about them, and, in general, performing a lot of the actions Windows expects from a SAM interface.

A PASSDB module is an abstraction over the system-level database of users. It allows retrieving user information from an LDAP server or another storage scheme. The reason for this is, again, a lack of needed information in the system-level database format. Samba needs to know a lot more details about the user than the POSIX interfaces provide, and some of this information is unique to the SMB protocol. For example, for each user to be able to authenticate with a password, Samba needs to know the corresponding password hashes for NTLM negotiation. NT and LM hashes are not used by POSIX-compatible operating systems. Also, the interface for retrieving user information does not give access to actual passwords. In fact, in many environments applications have no access to password hashes, not even to passwords.

The default PASSDB module is tdbsam. Similar to idmap_tdb, it stores the additional information Samba needs to know about users in its own ‘trivial database’, TDB. tdbsam expects that if user information is stored in its database, the very same user exists in the system-level databases.

One can also force the IDMAP subsystem to look up SID to POSIX ID mappings in a PASSDB backend. For this, the idmap_passdb IDMAP module can be used. As a result, Samba will look up SIDs and POSIX IDs in the PASSDB module defined in smb.conf.

Group mapping

Groups are not stored in Samba databases. Instead, Samba allows mapping an existing POSIX group to a group in the domain. Because groups in the Windows world can have different scopes, Samba provides a mechanism to specify which POSIX group is mapped to which Windows group and what scope it should have. The mapping is managed with the help of Samba’s net utility: the net groupmap family includes commands to add, modify, and remove group mappings. It also allows associating (aliasing) certain SIDs with existing groups and listing the members of the groups.

For distributed environments it is convenient to store POSIX and SMB information about users and groups in the same place. For example, an LDAP server could be used to store and retrieve such information with the ldapsam PASSDB module and the idmap_ldap IDMAP module. However, group mapping would still be maintained locally with the net groupmap set of commands.

Practical considerations

Let’s put everything discussed above into practice. Consider a single Samba server which serves as a primary domain controller for its own domain. The server does not use LDAP or any other distributed storage for the POSIX and SMB information about its users and groups.

A minimal smb.conf configuration file for a primary domain controller is the following:

# Global parameters
[global]
    workgroup = SAMBA
    domain logons = Yes
    security = USER
    winbind offline logon = Yes
    winbind use default domain = Yes
    idmap config * : range = 1000-1000000
    idmap config * : backend = passdb
    passdb backend = tdbsam
    template homedir = /home/%U
    template shell = /bin/bash

[homes]
    comment = Home Directories
    browseable = No
    inherit acls = Yes
    read only = No
    valid users = %S %D%w%S

This configuration defines a single-domain SMB server with an IDMAP configuration that looks up SID to POSIX ID mappings in a PASSDB module. The PASSDB module is set to tdbsam, which is the default module.

As a result of this configuration, all non-POSIX attributes of users need to be stored in the PASSDB module. To modify them one can use the pdbedit tool. But before that we need to create the users and groups at the system level first.

SMB domains have a few ‘well-known’ groups: ‘Domain Users’, ‘Domain Admins’, ‘Domain Guests’. For ‘Domain Users’ and ‘Domain Guests’ we can reuse the POSIX groups ‘users’ and ‘nobody’; for ‘Domain Admins’ it is better to create a separate group, for example, ‘admins’.

On Fedora 24 there are existing POSIX groups ‘users’ and ‘nobody’:

# getent group users nobody
users:x:100:
nobody:x:99:

We can create the ‘admins’ group using the groupadd utility:

# groupadd admins

When the groups are ready, we can associate them with the well-known domain groups using net groupmap commands:

# net groupmap add ntgroup="Domain Admins" unixgroup=admins rid=512 type=d
Successfully added group Domain Admins to the mapping db as a domain group
# net groupmap add ntgroup="Domain Users"  unixgroup=users rid=513 
Successfully added group Domain Users to the mapping db as a domain group
# net groupmap add ntgroup="Domain Guests"  unixgroup=nobody rid=514
Successfully added group Domain Guests to the mapping db as a domain group

Finally, add users. Users should have their primary group associated with one of the groups mapped to the domain, because Samba needs to recognize them: there has to be a SID to POSIX ID mapping for the primary group. Let’s pretend that all our users are members of the ‘users’ group:

# useradd -m -g users -G admins administrator
# pdbedit -a -u administrator
new password:
retype new password:
Unix username:        administrator
NT username:          
Account Flags:        [U          ]
User SID:             S-1-5-21-1345368309-3761995768-4153620981-1008
Primary Group SID:    S-1-5-21-1345368309-3761995768-4153620981-513
Full Name:            
Home Directory:       \\smb\administrator
HomeDir Drive:        
Logon Script:         
Profile Path:         \\smb\administrator\profile
Domain:               SAMBA
Account desc:         
Workstations:         
Munged dial:          
Logon time:           0
Logoff time:          Wed, 06 Feb 2036 17:06:39 EET
Kickoff time:         Wed, 06 Feb 2036 17:06:39 EET
Password last set:    Mon, 19 Sep 2016 12:43:45 EEST
Password can change:  Mon, 19 Sep 2016 12:43:45 EEST
Password must change: never
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

In the screen output above ‘Primary Group SID’ was automatically inferred from the group mapping.

We can now ask winbindd to resolve user information based on the IDMAP and PASSDB databases:

# wbinfo -i administrator
administrator:*:1002:100::/home/administrator:/bin/bash
# wbinfo -n administrator
S-1-5-21-1345368309-3761995768-4153620981-1008 SID_USER (1)
# wbinfo -s S-1-5-21-1345368309-3761995768-4153620981-1008
SAMBA\administrator 1

September 19, 2016 09:52 AM

September 16, 2016

Rich Megginson

How to print field name with dash ("-") in a golang template

For example, let's say your OpenShift secret has been created like this:
$ oc secrets new logging-elasticsearch \
        key=$dir/keystore.jks truststore=$dir/truststore.jks \
        searchguard.key=$dir/searchguard_node_key \
        searchguard.truststore=$dir/searchguard_node_truststore \
        admin-key=$dir/${admin_user}.key admin-cert=$dir/${admin_user}.crt \
        admin-ca=$dir/ca.crt \
        admin.jks=$dir/${admin_user}.jks

Now you want to extract the CA cert:
$ oc get secret logging-elasticsearch --template='{{.data.admin-ca}}'
error: error parsing template {{.data.admin-ca}}, template: output:1: bad character U+002D '-'

It doesn't like the - character in the field name. You can work around this using index like so:
$ oc get secret logging-elasticsearch --template='{{index .data "admin-ca"}}' |base64 -d > ca
$ openssl x509 -in ca -text|more
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=logging-signer-20160915173520
        Validity
            Not Before: Sep 15 17:35:19 2016 GMT
            Not After : Sep 14 17:35:20 2021 GMT
        Subject: CN=logging-signer-20160915173520
        Subject Public Key Info:

September 16, 2016 01:57 AM

September 08, 2016

Red Hat Blog

PCI Series: Requirement 3 – Protect Stored Cardholder Data

Welcome to another post dedicated to the use of Identity Management (IdM) and related technologies in addressing the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement three (i.e. the requirement to protect stored cardholder data). In case you’re new to the series – the outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section three of the PCI DSS standard talks about storing cardholder data in a secure way. One of the technologies that can be used for secure storage of cardholder data is disk encryption with LUKS. But LUKS keys also need to be managed (as mentioned in requirement 3.6.3). One potential solution: IdM’s Vault – a secret store that can be used to escrow disk encryption passwords and implement policies and conditions for the recovery of such passwords (or keys). The keys and passwords kept in a Vault do not need to be in any way related to the keys and passwords used by the users that access the cardholder services; requirement 3.4.1 is thus fully met by this solution.
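A rough sketch of what escrowing a LUKS passphrase in an IdM Vault can look like on the command line (the vault name and file paths are examples only):

# Create a vault and archive the LUKS passphrase in it
$ ipa vault-add luks-db01 --desc="LUKS passphrase for db01" --type=standard
$ ipa vault-archive luks-db01 --in=/root/db01-luks-passphrase.txt
# An authorized administrator can later retrieve it when recovery is needed
$ ipa vault-retrieve luks-db01 --out=/root/db01-luks-passphrase.txt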

Requirement 3.5.3 creates a challenge by demanding separation of keys. This usually leads to the need to involve a user to unlock a key to start a process. For example, a system volume can be encrypted, but in case of a reboot an administrator has to come over and enter a password to continue the boot process. A new technology called Network Bound Disk Encryption addresses this problem by placing a special server on the network. While this technology is not currently included with Red Hat Enterprise Linux – here is a pointer to a demo.

Questions about how Identity Management relates to requirement three?  Reach out using the comments section (below).

by Dmitri Pal at September 08, 2016 07:47 PM

September 06, 2016

Red Hat Blog

PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters

This article is third in a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post covers the PCI DSS requirement related to not using vendor-supplied defaults for system passwords and other security parameters. The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

The second section of the PCI-DSS standard applies to defaults – especially passwords and other security parameters. The standard calls for the reset of passwords (etc.) for any new system before placing it on the network. IdM can help here. Leveraging IdM for centralized accounts and policy information allows for a simple automated provisioning of new systems with tightened configurations. In addition, Red Hat Satellite 6 and IdM play well together – allowing for automatic enrollment of Linux systems into an IdM managed identity fabric.

Requirements 2.2.3 and 2.3 (also covered in Appendix A2) call for the use of different security features like SSH or TLS. Both SSH and TLS require a solution that can provision and manage the associated keys. IdM comes to the rescue in both cases. For SSH, IdM can manage and deliver user and host public keys to the systems joined to the IdM domain. For TLS, both the client and the server need to have proper certificates and private keys. Where do they come from? How are they tracked and renewed? IdM, together with a client-side component called certmonger (integrated with the Linux operating system), allows for provisioning, tracking, and rotation of the certificates. These key management aspects of the environment are usually left to IT professionals to figure out. With IdM and certmonger, certificate management can really become an automated process, making the environment more secure and less susceptible to human error or misconfiguration.
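For instance, on an IdM-enrolled host, requesting a TLS certificate that certmonger then tracks and renews automatically can look roughly like this (the paths and service principal are examples):

# Request a certificate for the local web server from the IdM CA;
# certmonger keeps tracking it and renews it before expiration
$ ipa-getcert request \
    -f /etc/pki/tls/certs/www.example.com.crt \
    -k /etc/pki/tls/private/www.example.com.key \
    -K HTTP/www.example.com
# Check the tracking status of all certmonger-managed certificates
$ ipa-getcert list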

TLS is used in many places for many purposes, and while automation is great… it’s not enough. If certificates are issued by a single certificate authority for multiple environments and use cases, there is a chance that a certificate issued for one purpose will be misused to authenticate a different connection. This can be mitigated with fine-grained access control rules implemented inside each of the services that accepts TLS-based authentication, but this is error prone. Having a certificate authority (CA) per domain of use would be preferable. Unfortunately, creating such CAs is usually a hassle and a cost. This is why the IdM team is working on a solution called subCAs. With just a single command an administrator would be able to create a subCA dedicated to a particular domain of use. Then all the certificates issued by this subCA would be usable only within the context of that specific domain.
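As a sketch of what that single command looks like in recent FreeIPA releases (the sub-CA name and subject are hypothetical):

# Create a sub-CA dedicated to one domain of use, e.g. VPN certificates
$ ipa ca-add vpn --subject="CN=VPN sub-CA,O=EXAMPLE.COM" --desc="Sub-CA for VPN certificates"

Certificate profiles and CA ACLs can then restrict which principals may request certificates from that sub-CA.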

Finally, requirement 2.2.4 calls for configuring system security parameters. Once again IdM, with its central management of host-based access control rules, privilege escalation (sudo), and SELinux user mapping, provides relief and helps with such configuration.

Questions about how Identity Management relates to requirement two?  Reach out using the comments section (below).

by Dmitri Pal at September 06, 2016 06:40 PM

September 02, 2016

Florence Blanc-Renaud

Using a Dogtag instance as external CA for FreeIPA installation

A FreeIPA user recently had issues installing FreeIPA with an external CA. He was using the Dogtag certificate system as the external CA, and the FreeIPA installation was failing, complaining about the certificate provided by Dogtag.

So I decided to try the same deployment and share my findings in this post.

A little background…

A FreeIPA server can be configured to act as a Certificate Authority inside the FreeIPA IdM domain. It will then be able to create the certificates used by the LDAP server, by the Apache server serving the Web GUI, or by users and hosts.

This CA can be set up in different ways:

  • The CA is a root CA, meaning that its certificate is self-signed
  • or the CA is subordinate to an external, 3rd-party CA, meaning that its certificate is signed by the 3rd party CA.

There is a wide range of products that can be used as 3rd-party CAs, among which is the Dogtag certificate system. In this blog post, I will explain how Dogtag can provide the certificate for the IPA CA.

Instructions

The following instructions apply to Fedora 24. They will:

  1. run the 1st step of ipa-server-install to generate a CSR
  2. submit the CSR to Dogtag and have Dogtag issue a certificate for FreeIPA server
  3. run the 2nd step of ipa-server-install with the certificate obtained in step 2.

For instructions to setup the Dogtag server, you can refer to this post: Dogtag installation.


FreeIPA server installation – step 1

In order to install FreeIPA with an externally-signed CA, we must use the --external-ca option of ipa-server-install. The installation is then a multi-step install, where:

  • ipa-server-install produces a CSR
  • we need to submit this CSR to the external CA, that will in return provide a certificate and certificate chain
  • we need to run ipa-server-install a 2nd time, with different options and providing the certificates obtained in the previous step.

So let’s run the first step of ipa-server-install:

root@ipaserver$ ipa-server-install --setup-dns \
 --auto-forwarders \
 --auto-reverse \
 -n ipadomain.com \
 -r IPADOMAIN.COM \
 -p Secret123 -a Secret123 \
 --external-ca \
 -U
[...]
Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes 30 seconds
 [1/8]: creating certificate server user
 [2/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as:
/sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate


Generation of the certificate using Dogtag

We then need to copy this CSR to the Dogtag instance, submit it, approve it, and export the certificate.

The submission is an important step as it allows us to specify a profile. Basically, by picking the caCACert profile we signal our intent to use the produced certificate as a Certificate Authority in our FreeIPA deployment, and the resulting certificate will contain the required extensions:

root@dogtag$ pki ca-cert-request-submit --profile caCACert --request-type pkcs10 --csr-file ipa.csr
-----------------------------
Submitted certificate request
-----------------------------
 Request ID: 7
 Type: enrollment
 Request Status: pending
 Operation Result: success

Note the Request ID as we will need it in order to approve the submission:

root@dogtag$ pki -c Secret123 -d /root/.dogtag/nssdb/ -n "PKI Administrator for example.com" cert-request-review 7 --action approve
------------------------------
Approved certificate request 7
------------------------------
 Request ID: 7
 Type: enrollment
 Request Status: complete
 Operation Result: success
 Certificate ID: 0x7

Note the Certificate ID as we will need it to export the certificate into a file ipa.cert:

root@dogtag$ pki -c Secret123 -d /root/.dogtag/nssdb/ -n "PKI Administrator for example.com" cert-show 7 --encoded --output ipa.cert

We will also need the dogtagca certificate chain:

root@dogtag$ pki ca-cert-show 1 --encoded --output dogtagca.cert

At this point, we have a new certificate and chain (ipa.cert and dogtagca.cert) that we need to copy to the FreeIPA server. We can then resume the FreeIPA installation.

FreeIPA server installation – step 2

In order to resume FreeIPA installation, we will follow the instructions provided in step 1:

root@ipaserver$ /sbin/ipa-server-install --external-cert-file=ipa.cert --external-cert-file=dogtagca.cert


The installation will resume and use ipa.cert for the IPA Certificate Authority. That’s it!


by floblanc at September 02, 2016 12:29 PM

September 01, 2016

Ben Lipton

Thinking about templating, part 2: Handling missing data

Contents

Introduction

This post is a followup to Thinking about templating for automatic CSR generation. In it we will look at a requirement of the templating system that was not discussed in that post, and see how it is handled by the implementation.

Sometimes you might want to generate a certificate for a principal that doesn’t have all the fields referenced in the profile. This could be due to an error (e.g. used the “user” profile for a “service” principal) or just the way the data is (e.g. the principal has no email address, or the requesting user has no access to that field). We want to handle this cleanly by omitting the sections of config that have missing data.

Simple approach: data rules only

We can pretty simply update our data rules to do this partly right, like in this example:

{% if subject.fqdn.0 %}DNS = {{subject.fqdn.0}}{% endif %}

This adds some extra work for administrators creating new rules, and is another step that someone could forget, but it could be manageable.

However, if none of the data rules for a field have any data, we need to avoid rendering the syntax rule for that field as well; otherwise we get weird empty sections that openssl doesn’t like. Modifying the rule templates can’t solve this problem, because the syntax rule intentionally doesn’t know what data it may depend on for different profiles; that all depends on the data rules.

Current solution: See if something renders

One way to make this work is to build syntax rules so they use jinja2 control tags to compute the output of any data rules first, then render their own text only if some data rule rendered successfully. In its raw form, this gets ugly (see [1] for explanation):

{% raw %}{% set contents %}{% endraw %}{{ datarules|join('\n') }}
{% raw %}{% endset %}{% if contents %}{% endraw %}
subjectAltName = @{% call openssl.section() %}{% raw %}{{ contents }}
{% endraw %}{% endcall %}{% raw %}{% endif %}{% endraw %}

For comparison, that rule used to look like this:

subjectAltName = @{% call openssl.section() %}
{{ datarules|join('\n') }}{% endcall %}

I think this might be a heavy burden for administrators who want to write new syntax rules.

However, we can introduce some macros to make this better. One macro, syntaxrule, computes the result of rendering the data rules it contains, but does not output these results unless a flag is set to true. That flag is controlled by another macro, datarule, which updates the flag to true when the enclosed data rule renders successfully. We can apply a similar technique to the fields in the data rules, rendering the rule only if all fields are present.

Now, the framework can automatically wrap all syntax rules in {% call ipa.syntaxrule() %}...{% endcall %} and all data rules in {% call ipa.datarule() %}...{% endcall %}. Writers of data rules must wrap all field references in ipa.datafield() to mark values that could be missing, such as {{ ipa.datafield(subject.mail.0) }}, but no other modifications to the rules are necessary.

This is the way rule suppression is currently implemented.

Issues

This system seems to be working fairly well, but it has a few drawbacks.

First, the macros to do this are a little arcane, as can be seen in [2], and can’t be commented very well because any whitespace becomes part of the macro output. They rely on global variables within the template, but this should be ok as long as we always nest datafields within datarules within syntaxrules, and never nest more than once.

Second, syntax rules with multiple assigned data rules present a problem. Generally we will want the results of those rules to be presented in the output with some character in between, e.g. {{datarules|join(',')}} for certutil. However, when we finally render this template with data, what if one of our datarules renders while another does not due to lack of data? The above rule segment would produce a template like:

{% call ipa.datarule() %}email:{{ipa.datafield(subject.mail.0)|quote}}{% endcall %},{% call ipa.datarule() %}uri:{{ipa.datafield(subject.inetuserhttpurl.0)|quote}}{% endcall %}

If this subject has no inetuserhttpurl field, the second ipa.datarule will be suppressed, leaving an empty string. But, the comma will still be there! This creates odd-looking output like the following:

--extSAN email:myuser@example.com,

Fortunately, certutil seems not to mind these extra commas, and openssl is also ok with the extra blank lines that arise the same way, so this isn’t breaking anything right now. But, it’s worrying not to be able to do much to improve this formatting.

Third, there is an unfortunate interaction between the macros created for this technique, the above issue, and the macro that produces openssl sections. That macro [3] also relies on side effects to do its job - the contents of the section are appended to a global list of sections, while only the section name is returned at the point where the macro is called. Since the technique discussed in this section evaluates each data rule to see if it produces any data, if the rule includes an openssl section, a section is stored on rule evaluation even if it has no data. Again, openssl is ok with the extra sections as long as they are not referenced within the config file, but the result is ugly.

Alternative: Declare data dependencies

Another approach to suppressing syntax rules when none of their data rules are going to render is to take the “simple approach” of listing the required data items in an {% if %} statement one step further. We could amend the schema for data rules to include a record of the included data item, so that each rule would know its dependencies. Data rules could then be automatically wrapped so they wouldn’t be rendered if this item was unavailable. Syntax rules could be treated similarly; by querying the dependencies of all the data rules it was configured to include, the whole syntax rule could be suppressed if none of those items were available.

In this scheme, the template produced would look like (linebreaks and indentation added):

{% if subject.mail.0 or subject.inethttpurl.0 %}--extSAN
  {% if subject.mail.0 %}email:{{subject.mail.0|quote}}{% endif %},
  {% if subject.inethttpurl.0 %}uri:{{subject.inethttpurl.0|quote}}{% endif %}
{% endif %}

This takes care of the third problem of the previous solution, because data rules with missing data will never be evaluated, meaning that superfluous openssl sections will not be added. However, the second problem still persists, because the commas and newlines are part of the syntax rule (which is rendered) not the data rules (some of which aren’t rendered).

Suppressing excess commas and newlines

The challenge with preventing these extra commas and newlines is that they must be evaluated during the final render, when the subject data is available, not when the syntax rules are evaluated to build the final template. Using the join filter in the syntax rule is insufficient, because it is evaluated before that data is available. What we really want is to pass the output of all the data rules to the join filter, at final render time.

This is not a polished solution, but an image of what this could look like is for the syntax rule to be:

--extSAN {{datarules|filternonempty("join(',')")}}

Which would create a final template like:

{% filternonempty join(',') %}
<data rule 1>
{% filterpart %}
<data rule 2>
{% endfilternonempty %}

And the filternonempty tag would be implemented so the effect of this would be approximately:

{% set parts = [] %}
{% set part %}
<data rule 1>
{% endset %}
{% if part %}{% do parts.append(part) %}{% endif %}
{% set part %}
<data rule 2>
{% endset %}
{% if part %}{% do parts.append(part) %}{% endif %}
{{ parts|join(',') }}

I think this is doable, but I don’t have a prototype yet.

Conclusions

The current implementation is working ok, but the “Declaring data dependencies” solution is also appealing. Recording in data rules what data they depend on is only slightly more involved than wrapping that reference in ipa.datafield(), and could also be useful for other purposes. Plus, it would get rid of the empty sections in openssl configs, as well as some of the complex macros.

The extra templating and new tags required to get rid of extra commas and newlines don’t seem worth it to me, unless we discover a version of openssl or certutil that can’t consume the current output.

Finally, I think the number of hoops that need to be jumped through to fine-tune the output format hints at this “template interpolation” approach being less successful than originally expected. While it was expected that inserting data rule templates into syntax rule templates and rendering the whole thing would produce similar results to rendering data rules first and inserting the output into syntax rules, that is not turning out to be the case. It might be wise to reconsider the simpler option - it may be easier to implement reliable jinja2 template markup escaping than to build templates smart enough to handle any combination of data that’s available.

Appendix

[1] In case you’re having trouble parsing this mess, when rendered to insert data rules, and with whitespace added for readability, it turns into this:

{% set contents %}
    {% if subject.mail.0 %}email = {{subject.mail.0}}{% endif %} <-- this is the data rule
{% endset %}
{% if contents %}
    subjectAltName = @{% call openssl.section() %}{{ contents }}{% endcall %}
{% endif %}

[2]

{% set rendersyntax = {} %}

{% set renderdata = {} %}

{# Wrapper for syntax rules. We render the contents of the rule into a
variable, so that if we find that none of the contained data rules rendered we
can suppress the whole syntax rule. That is, a syntax rule is rendered either
if no data rules are specified (unusual) or if at least one of the data rules
rendered successfully. #}
{% macro syntaxrule() -%}
{% do rendersyntax.update(none=true, any=false) -%}
{% set contents -%}
{{ caller() -}}
{% endset -%}
{% if rendersyntax['none'] or rendersyntax['any'] -%}
{{ contents -}}
{% endif -%}
{% endmacro %}

{# Wrapper for data rules. A data rule is rendered only when all of the data
fields it contains have data available. #}
{% macro datarule() -%}
{% do rendersyntax.update(none=false) -%}
{% do renderdata.update(all=true) -%}
{% set contents -%}
{{ caller() -}}
{% endset -%}
{% if renderdata['all'] -%}
{% do rendersyntax.update(any=true) -%}
{{ contents -}}
{% endif -%}
{% endmacro %}

{# Wrapper for fields in data rules. If any value wrapped by this macro
produces an empty string, the entire data rule will be suppressed. #}
{% macro datafield(value) -%}
{% if value -%}
{{ value -}}
{% else -%}
{% do renderdata.update(all=false) -%}
{% endif -%}
{% endmacro %}

[3]

{# List containing rendered sections to be included at end #}
{% set openssl_sections = [] %}

{#
List containing one entry for each section name allocated. Because of
scoping rules, we need to use a list so that it can be a "per-render global"
that gets updated in place. Real globals are shared by all templates with the
same environment, and variables defined in the macro don't persist after the
macro invocation ends.
#}
{% set openssl_section_num = [] %}

{% macro section() -%}
{% set name -%}
sec{{ openssl_section_num|length -}}
{% endset -%}
{% do openssl_section_num.append('') -%}
{% set contents %}{{ caller() }}{% endset -%}
{% if contents -%}
{% set sectiondata = formatsection(name, contents) -%}
{% do openssl_sections.append(sectiondata) -%}
{% endif -%}
{{ name -}}
{% endmacro %}

{% macro formatsection(name, contents) -%}
[ {{ name }} ]
{{ contents -}}
{% endmacro %}

September 01, 2016 12:00 AM

August 31, 2016

Red Hat Blog

PCI Series: Requirement 1 – Install and Maintain a Firewall Configuration to Protect Cardholder Data

This article is one of a series of blog posts dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement one – install and maintain a firewall configuration to protect cardholder data. The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

The first requirement of the PCI standard talks about firewalls and networking. While Red Hat’s Identity Management solution is not directly related to setting up networks and firewall rules, there are several aspects of IdM that need to be mentioned in this context. The first is that IdM servers can be deployed inside or outside a firewall. In either case IdM servers need to communicate with clients and with each other using the LDAP and Kerberos protocols.

IdM servers that are deployed inside the firewall create challenges for authenticating clients that are located outside the firewall on a separate network or in a DMZ. The IdM solution leverages Kerberos heavily. The main reason for this is that the Kerberos protocol ensures that end user passwords are not sent “over the wire”, thereby reducing the risk of password interception or leakage. However, the use of Kerberos creates a challenge for administrators, who traditionally had to open a Kerberos port in the firewall to allow the authentication to go through. This, in many cases, is a non-starter. The IdM version that comes with Red Hat Enterprise Linux 7.2 includes a feature called KDC proxy. Several years ago Microsoft authored a standard that allows for proxying the Kerberos protocol over HTTPS. KDC proxy is the open source implementation of this protocol. This solution avoids the need to open a Kerberos port in the firewall and leads to a tighter firewall configuration that is in the spirit of the PCI DSS standard.
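On a client outside the firewall, pointing Kerberos at the proxy is simply a matter of listing an HTTPS URL instead of a plain KDC address in krb5.conf. A minimal sketch (the server name is an example, and the /KdcProxy path assumes the default FreeIPA KDC proxy endpoint):

# /etc/krb5.conf (excerpt)
[realms]
  EXAMPLE.COM = {
    # Kerberos traffic is wrapped in HTTPS (MS-KKDCP) instead of using port 88
    kdc = https://ipa.example.com/KdcProxy
    kpasswd_server = https://ipa.example.com/KdcProxy
  }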

The solution still requires opening an LDAP port so that clients can download identity information. For purposes of identity lookup the IdM server in the DMZ can act as a proxy between clients in the DMZ and Active Directory (AD) servers behind the firewall. The firewall rule in this case can be set to allow connection only from the IdM server host in the DMZ to AD inside the firewall thus significantly limiting the attack surface. Placing an IdM server in the DMZ to serve clients there enables a more secure integration of those systems into an AD fabric.

The other aspect that is worth mentioning is IPSec VPNs. The IPSec VPN specification has been extended to allow for Kerberos authentication, and an implementation of this in libreswan (an IPSec VPN implementation) is underway. This enhancement, combined with placing IdM outside the firewall, will allow a VPN user to authenticate against an IdM server first (using, for example, OTP authentication over Kerberos), then acquire proof of authentication (a ticket), and finally connect to the VPN server without being prompted. Such an approach, when integrated with desktop login, would allow for signing into the network and logging into the system at the same time – eliminating multiple steps and prompts.

Questions about how Identity Management relates to requirement one?  Reach out using the comments section (below).

by Dmitri Pal at August 31, 2016 08:31 PM

August 30, 2016

Red Hat Blog

Identity Management and Related Technologies and their Applicability to PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is not new. It has existed for several years and provides security guidelines and best practices for the storage and processing of personal cardholder data. This article takes a look at PCI DSS 3.2 (published in April of 2016) and shows how Identity Management in Red Hat Enterprise Linux (IdM) and related technologies can help customers address PCI DSS requirements and achieve and maintain compliance with the standard. If you need a copy of the PCI DSS document it can be acquired from the document library at the following site: www.pcisecuritystandards.org

In October of 2015 Red Hat published a paper that gives an overview of the PCI DSS standard and shows how Red Hat Satellite and other parts of the Red Hat portfolio can help customers to address their PCI compliance challenges. In this post I would like to expand on this paper and drill down into more detail about the Identity Management solution Red Hat provides and how it can be leveraged to achieve PCI DSS compliance in conjunction with other technologies as covered in the paper.

Note that this post assumes familiarity with the Red Hat IdM solution. If you’re not “up-to-speed” – please review our Identity Management documentation. Also, my previous blog posts provide a good foundation for the problem space and understanding of the solution. Identity Management in Red Hat Enterprise Linux is an open source solution based on the FreeIPA community project. There is a public instance of the FreeIPA server running in the cloud that you can connect to and explore using the following link: http://www.freeipa.org/page/Demo

Since the standard is quite big I will break this article into a series of individual posts – addressing one section at a time. The following table will help in terms of mapping each section of the PCI document to each follow-up post.


Requirement Number Requirement Description Link to Blog Post / Reference
1 Install and maintain a firewall configuration to protect cardholder data. PCI Series: Requirement 1 – Install and Maintain a Firewall Configuration to Protect Cardholder Data
2 Do not use vendor-supplied defaults for system passwords and other security parameters. PCI Series: Requirement 2 – Do Not Use Vendor-Supplied Defaults for System Passwords and Other Security Parameters
3 Protect stored cardholder data. PCI Series: Requirement 3 – Protect Stored Cardholder Data
4 Encrypt transmission of cardholder data across open, public networks. The same approach as discussed for requirement number two (2) can be employed to meet requirements in this part of the PCI DSS standard.
5 Protect all systems against malware and regularly update anti-virus software or programs. Red Hat Identity Management is not directly related to this section. Reference / review section five (5) of the PCI DSS standard.
6 Develop and maintain secure systems and applications. PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications
7 Restrict access to cardholder data by business need to know. PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know
8 Identify and authenticate access to system components. PCI Series: Requirement 8 – Identify and Authenticate Access to System Components
9 Restrict physical access to cardholder data. Red Hat Identity Management is not directly related to this section. Reference / review section nine (9) of the PCI DSS standard.
10 Track and monitor all access to network resources and cardholder data. PCI Series: Requirement 10 – Track and Monitor All Access to Network Resources and Cardholder Data
11 Regularly test security systems and processes. See the note below covering requirements 11 and 12.
12 Maintain a policy that addresses information security for all personnel. See the note below covering requirements 11 and 12.
Requirements 11 and 12 talk about testing of the security controls. This includes scanning and monitoring as well as best practices around the security policy itself that organizations should create and maintain. Red Hat Identity Management is not directly related to these sections.


It’s worth mentioning that while this series is focused on IdM and its ecosystem – there are other parts of the Red Hat portfolio that would allow for addressing some of the PCI DSS requirements that we did not drill down into here. For example, the OpenSCAP scanner that’s integrated into Red Hat Satellite 6 allows for the regular detection of unaddressed CVEs and misconfigurations according to a defined policy. To get more information about these technologies and how they help to address PCI DSS requirements please see the Achieving and Maintaining PCI DSS Compliance with Red Hat paper on the Red Hat site.

In closing – stay tuned for my future posts on PCI DSS.  If they’re already live – you’ll see active links in the table (above).  General questions about PCI DSS and IdM?  Feel free to reach out using the comments section (below).

by Dmitri Pal at August 30, 2016 07:17 PM

Alexander Bokovoy

Creating permissions in FreeIPA

FreeIPA has a quite flexible system for defining access rights to any resource in the LDAP store. The system consists of three different parts:

  • a permission object
  • a privilege object, and
  • a role object.

A permission object specifies the target of the access grant: which attributes of which objects in LDAP are subject to the checks.

A privilege combines several permissions into a logical task. A role defines who has access to which privileges.

The example below is a somewhat complex use of the permission system that allows a group of administrators to manage specific hosts. We want administrators in the group ‘my-admins’ to manage hosts in ‘my-hostgroup’, but otherwise have no other privileges.

Let’s start with a host group ‘my-hostgroup’:

# ipa hostgroup-add my-hostgroup
-----------------------------
Added hostgroup "my-hostgroup"
-----------------------------
  Host-group: my-hostgroup

And with a group ‘my-admins’:

# ipa group-add my-admins
-----------------------
Added group "my-admins"
-----------------------
  Group name: my-admins
  GID: 903200040

A member of ‘my-admins’ should be able to edit all attributes of the hosts in the host group ‘my-hostgroup’.

To manage permissions, use the ipa permission family of commands. Start by creating a basic permission that applies to hosts:

# ipa permission-add manage-my-hostgroup --right=all --bindtype=permission --type=host
--------------------------------------
Added permission "manage-my-hostgroup"
--------------------------------------
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Type: host
  Permission flags: V2, SYSTEM

A permission automatically generates an access control instruction (ACI) in LDAP. To check all low-level details of the permission, use the --all and --raw options:

# ipa permission-show --all --raw manage-my-hostgroup
  dn: cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test
  cn: manage-my-hostgroup
  ipapermright: all
  ipapermbindruletype: permission
  ipapermlocation: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  ipapermtargetfilter: (objectclass=ipahost)
  ipapermissiontype: V2
  ipapermissiontype: SYSTEM
  aci: (targetfilter = "(objectclass=ipahost)")
       (version 3.0; acl "permission:manage-my-hostgroup";
                     allow (all)
                     groupdn = "ldap:///cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test";)
  objectclass: ipapermission
  objectclass: top
  objectclass: groupofnames
  objectclass: ipapermissionv2

As you can see, it applies to the hosts subtree, cn=computers,cn=accounts,$SUFFIX, and the target filter is set to (objectclass=ipahost), so it would apply to any host. To further limit the permission, you have to add more target filters.

To define the raw target filter, we need to know the DN of the host group that will be our target limit:

# ipa hostgroup-show --raw --all my-hostgroup
  dn: cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test
  cn: my-hostgroup
  ipaUniqueID: 6d8c72f2-6e6d-11e6-b9e4-525400bf08fe
  mepManagedEntry: cn=my-hostgroup,cn=ng,cn=alt,dc=ipa,dc=ad,dc=test
  objectClass: ipahostgroup
  objectClass: ipaobject
  objectClass: nestedGroup
  objectClass: groupOfNames
  objectClass: top
  objectClass: mepOriginEntry

Using the DN of my-hostgroup, we can now add a filter to the permission:

# ipa permission-mod manage-my-hostgroup --filter '(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)'
-----------------------------------------
Modified permission "manage-my-hostgroup"
-----------------------------------------
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM

Take a look at the permission in detail:

# ipa permission-show --all --raw manage-my-hostgroup
  dn: cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test
  cn: manage-my-hostgroup
  ipapermright: all
  ipapermbindruletype: permission
  ipapermlocation: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  ipapermtargetfilter: (objectclass=ipahost)
  ipapermtargetfilter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  ipapermissiontype: V2
  ipapermissiontype: SYSTEM
  aci: (targetfilter = "(&(memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)(objectclass=ipahost))")
       (version 3.0;acl "permission:manage-my-hostgroup";
        allow (all) groupdn = "ldap:///cn=manage-my-hostgroup,cn=permissions,cn=pbac,dc=ipa,dc=ad,dc=test";)
  objectclass: ipapermission
  objectclass: top
  objectclass: groupofnames
  objectclass: ipapermissionv2

Our ACI says: “Allow members of the permission group manage-my-hostgroup to make any changes to all objects of objectclass ipahost that belong to the host group my-hostgroup.”

Now you can add the manage-my-hostgroup permission to a new privilege and add that privilege to a role, and then assign users of the group my-admins to that role. Those users will be able to manage hosts targeted by the permission.

Start with a privilege:

# ipa privilege-add 'manage-hostgroup-my-hostgroup'
-----------------------------------------------
Added privilege "manage-hostgroup-my-hostgroup"
-----------------------------------------------
  Privilege name: manage-hostgroup-my-hostgroup

# ipa privilege-add-permission 'manage-hostgroup-my-hostgroup'
[permission]: manage-my-hostgroup
  Privilege name: manage-hostgroup-my-hostgroup
  Permissions: manage-my-hostgroup
-----------------------------
Number of permissions added 1
-----------------------------

Finally, create a role and add a privilege to it, and then add members that could use the privilege:

# ipa role-add role-manage-hostgroup-my-hostgroup
-----------------------------------------------
Added role "role-manage-hostgroup-my-hostgroup"
-----------------------------------------------
  Role name: role-manage-hostgroup-my-hostgroup

# ipa role-add-privilege role-manage-hostgroup-my-hostgroup
[privilege]: manage-hostgroup-my-hostgroup
  Role name: role-manage-hostgroup-my-hostgroup
  Privileges: manage-hostgroup-my-hostgroup
----------------------------
Number of privileges added 1
----------------------------

# ipa role-add-member role-manage-hostgroup-my-hostgroup --groups=my-admins
  Role name: role-manage-hostgroup-my-hostgroup
  Member groups: my-admins
  Privileges: manage-hostgroup-my-hostgroup
-------------------------
Number of members added 1
-------------------------

If we look at the original permission, we can see it is now an indirect member of a role:

# ipa permission-show manage-my-hostgroup
  Permission name: manage-my-hostgroup
  Granted rights: all
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM
  Granted to Privilege: manage-hostgroup-my-hostgroup
  Indirect Member of roles: role-manage-hostgroup-my-hostgroup

When a user is added to the my-admins group, they automatically assume the role that allows managing the host group:

# ipa user-add hadmin
First name: Joe
Last name: Doe
-------------------
Added user "hadmin"
-------------------
  User login: hadmin
  First name: Joe
  Last name: Doe
  Full name: Joe Doe
  Display name: Joe Doe
  Initials: JD
  Home directory: /home/hadmin
  GECOS: Joe Doe
  Login shell: /bin/sh
  Principal name: hadmin@IPA.AD.TEST
  Principal alias: hadmin@IPA.AD.TEST
  Email address: hadmin@ipa.ad.test
  UID: 903200041
  GID: 903200041
  Password: False
  Member of groups: ipausers
  Kerberos keys available: False

# ipa group-add-member my-admins --users=hadmin
  Group name: my-admins
  GID: 903200040
  Member users: hadmin
  Roles: role-manage-hostgroup-my-hostgroup
-------------------------
Number of members added 1
-------------------------

In a real-life scenario we would probably want to tune our permission a bit more. For example, we definitely don’t want to allow full access to all attributes of the host: if users can write to the objectclass attribute, they can turn that host into anything else in LDAP. But before tuning it, we need to see if our permission actually works:

# kinit hadmin
Password for hadmin@IPA.AD.TEST:

# ipa host-mod my-host --random
ipa: ERROR: Insufficient access: Insufficient 'write' privilege to the 'userPassword' 
            attribute of entry 'fqdn=my-host.ipa.ad.test,cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test'.

Oops, it does not work: we cannot write to the userPassword attribute of the host. What is wrong? To answer this question we need to look at the documentation of the LDAP server FreeIPA builds upon, 389-ds. The Red Hat Directory Server Administration Guide says the following in the section “Targeting Entries or Attributes Using LDAP Filters”:

Note Although using LDAP filters can be useful when you are targeting entries and attributes that are spread across the directory, the results are sometimes unpredictable because filters do not directly name the object for which you are managing access. The set of entries targeted by a filtered ACI is likely to change as attributes are added or deleted. Therefore, if you use LDAP filters in ACIs, you should verify that they target the correct entries and attributes by using the same filter in an ldapsearch operation.

The documentation doesn’t say this explicitly, but when targetattr is missing, the default set of target attributes matched for modifications under a target filter is none, not *. This is done to deny modrdn (entry rename) by default.

To allow modification of the host entries, we need to list the attributes that our host group admins may modify. The list below is an example only: it allows setting metadata about the host, changing the one-time enrollment password and the assigned ID view, and adding certificates and SSH public keys. One needs to carefully review which attributes should be allowed for modification.

# kinit admin
Password for admin@IPA.AD.TEST: 

# ipa permission-mod manage-my-hostgroup --attrs={'userPassword','description','l',\
               'nshardwareplatform','nsosversion','usercertificate','userclass',\
               'macaddress','ipaassignedidview','ipasshpubkey'}
-----------------------------------------
Modified permission "manage-my-hostgroup"
-----------------------------------------
  Permission name: manage-my-hostgroup
  Granted rights: all
  Effective attributes: description, ipaassignedidview, ipakrbauthzdata, ipasshpubkey,
                        l, macaddress, nshardwareplatform, nsosversion, userPassword,
                        usercertificate, userclass
  Bind rule type: permission
  Subtree: cn=computers,cn=accounts,dc=ipa,dc=ad,dc=test
  Extra target filter: (memberOf=cn=my-hostgroup,cn=hostgroups,cn=accounts,dc=ipa,dc=ad,dc=test)
  Type: host
  Permission flags: V2, SYSTEM
  Granted to Privilege: manage-hostgroup-my-hostgroup
  Indirect Member of roles: role-manage-hostgroup-my-hostgroup

With these changes, our admin can now set a random one-time password:

# kinit hadmin
Password for hadmin@IPA.AD.TEST:

# ipa host-mod my-host --random
-----------------------
Modified host "my-host"
-----------------------
  Host name: my-host.ipa.ad.test
  Random password: 5Krkbj_eW7UR@SUxj0lx22
  Principal name: host/my-host.ipa.ad.test@IPA.AD.TEST
  Principal alias: host/my-host.ipa.ad.test@IPA.AD.TEST
  Password: True
  Member of host-groups: my-hostgroup
  Keytab: False
  Managed by: my-host.ipa.ad.test

However, this is not all. The permission we created above doesn’t answer a very important question: how does the host my-host end up in the host group in the first place? We surely want to be able to add and remove hosts from the host group. But if we create a permission that allows per-hostgroup admins to add and remove members of the host group at will, they could take over any host simply by adding it to the host group they manage.

The easiest way to solve this problem, no surprise, is organizational: do not give host group admins the right to add hosts to the host group or remove them from it; only allow them to manage what is already in the host group.

A separation of rights requires creating a separate permission with ‘add’/‘del’ rights against the ‘member’ attribute, which would allow adding and removing hosts. That’s easy, but it would not allow us to limit which hosts could be added to or removed from the host group.
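For illustration, here is a minimal sketch of such a separate permission (the permission name is made up; note that in an IPA permission, adding and removing values of an attribute is expressed as the write right). It grants write access to the member attribute of this one host group entry, but, as noted above, it cannot restrict which hosts get added:

# ipa permission-add edit-members-my-hostgroup \
               --right=write \
               --bindtype=permission \
               --type=hostgroup \
               --attrs=member \
               --filter='(cn=my-hostgroup)'

The permission would then be attached to a privilege and a role in exactly the same way as manage-my-hostgroup above.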

Unfortunately, to make that possible, permission-add/permission-mod would have to be extended to allow specifying target attribute values, as described in the RHDS Administration Guide.

Even then, to define something like this we would need a specific naming scheme for hosts, so that a pattern could be specified as the ‘member’ attribute value.

An alternative is to use automembership rules, defined with the ipa automember family of commands. This might work with predictable host names, but it would probably be hard to implement when host names come from an existing cloud provider where you don’t have control over the undercloud.
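As a rough sketch (the regular expression is an assumption, purely for illustration), such an automembership rule could look like this:

# ipa automember-add my-hostgroup --type=hostgroup
# ipa automember-add-condition my-hostgroup --type=hostgroup --key=fqdn \
               --inclusive-regex='^web-.*\.ipa\.ad\.test$'
# ipa automember-rebuild --type=hostgroup

New hosts matching the condition are placed into my-hostgroup at enrollment time; the rebuild command applies the rule to hosts that already exist.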

This is why I’m saying it is an organizational issue, not really a technical one.

August 30, 2016 05:00 AM

August 12, 2016

Fraser Tweedale

Smart card login with YubiKey NEO

In this post I give an overview of smart cards and their potential advantages, and share my adventures in using a Yubico YubiKey NEO device for smart card authentication with FreeIPA and SSSD.

Smart card overview

Smart cards with cryptographic processors and secure key storage (private key generated on-device and cannot be extracted) are an increasingly popular technology for secure system and service login, as well as for signing and encryption applications (e.g. code signing, OpenPGP). They may offer a security advantage over traditional passwords because private key operations typically require the user to enter a PIN. Therefore the smart card is two factors in one: both something I have and something I know.

The inability to extract the private key from a smart card also provides an advantage over software HOTP/TOTP tokens which, in the absense of other security measures such as encrypted filesystem on the mobile device, allow an attacker to extract the OTP seed. And because public key cryptography is used, there is no OTP seed or password hash sitting on a server, waiting to be exfiltrated and subjected to offline attacks.

For authentication applications, a smart card carries an X.509 certificate alongside a private key. A login application would read the certificate from the card and validate it against trusted CAs (e.g. a company’s CA for issuing smart cards). Typically an OCSP or CRL check would also be performed. The login application then challenges the card to sign a nonce, and validates the signature with the public key from the certificate. A valid signature attests that the bearer of the smart card is indeed the subject of the certificate. Finally, the certificate is then mapped to a user either by looking for an exact certificate match or by extracting information about the user from the certificate.
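As a concrete, simplified sketch of that challenge step, here is roughly what it looks like with the OpenSC pkcs11-tool and OpenSSL, assuming the certificate read from the card has been saved as card-cert.pem (file names are illustrative, and PIN/key-selection options are omitted):

% head -c 32 /dev/urandom > nonce.bin
% pkcs11-tool --sign --mechanism SHA256-RSA-PKCS \
    --input-file nonce.bin --output-file nonce.sig
% openssl x509 -in card-cert.pem -pubkey -noout > card-pub.pem
% openssl dgst -sha256 -verify card-pub.pem -signature nonce.sig nonce.bin

A successful run prints Verified OK, which is essentially the check a login application performs programmatically.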

Test environment

In my smart card investigations I had a FreeIPA server with a single Fedora 24 desktop host enrolled. alice was the user I tested with. To begin with, she had no certificates and used her password to log in.

I was doing all of my testing on virtual machines, so I had to enable USB passthrough for the YubiKey device. This is straightforward but you have to ensure the IOMMU is enabled in both BIOS and kernel (for Intel CPUs add intel_iommu=on to the kernel command line in GRUB).
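On a Fedora host that can be done roughly like this (a sketch; adjust paths for your distribution):

# sed -i 's/^GRUB_CMDLINE_LINUX="/&intel_iommu=on /' /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot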

In virt-manager, after you have created the VM (it doesn’t need to be running) you can Add Hardware in the Details view, then choose the YubiKey NEO device. There are no doubt virsh incantations or other ways to establish the passthrough.

Finally, on the host I stopped the pcscd smart card daemon to prevent it from interfering with passthrough:

# systemctl stop pcscd.service pcscd.socket

Provisioning the YubiKey

For general smart card provisioning steps, I recommend Nathan Kinder’s post on the topic. But the YubiKey NEO is special with its own steps to follow! First install the ykpers and yubico-piv-tool packages:

sudo dnf install -y ykpers yubico-piv-tool

If we run yubico-piv-tool to find out the version of the PIV applet, we run into a problem because a new YubiKey comes configured in OTP mode:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Failed to connect to reader.

The YubiKey NEO supports a variety of operation modes, including hybrid modes:

0    OTP device only.
1    CCID device only.
2    OTP/CCID composite device.
3    U2F device only.
4    OTP/U2F composite device.
5    U2F/CCID composite device.
6    OTP/U2F/CCID composite device.

(You can also add 80 to any of the modes to configure touch to eject, or touch to switch modes for hybrid modes).

We need to put the YubiKey into CCID (Chip Card Interface Device, a standard USB protocol for smart cards) mode. I originally configured the YubiKey in mode 86 but could not get the card to work properly with USB passthrough to the virtual machine. Whether this was caused by the eject behaviour or the fact that it was a hybrid mode I do not know, but reconfiguring it to mode 1 (CCID only) allowed me to use the card on the guest.

[dhcp-40-8:~] ftweedal% ykpersonalize -m 1
Firmware version 3.4.6 Touch level 1541 Program sequence 1

The USB mode will be set to: 0x1

Commit? (y/n) [n]: y

Now yubico-piv-tool can see the card:

[dhcp-40-8:~] ftweedal% yubico-piv-tool -a version
Application version 1.0.4 found.

Now we can initialise the YubiKey by setting a new management key, PIN and PIN Unblocking Key (PUK). As you can probably guess, the management key protects actions like generating keys and importing certificates, the PIN protects private key operations in regular use, and the PUK is kind of in between, allowing the PIN to be reset if the maximum number of attempts is exceeded. The current (default) PIN and PUK need to be given in order to reset them.

% KEY=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
% echo $KEY
CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
% yubico-piv-tool -a set-mgm-key -n $KEY
Successfully set new management key.

% PIN=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-6`
% echo $PIN
167246
% yubico-piv-tool -a change-pin -P 123456 -N $PIN
Successfully changed the pin code.

% PUK=`dd if=/dev/random bs=1 count=6 2>/dev/null | hexdump -v -e '/1 "%u"'|cut -c1-8`
% echo $PUK
24985117
% yubico-piv-tool -a change-puk -P 12345678 -N $PUK
Successfully changed the puk code.

Next we must generate a private/public keypair on the smart card. Various slots are available for different purposes, with different PIN-checking behaviour. The Certificate slots page on the Yubico wiki gives the full details. We will use slot 9e which is for Card Authentication (PIN is not needed for private key operations). It is necessary to provide the management key on the command line, but the program also prompts for it (I’m not sure why this is the case).

% yubico-piv-tool -k $KEY -a generate -s 9e
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApT5tb99jr7qA8zN66Dbl
fu/Jh+F0nZvp7FXZRJQH12KgEeX4Lzu1S10b1HQ0lpHZWcqPQh2wbHaC8U7uYSLW
LqsjmFeJrskAerVAAH8v+tzy6DKlJKaLjAt8qWEJ1UWf5stJO3r9RD6Z80rOYPXT
MsKxmsb22v5lbvZTa0mILQeP2e6m4rwPKluQrODYkQkQcYIfedQggmYwo7Cxl5Lu
smtes1/FeUlJ+DG3mga3TrZd1Fb+wDJqQU3ghLul9qLNdPYyxdwDKSWkIOt5UusZ
2A8qECKZ8Wzv0IGI0bReSZYHKjhdm4aMMNubtKDuem/nUwBebRHFGU8zXTSFXeAd
gQIDAQAB
-----END PUBLIC KEY-----
Successfully generated a new private key.

We then use this key to create a certificate signing request (CSR) via yubico-piv-tool. Although slot 9e does not require the PIN, other slots do require it, so I’ve included the verify-pin action for completeness:

% yubico-piv-tool -a verify-pin \
    -a request-certificate -s 9e -S "/CN=alice/"
Enter PIN: 167246
Successfully verified PIN.
Please paste the public key...
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApT5tb99jr7qA8zN66Dbl
fu/Jh+F0nZvp7FXZRJQH12KgEeX4Lzu1S10b1HQ0lpHZWcqPQh2wbHaC8U7uYSLW
LqsjmFeJrskAerVAAH8v+tzy6DKlJKaLjAt8qWEJ1UWf5stJO3r9RD6Z80rOYPXT
MsKxmsb22v5lbvZTa0mILQeP2e6m4rwPKluQrODYkQkQcYIfedQggmYwo7Cxl5Lu
smtes1/FeUlJ+DG3mga3TrZd1Fb+wDJqQU3ghLul9qLNdPYyxdwDKSWkIOt5UusZ
2A8qECKZ8Wzv0IGI0bReSZYHKjhdm4aMMNubtKDuem/nUwBebRHFGU8zXTSFXeAd
gQIDAQAB
-----END PUBLIC KEY-----
-----BEGIN CERTIFICATE REQUEST-----
MIICUzCCAT0CAQAwEDEOMAwGA1UEAwwFYWxpY2UwggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQClPm1v32OvuoDzM3roNuV+78mH4XSdm+nsVdlElAfXYqAR
5fgvO7VLXRvUdDSWkdlZyo9CHbBsdoLxTu5hItYuqyOYV4muyQB6tUAAfy/63PLo
MqUkpouMC3ypYQnVRZ/my0k7ev1EPpnzSs5g9dMywrGaxvba/mVu9lNrSYgtB4/Z
7qbivA8qW5Cs4NiRCRBxgh951CCCZjCjsLGXku6ya16zX8V5SUn4MbeaBrdOtl3U
Vv7AMmpBTeCEu6X2os109jLF3AMpJaQg63lS6xnYDyoQIpnxbO/QgYjRtF5Jlgcq
OF2bhoww25u0oO56b+dTAF5tEcUZTzNdNIVd4B2BAgMBAAGgADALBgkqhkiG9w0B
AQsDggEBADvyL13ayXRDWmRJ1dSi4lE9l128fy3Lt/1XoAC1D+000hWkXOPA+K8j
gR/Yg99K9v3U2wm6wtk2taEeogc4TebVawXezjw/hu4wq2sta3zVVJC9+yRrUeai
P+Gvj0KNesXK5MyHGpeiPb3SA/2GYYK04suM6a1vpA+sBvrca39klpgBrYY0N/9s
VE4gBBNhQa9jN8E9VMQXEPxYVH1tDrp7bRxg6V5spJb2oit6H+7Pe7xSC95ByCXw
Msprhk+B2nkrVaco5R/ZOG0jZdMOMOJXCuTbWKOaCDEN5hsLNdua6uBpiDCJ5v1I
l0Xmf53DC7jceF/XgZ0LUzbGzTDcr3o=
-----END CERTIFICATE REQUEST-----

yubico-piv-tool -a request-certificate is not very flexible; for example, it cannot create a CSR with request extensions such as including the user’s email address or Kerberos principal name in the Subject Alternative Name extension. For such non-trivial use cases, openssl req or other programs can be used instead, with a PKCS #11 module providing access to the smart card’s signing capability. Nathan Kinder’s post provides full details.
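For example, with the engine_pkcs11 OpenSSL engine configured to load opensc-pkcs11.so, the request could be generated along these lines (a sketch only: the key identifier below is a placeholder whose exact syntax depends on the engine version, and any request extensions still have to be supplied via an OpenSSL configuration file):

% openssl req -new -engine pkcs11 -keyform engine \
    -key 'slot_0-id_04' -subj '/CN=alice/' -out alice.csr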

With CSR in hand, alice can now request a certificate from the IPA CA. I have covered this procedure in previous articles so I’ll skip it here, except to add that it is necessary to use a profile that saves the newly issued certificate to the subject’s userCertificate LDAP attribute. This is how SSSD matches certificates in smart cards with users.
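For reference, the request itself is then a one-liner, assuming the CSR was saved as alice.csr and such a profile exists (the profile name here is hypothetical):

% ipa cert-request alice.csr --principal alice --profile-id userCertStore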

Once we have the certificate (in file alice.pem) we can import it onto the card:

% yubico-piv-tool -k $KEY -a import-certificate -s 9e -i alice.pem
Enter management key: CC044321D49AC1FC40146AD049830DB09C5AFF05CD843766
Successfully imported a new certificate.

Configuring smart card login

OpenSC provides a PKCS #11 module for interfacing with PIV smart cards, among other things:

# dnf install -y opensc

Enable smart card authentication in /etc/sssd.conf:

[pam]
pam_cert_auth = True

Then restart SSSD:

# systemctl restart sssd

Next, enable the OpenSC PKCS #11 module in the system NSS database:

# modutil -dbdir /etc/pki/nssdb \
    -add "OpenSC" -libfile opensc-pkcs11.so

We also need to add the IPA CA cert to the system NSSDB. This will allow SSSD to validate certificates from smart cards. If smart card certificates are issued by a sub-CA or an external CA, import that CA’s certificate instead.

# certutil -d /etc/ipa/nssdb -L -n 'IPA.LOCAL IPA CA' -a \
  | certutil -d /etc/pki/nssdb -A -n 'IPA.LOCAL IPA CA' -t 'CT,C,C'

One hiccup I had was that SSSD could not talk to the OCSP server indicated in the Authority Information Access extension on the certificate (due to my DNS not being set up correctly). I had to tell SSSD not to perform OCSP checks. The sssd.conf snippet follows. Do not do this in a production environment.

[sssd]
...
certificate_verification = no_ocsp

That’s pretty much all there is to it. After this, I was able to log in as alice using the YubiKey NEO. When logging in with the card inserted, instead of being prompted for a password, GDM prompts for the PIN. Enter the PIN, and it lets you in!

Screenshot of login PIN prompt

Conclusion

I mentioned (or didn’t mention) a few standards related to smart card authentication. A quick review of them is warranted:

  • CCID is a USB smart card interface standard.
  • PIV (Personal Identity Verification) is a smart card standard from NIST. It defines the slots, PIN behaviour, etc.
  • PKCS #15 is a token information format. OpenSC provides a PKCS #15 emulation layer for PIV cards.
  • PKCS #11 is a software interface to cryptographic tokens. Token and HSM vendors provide PKCS #11 modules for their devices. OpenSC provides a PKCS #11 interface to PKCS #15 tokens (including emulated PIV tokens).

It is appropriate to mention pam_pkcs11, which is also part of the OpenSC project, as an alternative to SSSD. More configuration is involved, but if you don’t have (or don’t want) an external identity management system it looks like a good approach.

You might remember that I was using slot 9e which doesn’t require a PIN, yet I was still prompted for a PIN when logging in. There are a couple of issues to tease apart here. The first issue is that although PIV cards do not require the PIN for private key operations on slot 9e, the opensc-pkcs11.so PKCS #11 module does not correctly report this. As an alternative to OpenSC, Yubico provide their own PKCS #11 module called YKCS11 as part of yubico-piv-tool but modutil did not like it. Nevertheless, a peek at its source code leads me to believe that it too declares that the PIN is required regardless of the slot in use. I could not find much discussion of this discrepancy so I will raise some tickets and hopefully it can be addressed.

The second issue is that SSSD requires the PIN and uses it to log into the token, even if the token says that a PIN is not required. Again, I will start a discussion to see if this is really the intended behaviour (perhaps it is).

The YubiKey NEO features a wireless (NFC) interface. I haven’t played with it yet, but all the smart card features are available over that interface. This lends weight to fixing the issues preventing PIN-less usage.

A final thought I have about the user experience is that it would be nice if user information could be derived or looked up based on the certificate(s) in the smart card, and a user automatically selected, instead of having to first specify "I am alice" or whoever. The information is there on the card after all, and it is one less step for users to perform. If PIN-less usage can be addressed, it would mean that a user can just approach a machine, plug in their smart card and hi ho, off to work they go. There are some indications that this does work with GDM and pam_pkcs11, so if you know how to get it going with SSSD I would love to know!

by ftweedal at August 12, 2016 02:55 AM

August 11, 2016

Adam Young

Tripleo HA Federation Proof-of-Concept

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes.

A federation deployment requires changes to the network topology, Keystone, the HTTPD service, and Horizon. The various OpenStack deployment tools will have their own ways of applying these changes. While this proof-of-concept can’t be called production-ready, it does demonstrate that TripleO can support Federation using SAML. From this proof-of-concept, we should be able to deduce the steps needed for a production deployment.

Prerequisites

  • Single physical node – Large enough to run multiple virtual machines.  I only ended up using 3, but scaled up to 6 at one point and ran out of resources.  Tested with 8 CPUs and 32 GB RAM.
  • Centos 7.2 – Running as the base operating system.
  • FreeIPA – Particularly, the CentOS repackage of Red Hat Identity Management. Running on the base OS.
  • Keycloak – Actually an alpha build of Red Hat SSO, running on the base OS. This was fronted by Apache HTTPD, and proxied through ajp://localhost:8109. This gave me HTTPS support using the CA Certificate from the IPA server.  This will be important later when the controller nodes need to talk to the identity provider to set up metadata.
  • Tripleo Quickstart – deployed in HA mode, using an undercloud.
    • ./quickstart.sh --config config/general_config/ha.yml ayoung-dell-t1700.test

In addition, I did some sanity checking of the cluster by deploying the overcloud using the quickstart helper script, and then tore it down using heat stack-delete overcloud.

Reproducing Results

When doing development testing, you can expect to rebuild and tear down your cloud on a regular basis. When you redeploy, you want to make sure that the changes are just the delta from what you tried last time. As the number of artifacts grew, I found I needed to maintain a repository of files that included the environment passed to openstack overcloud deploy. To manage these, I created a git repository in /home/stack/deployment. Inside that directory, I copied the overcloud-deploy.sh and deploy_env.yml files generated during the overcloud deployment, and modified them accordingly.
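A minimal sketch of that setup (the source locations of the two files are assumptions):

mkdir -p /home/stack/deployment && cd /home/stack/deployment
git init
cp ~/overcloud-deploy.sh ~/deploy_env.yaml .
git add overcloud-deploy.sh deploy_env.yaml
git commit -m 'overcloud deployment inputs'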

In my version of overcloud-deploy.sh, I wanted to remove the deploy_env.yml generation, to avoid confusion during later deployments.  I also wanted to preserve the environment file across deployments (and did not want it in /tmp). This file has three parts: the Keystone configuration values, HTTPS/Network setup, and configuration for a single node deployment. This last part was essential for development, as chasing down fixes across three HA nodes was time-consuming and error prone. The DNS server value I used is particular to my deployment, and reflects the IPA server running on the base host.

For reference, I’ve included those files at the end of this post.

Identity Provider Registration and Metadata

While it would have been possible to run the registration of the identity provider on one of the nodes, the Heat-managed deployment process does not provide a clean way to gather the resulting files and package them for deployment to other nodes. Although I deployed on a single node for development, it took me a while to realize that I could do that, and by then I had already worked out an approach that calls the registration from the undercloud node and produces a tarball.

As a result, I created a script, again to allow for reproducing this in the future:

register_sp_rhsso.sh

#!/bin/sh 

basedir=$(dirname $0)
ipa_domain=`hostname -d`
rhsso_master_admin_password=FreeIPA4All

keycloak-httpd-client-install \
   --client-originate-method registration \
   --force \
   --mellon-https-port 5000 \
   --mellon-hostname openstack.$ipa_domain  \
   --mellon-root '/v3' \
   --keycloak-server-url https://identity.$ipa_domain  \
   --keycloak-auth-role root-admin \
   --keycloak-admin-password  $rhsso_master_admin_password \
   --app-name v3 \
   --keycloak-realm openstack \
   --mellon-https-port 5000 \
   --log-file $basedir/rhsso.log \
   --httpd-dir $basedir/rhsso/etc/httpd \
   -l "/v3/auth/OS-FEDERATION/websso/saml2" \
   -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/websso" \
   -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/saml2/auth"

This does not quite generate the right paths, as it turns out that $basedir is not quite what we want, so I had to post-edit the generated file: rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf

Specifically, the path:
./rhsso/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml

has to be changed to:
/etc/httpd/saml2/v3_keycloak_openstack_idp_metadata.xml
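A one-liner along these lines takes care of that (a sketch, run from the directory containing the generated tree and assuming the file contains the relative path shown above):

sed -i 's|\./rhsso/etc/httpd/saml2|/etc/httpd/saml2|' \
    rhsso/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf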

While I created a tarball that I then manually deployed, the preferred approach would be to use tripleo-heat-templates/puppet/deploy-artifacts.yaml to deploy them. The problem I faced is that the generated files include Apache module directives from mod_auth_mellon.  If mod_auth_mellon has not been installed into the controller, the Apache server won’t start, and the deployment will fail.

Federation Operations

The Federation setup requires a few calls. I documented them in Rippowam, and attempted to reproduce them locally using Ansible and the Rippowam code. I was not a purist though, as A) I needed to get this done and B) the end solution is not going to use Ansible anyway. The general steps I performed:

  • yum install mod_auth_mellon
  • Copy over the metadata tarball, expand it, and tweak the configuration (could be done prior to building the tarball).
  • Run the following commands.
openstack identity provider create --remote-id https://identity.{{ ipa_domain }}/auth/realms/openstack
openstack mapping create --rules ./mapping_rhsso_saml2.json rhsso_mapping
openstack federation protocol create --identity-provider rhsso --mapping rhsso_mapping saml2

The mapping file is the one from Rippowam.

The keystone service calls only need to be performed once, as they are stored in the database. The expansion of the tarball needs to be performed on every node.
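A rough sketch of that manual distribution step (the tarball name and controller host name are assumptions):

tar -czf rhsso-sp.tar.gz -C rhsso/etc/httpd .
scp rhsso-sp.tar.gz heat-admin@overcloud-controller-0:
ssh heat-admin@overcloud-controller-0 sudo tar -xzf rhsso-sp.tar.gz -C /etc/httpd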

Dashboard

As in previous Federation setups, I needed to modify the values used for WebSSO. The values I ended up setting in /etc/openstack-dashboard/local_settings resembled this:

OPENSTACK_KEYSTONE_URL = "https://openstack.ayoung-dell-t1700.test:5000/v3"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "saml2"
WEBSSO_CHOICES = (
    ("saml2", _("Rhsso")),
    ("credentials", _("Keystone Credentials")),
)

Important: Make sure that the auth URL is using a FQDN name that matches the value in the signed certificate.

Redirect Support for SAML

Several differences between how HTTPD and HA Proxy operate require certain configuration modifications. Keystone runs internally over HTTP, not HTTPS. However, the SAML Identity Providers are public, transmit cryptographic data, and need to be protected using HTTPS. As a result, HA Proxy needs to expose an HTTPS-based endpoint for the Keystone public service. In addition, the redirects that come from mod_auth_mellon need to reflect the public protocol, hostname, and port.

The solution I ended up with involved changes on both sides:

In haproxy.cfg, I modified the keystone public stanza so it looks like this:

listen keystone_public
bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 10.0.0.4:5000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.16.2.4:5000 transparent
redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
rsprep ^Location:\ http://(.*) Location:\ https://\1

While this was necessary, it also proved to be insufficient. When the signed assertion from the Identity Provider is posted to the Keystone server, mod_auth_mellon checks that the destination value matches what it expects the hostname should be. Consequently, in order to get this to match in the file:

/etc/httpd/conf.d/10-keystone_wsgi_main.conf

I had to set the following:

<VirtualHost 172.16.2.6:5000>
ServerName https://openstack.ayoung-dell-t1700.test

Note that the protocol is set to https even though the Keystone server is handling HTTP. This might break elsewhere. If it does, then the Keystone configuration in Apache may have to be duplicated.

Federation Mapping

For the WebSSO login to successfully complete, the user needs to have a role on at least one project. The Rippowam mapping file maps the user to the Member role in the demo group, so the most straightforward steps to complete are to add a demo group, add a demo project, and assign the Member role on the demo project to the demo group. All this should be done with a v3 token:

openstack group create demo
openstack role create Member
openstack project create demo
openstack role add --group demo --project demo Member

Complete helper files

Below are the complete files that were too long to put inline.

overcloud-deploy.sh

#!/bin/bash
# Simple overcloud deploy script

set -eux

# Source in undercloud credentials.
source /home/stack/stackrc

# Wait until there are hypervisors available.
while true; do
    count=$(openstack hypervisor stats show -c count -f value)
    if [ $count -gt 0 ]; then
        break
    fi
done

deploy_status=0

# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/deployment/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org -e $HOME/deployment/deploy_env.yaml   --force-postconfig "$@"    || deploy_status=1

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
    deploy_status=1

    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do heat deployment-show $failed > failed_deployment_$failed.log
    done
fi

exit $deploy_status

deploy-env.yml

parameter_defaults:
  controllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true
      auth/methods:
        value: external,password,token,oauth1,saml2
      federation/trusted_dashboard:
        value: http://openstack.ayoung-dell-t1700.test/dashboard/auth/websso/
      federation/sso_callback_template:
        value: /etc/keystone/sso_callback_template.html
      federation/remote_id_attribute:
        value: MELLON_IDP

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
    heat::config::heat_config:
      DEFAULT/num_engine_workers:
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1
  CloudName: openstack.ayoung-dell-t1700.test
  CloudDomain: ayoung-dell-t1700.test
  DnsServers: 10.18.57.26


  #TLS Setup from enable-tls.yaml
  PublicVirtualFixedIPs: [{'ip_address':'10.0.0.4'}]
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    #certificate removed for space
    -----END CERTIFICATE-----

    The contents of your certificate go here
  SSLIntermediateCertificate: ''
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    #key removed for space
    -----END RSA PRIVATE KEY-----

  EndpointMap:
    AodhAdmin: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhInternal: {protocol: 'http', port: '8042', host: 'IP_ADDRESS'}
    AodhPublic: {protocol: 'https', port: '13042', host: 'CLOUDNAME'}
    CeilometerAdmin: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerInternal: {protocol: 'http', port: '8777', host: 'IP_ADDRESS'}
    CeilometerPublic: {protocol: 'https', port: '13777', host: 'CLOUDNAME'}
    CinderAdmin: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderInternal: {protocol: 'http', port: '8776', host: 'IP_ADDRESS'}
    CinderPublic: {protocol: 'https', port: '13776', host: 'CLOUDNAME'}
    GlanceAdmin: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlanceInternal: {protocol: 'http', port: '9292', host: 'IP_ADDRESS'}
    GlancePublic: {protocol: 'https', port: '13292', host: 'CLOUDNAME'}
    GnocchiAdmin: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiInternal: {protocol: 'http', port: '8041', host: 'IP_ADDRESS'}
    GnocchiPublic: {protocol: 'https', port: '13041', host: 'CLOUDNAME'}
    HeatAdmin: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatInternal: {protocol: 'http', port: '8004', host: 'IP_ADDRESS'}
    HeatPublic: {protocol: 'https', port: '13004', host: 'CLOUDNAME'}
    HorizonPublic: {protocol: 'https', port: '443', host: 'CLOUDNAME'}
    KeystoneAdmin: {protocol: 'http', port: '35357', host: 'IP_ADDRESS'}
    KeystoneInternal: {protocol: 'http', port: '5000', host: 'IP_ADDRESS'}
    KeystonePublic: {protocol: 'https', port: '13000', host: 'CLOUDNAME'}
    NeutronAdmin: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronInternal: {protocol: 'http', port: '9696', host: 'IP_ADDRESS'}
    NeutronPublic: {protocol: 'https', port: '13696', host: 'CLOUDNAME'}
    NovaAdmin: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaInternal: {protocol: 'http', port: '8774', host: 'IP_ADDRESS'}
    NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}
    NovaEC2Admin: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Internal: {protocol: 'http', port: '8773', host: 'IP_ADDRESS'}
    NovaEC2Public: {protocol: 'https', port: '13773', host: 'CLOUDNAME'}
    NovaVNCProxyAdmin: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
    NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}
    SaharaAdmin: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaInternal: {protocol: 'http', port: '8386', host: 'IP_ADDRESS'}
    SaharaPublic: {protocol: 'https', port: '13386', host: 'CLOUDNAME'}
    SwiftAdmin: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftInternal: {protocol: 'http', port: '8080', host: 'IP_ADDRESS'}
    SwiftPublic: {protocol: 'https', port: '13808', host: 'CLOUDNAME'}

resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml

parameters:
   ControllerCount: 1 

by Adam Young at August 11, 2016 05:53 PM

August 10, 2016

Rich Megginson

How to do python dict setdefault with ruby hashes

setdefault is a very useful Python Dict method.
>python
Python 2.7.11 (default, Jul  8 2016, 19:45:00) 
[GCC 5.3.1 20160406 (Red Hat 5.3.1-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dd = {}
>>> dd.setdefault('a', {}).setdefault('b', {})['c'] = 'd'
>>> dd
{'a': {'b': {'c': 'd'}}}
>>> dd.setdefault('a', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}}
>>> dd.setdefault('g', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}, 'g': {'b': {'e': 'f'}}}

You can do the same thing in ruby with a little hackery.
>irb
irb(main):001:0> dd = {}
=> {}
irb(main):002:0> ((dd['a'] ||= {})['b'] ||= {})['c'] = 'd'
=> "d"
irb(main):003:0> dd
=> {"a"=>{"b"=>{"c"=>"d"}}}
irb(main):004:0> ((dd['a'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):005:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}}
irb(main):006:0> ((dd['g'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):007:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}, "g"=>{"b"=>{"e"=>"f"}}}

August 10, 2016 04:38 PM

August 03, 2016

James Shubin

Seen in downtown Montreal…

The Technical Blog of James was seen on an outdoor electronic display in downtown Montreal! Thanks to one of my readers for sending this in.

I guess the smart phone revolution is over, and people are taking to reading my articles on bigger screens! The “poutine” is decent proof that this is probably Montreal.

If you’ve got access to a large electronic display, put up the blog, snap a photo, and send it my way! I’ll post it here and send you some random stickers!

Happy Hacking,

James

PS: If you have some comments about this blog, please don’t be shy, send them my way.


by purpleidea at August 03, 2016 05:59 AM

July 26, 2016

Fraser Tweedale

FreeIPA Lightweight CA internals

In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.

Full details of the design of the feature can be found on the design page. This post does not cover everything from the design page, but we will look at the aspects that are covered from the perspective of the system administrator, i.e. "what is happening on my systems?"

Dogtag lightweight CA creation

The PKI system used by FreeIPA is called Dogtag. It is a separate project with its own interfaces; most FreeIPA certificate management features are simply reflecting a subset of the corresponding Dogtag interface, often integrating some additional access controls or identity management concepts. This is certainly the case for FreeIPA sub-CAs. The Dogtag lightweight CAs feature was implemented initially to support the FreeIPA use case, yet not all aspects of the Dogtag feature are used in FreeIPA as of v4.4, and other consumers of the Dogtag feature are likely to emerge (in particular: OpenStack).

The Dogtag lightweight CAs feature has its own design page which documents the feature in detail, but it is worth mentioning some important aspects of the Dogtag feature and their impact on how FreeIPA uses the feature.

  • Dogtag lightweight CAs are managed via a REST API. The FreeIPA framework uses this API to create and manage lightweight CAs, using the privileged RA Agent certificate to authenticate. In a future release we hope to remove the RA Agent and authenticate as the FreeIPA user using GSS-API proxy credentials.
  • Each CA in a Dogtag instance, including the "main" CA, has an LDAP entry with object class authority. The schema includes fields such as subject and issuer DN, certificate serial number, and a UUID primary key, which is randomly generated for each CA. When FreeIPA creates a CA, it stores this UUID so that it can map the FreeIPA CA’s common name (CN) to the Dogtag authority ID in certificate requests or other management operations (e.g. CA deletion).
  • The "nickname" of the lightweight CA signing key and certificate in Dogtag’s NSSDB is the nickname of the "main" CA signing key, with the lightweight CA’s UUID appended. In general operation FreeIPA does not need to know this, but the ipa-certupdate program has been enhanced to set up Certmonger tracking requests for FreeIPA-managed lightweight CAs and therefore it needs to know the nicknames.
  • Dogtag lightweight CAs may be nested, but FreeIPA as of v4.4 does not make use of this capability.

So, let’s see what actually happens on a FreeIPA server when we add a lightweight CA. We will use the sc example from the previous post. The command executed to add the CA, with its output, was:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The LDAP entry added to the Dogtag database was:

dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 63
objectClass: authority
objectClass: top
cn: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
 4c84fd
authorityKeyHost: f24b-0.ipa.local:443
authorityEnabled: TRUE
authorityDN: CN=Smart Card CA,O=IPA.LOCAL
authorityParentDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
authorityParentID: d3e62e89-df27-4a89-bce4-e721042be730

We see the authority UUID in the authorityID attribute as well as in cn and the DN. authorityKeyNickname records the nickname of the signing key in Dogtag’s NSSDB. authorityKeyHost records which hosts possess the signing key – currently just the host on which the CA was created. authoritySerial records the serial number of the certificate (more on that later). The meaning of the rest of the fields should be clear.

If we have a peek into Dogtag’s NSSDB, we can see the new CA’s certificate:

# certutil -d /etc/pki/pki-tomcat/alias -L

Certificate Nickname              Trust Attributes
                                  SSL,S/MIME,JAR/XPI

caSigningCert cert-pki-ca         CTu,Cu,Cu
auditSigningCert cert-pki-ca      u,u,Pu
Server-Cert cert-pki-ca           u,u,u
caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd u,u,u
ocspSigningCert cert-pki-ca       u,u,u
subsystemCert cert-pki-ca         u,u,u

There it is, alongside the main CA signing certificate and other certificates used by Dogtag. The trust flags u,u,u indicate that the private key is also present in the NSSDB. If we pretty print the certificate we will see a few interesting things:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 63 (0x3f)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201606201330"
        Validity:
            Not Before: Fri Jul 15 05:46:00 2016
            Not After : Tue Jul 15 05:46:00 2036
        Subject: "CN=Smart Card CA,O=IPA.LOCAL"
        ...
        Signed Extensions:
            ...
            Name: Certificate Basic Constraints
            Critical: True
            Data: Is a CA with no maximum path length.
            ...

Observe that:

  • The certificate is indeed a CA.
  • The serial number (63) agrees with the CA’s LDAP entry.
  • The validity period is 20 years, the default for CAs in Dogtag. This cannot be overridden on a per-CA basis right now, but addressing this is a priority.

Finally, let’s look at the raw entry for the CA in the FreeIPA database:

dn: cn=sc,cn=cas,cn=ca,dc=ipa,dc=local
cn: sc
ipaCaIssuerDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
objectClass: ipaca
objectClass: top
ipaCaSubjectDN: CN=Smart Card CA,O=IPA.LOCAL
ipaCaId: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
description: Smart Card CA

We can see that this entry also contains the subject and issuer DNs, and the ipaCaId attribute holds the Dogtag authority ID, which allows the FreeIPA framework to dereference the local ID (sc) to the Dogtag ID as needed. We also see that the description attribute is local to FreeIPA; Dogtag also has a description attribute for lightweight CAs but FreeIPA uses its own.

Lightweight CA replication

FreeIPA servers replicate objects in the FreeIPA directory among themselves, as do Dogtag replicas (note: in Dogtag, the term clone is often used). All Dogtag instances in a replicated environment need to observe changes to lightweight CAs (creation, modification, deletion) that were performed on another replica and update their own view so that they can respond to requests consistently. This is accomplished via an LDAP persistent search which is run in a monitor thread. Care was needed to avoid race conditions. Fortunately, the solution for LDAP-based profile storage provided a fine starting point for the authority monitor; although lightweight CAs are more complex, many of the same race conditions can occur and these were already addressed in the LDAP profile monitor implementation.

But unlike LDAP-based profiles, a lightweight CA consists of more than just an LDAP object; there is also the signing key. The signing key lives in Dogtag’s NSSDB and for security reasons cannot be transported through LDAP. This means that when a Dogtag clone observes the addition of a lightweight CA, an out-of-band mechanism to transport the signing key must also be triggered.

This mechanism is covered in the design pages but the summarised process is:

  1. A Dogtag clone observes the creation of a CA on another server and starts a KeyRetriever thread. The KeyRetriever is implemented as part of Dogtag, but it is configured to run the /usr/libexec/ipa/ipa-pki-retrieve-key program, which is part of FreeIPA. The program is invoked with arguments of the server to request the key from (this was stored in the authorityKeyHost attribute mentioned earlier), and the nickname of the key to request.
  2. ipa-pki-retrieve-key requests the key from the Custodia daemon on the source server. It authenticates as the dogtag/<requestor-hostname>@REALM service principal. If authenticated and authorised, the Custodia daemon exports the signing key from Dogtag’s NSSDB wrapped by the main CA’s private key, and delivers it to the requesting server. ipa-pki-retrieve-key outputs the wrapped key then exits.
  3. The KeyRetriever reads the wrapped key and imports (unwraps) it into the Dogtag clone’s NSSDB. It then initialises the Dogtag CA’s Signing Unit allowing the CA to service signing requests on that clone, and adds its own hostname to the CA’s authorityKeyHost attribute.

Some excerpts of the CA debug log on the clone (not the server on which the sub-CA was first created) shows this process in action. The CA debug log is found at /var/log/pki/pki-tomcat/ca/debug. Some irrelevant messages have been omitted.
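If you want to follow the process live on a clone, something like this does the job (the grep patterns simply match the thread names that appear in the excerpts below):

# tail -f /var/log/pki/pki-tomcat/ca/debug | grep -E 'authorityMonitor|KeyRetriever'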

[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: ADD
[25/Jul/2016:15:45:56][authorityMonitor]: readAuthority: new entryUSN = 109
[25/Jul/2016:15:45:56][authorityMonitor]: CertificateAuthority init 
[25/Jul/2016:15:45:56][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:45:56][authorityMonitor]: SigningUnit init: debug Certificate object not found
[25/Jul/2016:15:45:56][authorityMonitor]: CA signing key and cert not (yet) present in NSSDB
[25/Jul/2016:15:45:56][authorityMonitor]: Starting KeyRetrieverRunner thread

Above we see the authorityMonitor thread observe the addition of a CA. It adds the CA to its internal map and attempts to initialise it, which fails because the key and certificate are not available, so it starts a KeyRetrieverRunner in a new thread.

[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Running ExternalProcessKeyRetriever
[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: About to execute command: [/usr/libexec/ipa/ipa-pki-retrieve-key, caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd, f24b-0.ipa.local]

The KeyRetrieverRunner thread invokes ipa-pki-retrieve-key with the nickname of the key it wants, and a host from which it can retrieve it. If a CA has multiple sources, the KeyRetrieverRunner will try these in order with multiple invocations of the helper, until one succeeds. If none succeeds, the thread goes to sleep and retries when it wakes up, initially after 10 seconds and then backing off exponentially.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Importing key and cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Reinitialising SigningUnit
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got token Internal Key Storage Token by name
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got private key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got public key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

The key retriever successfully returned the key data and import succeeded. The signing unit then gets initialised.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Adding self to authorityKeyHosts attribute
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: In LdapBoundConnFactory::getConn()
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: new entryUSN = 361
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: nsUniqueId = 4dd42782-4a4f11e6-b003b01c-c8916432
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: MODIFY
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: new entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: known entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: data is current

Finally, the Dogtag clone adds itself to the CA’s authorityKeyHosts attribute. The authorityMonitor observes this change but ignores it because its view is current.

Certificate renewal

CA signing certificates will eventually expire, and therefore require renewal. Because the FreeIPA framework operates with low privileges, it cannot add a Certmonger tracking request for sub-CAs when it creates them. Furthermore, although the renewal (i.e. the actual signing of a new certificate for the CA) should only happen on one server, the certificate must be updated in the NSSDB of all Dogtag clones.

As mentioned earlier, the ipa-certupdate command has been enhanced to add Certmonger tracking requests for FreeIPA-managed lightweight CAs. The actual renewal will only be performed on whichever server is the renewal master when Certmonger decides it is time to renew the certificate (assuming that the tracking request has been added on that server).

Let’s run ipa-certupdate on the renewal master to add the tracking request for the new CA. First observe that the tracking request does not exist yet:

# getcert list -d /etc/pki/pki-tomcat/alias |grep subject
        subject: CN=CA Audit,O=IPA.LOCAL 201606201330
        subject: CN=OCSP Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=CA Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=f24b-0.ipa.local,O=IPA.LOCAL 201606201330

As expected, we do not see our sub-CA certificate above. After running ipa-certupdate the following tracking request appears:

Request ID '20160725222909':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB',pin set
        certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB'
        CA: dogtag-ipa-ca-renew-agent
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=Smart Card CA,O=IPA.LOCAL
        expires: 2036-07-15 05:46:00 UTC
        key usage: digitalSignature,nonRepudiation,keyCertSign,cRLSign
        pre-save command: /usr/libexec/ipa/certmonger/stop_pkicad
        post-save command: /usr/libexec/ipa/certmonger/renew_ca_cert "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd"
        track: yes
        auto-renew: yes

As for updating the certificate in each clone’s NSSDB, Dogtag itself takes care of that. All that is required is for the renewal master to update the CA’s authoritySerial attribute in the Dogtag database. The renew_ca_cert Certmonger post-renewal hook script performs this step. Each Dogtag clone observes the update (in the monitor thread), looks up the certificate with the indicated serial number in its certificate repository (a new entry that will also have been recently replicated to the clone), and adds that certificate to its NSSDB. Again, let’s observe this process by forcing a certificate renewal:

# getcert resubmit -i 20160725222909
Resubmitting "20160725222909" to "dogtag-ipa-ca-renew-agent".

After about 30 seconds the renewal process is complete. When we examine the certificate in the NSSDB we see, as expected, a new serial number:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd" \
    | grep -i serial
        Serial Number: 74 (0x4a)

We also see that the renew_ca_cert script has updated the serial in Dogtag’s database:

# ldapsearch -D cn="Directory Manager" -w4me2Test -b o=ipaca \
    '(cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd)' authoritySerial
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 74
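Conceptually, the renewal master's contribution boils down to that single attribute replace. A rough python-ldap sketch of the equivalent operation follows; the use of python-ldap and the bind details are illustrative assumptions, not the actual renew_ca_cert code (the DN, serial and Directory Manager password are the ones from this demo environment).

import ldap

conn = ldap.initialize("ldap://localhost")
conn.simple_bind_s("cn=Directory Manager", "4me2Test")
conn.modify_s(
    "cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca",
    [(ldap.MOD_REPLACE, "authoritySerial", [b"74"])])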

Finally, if we look at the CA debug log on the clone, we’ll see that the authority monitor observes the serial number change and updates the certificate in its own NSSDB (again, some irrelevant or low-information messages have been omitted):

[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: Processed change controls.
[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: MODIFY
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: new entryUSN = 1832
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: known entryUSN = 361
[26/Jul/2016:10:43:28][authorityMonitor]: CertificateAuthority init 
[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL
[26/Jul/2016:10:43:28][authorityMonitor]: Updating certificate in NSSDB; new serial number: 74

When the authority monitor processes the change, it reinitialises the CA including its signing unit. Then it observes that the serial number of the certificate in its NSSDB differs from the serial number from LDAP. It pulls the certificate with the new serial number from its certificate repository, imports it into NSSDB, then reinitialises the signing unit once more and sees the correct serial number:

[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 74
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

Currently this update mechanism is only used for lightweight CAs, but it would work just as well for the main CA too, and we plan to switch at some stage so that the process is consistent for all CAs.

Wrapping up

I hope you have enjoyed this tour of some of the lightweight CA internals, and in particular seeing how the design actually plays out on your systems in the real world.

The FreeIPA lightweight CAs feature has been the most complex and challenging project I have ever undertaken. It took the best part of a year from early design and proof of concept, to implementing the Dogtag lightweight CAs feature, then FreeIPA integration, and numerous bug fixes, refinements or outright redesigns along the way. Although there are still some rough edges, some important missing features and, I expect, many an RFE to come, I am pleased with what has been delivered and the overall design.

Thanks are due to all of my colleagues who contributed to the design and review of the feature; each bit of input from all of you has been valuable. I especially thank Ade Lee and Endi Dewata from the Dogtag team for their help with API design and many code reviews over a long period of time, and from the FreeIPA team Jan Cholasta and Martin Babinsky for their invaluable input into the design, and much code review and testing. I could not have delivered this feature without your help; thank you for your collaboration!

by ftweedal at July 26, 2016 02:01 AM

July 25, 2016

Fraser Tweedale

Lightweight Sub-CAs in FreeIPA 4.4

Last year FreeIPA 4.2 brought us some great new certificate management features, including custom certificate profiles and user certificates. The upcoming FreeIPA 4.4 release builds upon this groundwork and introduces lightweight sub-CAs, a feature that lets admins mint new CAs under the main FreeIPA CA and allows certificates for different purposes to be issued in different certificate domains. In this post I will review the use cases and demonstrate the process of creating, managing and issuing certificates from sub-CAs. (A follow-up post will detail some of the mechanisms that operate behind the scenes to make the feature work.)

Use cases

Currently, all certificates issued by FreeIPA are issued by a single CA. Say you want to issue certificates for various purposes: regular server certificates, user certificates for VPN authentication, and user certificates for authentication to a particular web service. Currently, assuming the certificates bear the appropriate Key Usage and Extended Key Usage extensions (with the default profile, they do), a certificate issued for one of these purposes could be used for all of the other purposes.

Issuing certificates for particular purposes (especially client authentication scenarios) from a sub-CA allows an administrator to configure the endpoint authenticating the clients to use only the immediate issuer certificate for validating client certificates. Therefore, if you had one sub-CA for issuing VPN authentication certificates and a different sub-CA for issuing certificates for authenticating to the web service, you could configure these services to accept certificates issued by the relevant CA only. Thus, where previously the scope of usability may have been unacceptably broad, administrators now have more fine-grained control over how certificates can be used.

Finally, another important consideration is that while revoking the main IPA CA is usually out of the question, it is now possible to revoke an intermediate CA certificate. If you create a CA for a particular organisational unit (e.g. some department or working group) or service, then if or when that unit or service ceases to operate or exist, the related CA certificate can be revoked, rendering certificates issued by that CA useless, as long as relying endpoints perform CRL or OCSP checks.

Creating and managing sub-CAs

In this scenario, we will add a sub-CA that will be used to issue certificates for users’ smart cards. We assume that a profile for this purpose already exists, called userSmartCard.

To begin with, we are authenticated as admin or another user that has CA management privileges. Let’s see what CAs FreeIPA already knows about:

% ipa ca-find
------------
1 CA matched
------------
  Name: ipa
  Description: IPA CA
  Authority ID: d3e62e89-df27-4a89-bce4-e721042be730
  Subject DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
----------------------------
Number of entries returned 1
----------------------------

We can see that FreeIPA knows about the ipa CA. This is the "main" CA in the FreeIPA infrastructure. Depending on how FreeIPA was installed, it could be a root CA or it could be chained to an external CA. The ipa CA entry is added automatically when installing or upgrading to FreeIPA 4.4.

Now, let’s add a new sub-CA called sc:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The --subject option gives the full Subject Distinguished Name for the new CA; it is mandatory, and must be unique among CAs managed by FreeIPA. An optional description can be given with --desc. In the output we see that the Issuer DN is that of the IPA CA.

Having created the new CA, we must add it to one or more CA ACLs to allow it to be used. CA ACLs were added in FreeIPA 4.2 for defining policies about which profiles could be used for issuing certificates to which subject principals (note: the subject principal is not necessarily the principal performing the certificate request). In FreeIPA 4.4 the CA ACL concept has been extended to also include which CA is being asked to issue the certificate.

We will add a CA ACL called user-sc-userSmartCard and associate it with all users, with the userSmartCard profile, and with the sc CA:

% ipa caacl-add user-sc-userSmartCard --usercat=all
------------------------------------
Added CA ACL "user-sc-userSmartCard"
------------------------------------
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all

% ipa caacl-add-profile user-sc-userSmartCard --certprofile userSmartCard
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
  Profiles: userSmartCard
-------------------------
Number of members added 1
-------------------------

% ipa caacl-add-ca user-sc-userSmartCard --ca sc
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
-------------------------
Number of members added 1
-------------------------

A CA ACL can reference multiple CAs individually, or, as we saw with users above, we can associate a CA ACL with all CAs by setting --cacat=all when we create the CA ACL, or later via the ipa caacl-mod command.

A special behaviour of CA ACLs with respect to CAs must be mentioned: if a CA ACL is associated with no CAs (either individually or by category), then it allows access to the ipa CA (and only that CA). This behaviour, though inconsistent with other aspects of CA ACLs, is for compatibility with pre-sub-CAs CA ACLs. An alternative approach is being discussed and could be implemented before the final release.

Requesting certificates from sub-CAs

The ipa cert-request command has learned the --ca argument for directing the certificate request to a particular sub-CA. If it is not given, it defaults to ipa.

alice already has a CSR for the key in her smart card, so now she can request a certificate from the sc CA:

% ipa cert-request --principal alice \
    --profile userSmartCard --ca sc /path/to/csr.req
  Certificate: MIIDmDCCAoCgAwIBAgIBQDANBgkqhkiG9w0BA...
  Subject: CN=alice,O=IPA.LOCAL
  Issuer: CN=Smart Card CA,O=IPA.LOCAL
  Not Before: Fri Jul 15 05:57:04 2016 UTC
  Not After: Mon Jul 16 05:57:04 2018 UTC
  Fingerprint (MD5): 6f:67:ab:4e:0c:3d:37:7e:e6:02:fc:bb:5d:fe:aa:88
  Fingerprint (SHA1): 0d:52:a7:c4:e1:b9:33:56:0e:94:8e:24:8b:2d:85:6e:9d:26:e6:aa
  Serial number: 64
  Serial number (hex): 0x40

Certmonger has also learned the -X/--issuer option for specifying that the request be directed to the named issuer. There is a clash of terminology here; the "CA" terminology in Certmonger is already used to refer to a particular CA "endpoint". Various kinds of CAs and multiple instances thereof are supported. But now, with Dogtag and FreeIPA, a single CA may actually host many CAs. Conceptually this is similar to HTTP virtual hosts, with the -X option corresponding to the Host: header for disambiguating the CA to be used.

If the -X option was given when creating the tracking request, the Certmonger FreeIPA submit helper uses its value in the --ca option to ipa cert-request. These requests are subject to CA ACLs.

Limitations

It is worth mentioning a few of the limitations of the sub-CAs feature, as it will be delivered in FreeIPA 4.4.

All sub-CAs are signed by the ipa CA; there is no support for "nesting" CAs. This limitation is imposed by FreeIPA – the lightweight CAs feature in Dogtag does not have this limitation. It could be easily lifted in a future release, if there is a demand for it.

There is no support for introducing unrelated CAs into the infrastructure, either by creating a new root CA or by importing an unrelated external CA. Dogtag does not have support for this yet, either, but the lightweight CAs feature was designed so that this would be possible to implement. This is also why all the commands and argument names mention "CA" instead of "Sub-CA". I expect that there will be demand for this feature at some stage in the future.

Currently, the key type and size are fixed at RSA 2048. The same is true in Dogtag, and this is a fairly high priority to address. Similarly, the validity period is fixed, and we will need to address this also, probably by allowing custom CA profiles to be used.

Conclusion

The Sub-CAs feature will round out FreeIPA’s certificate management capabilities, making FreeIPA a more attractive solution for organisations with sophisticated certificate requirements. Multiple security domains can be created for issuing certificates with different purposes or scopes. Administrators have a simple interface for creating and managing CAs, and rules for how those CAs can be used.

There are some limitations which may be addressed in a future release; the ability to control key type/size and CA validity period will be the highest priority among them.

This post examined the use cases and high-level user/administrator experience of sub-CAs. In the next post, I will detail some of the machinery that makes the sub-CAs feature work.

by ftweedal at July 25, 2016 02:32 AM

July 23, 2016

Rich Megginson

How to find build-time vs. run-time dependencies of a gem

Using ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
gem2rpm 0.11.3
gem 2.4.8

I'm trying to convert gems to rpms. Unfortunately, gem2rpm -d does not separate/classify the dependencies. What I really need is a separate list of run-time dependencies. I can get this with gem spec --ruby. For example:
$ gem spec --ruby systemd-journal-1.2.2.gem
# -*- encoding: utf-8 -*-
# stub: systemd-journal 1.2.2 ruby lib

Gem::Specification.new do |s|
  s.name = "systemd-journal"
  s.version = "1.2.2"
...
  if s.respond_to? :specification_version then
    s.specification_version = 4

    if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
      s.add_runtime_dependency(%q<ffi>, ["~> 1.9.0"])
      s.add_development_dependency(%q<rspec>, ["~> 3.1"])
      s.add_development_dependency(%q<simplecov>, ["~> 0.9"])
      s.add_development_dependency(%q<rubocop>, ["~> 0.26"])
      s.add_development_dependency(%q<rake>, ["~> 10.3"])
      s.add_development_dependency(%q<yard>, ["~> 0.8.7"])
      s.add_development_dependency(%q<pry>, ["~> 0.10"])
    else

So I need to add Requires: rubygem(ffi) to the spec.
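If you need to do this for more than one gem, the runtime dependencies can be scraped mechanically. A rough Python sketch follows (the gem file name is the one from the example above, and the regex is keyed to the add_runtime_dependency lines that `gem spec --ruby` prints):

import re
import subprocess

spec = subprocess.run(
    ["gem", "spec", "--ruby", "systemd-journal-1.2.2.gem"],
    capture_output=True, text=True).stdout
# Only the runtime dependencies become RPM Requires lines.
for name in re.findall(r"add_runtime_dependency\(%q<([^>]+)>", spec):
    print("Requires: rubygem(%s)" % name)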

July 23, 2016 02:17 AM

July 21, 2016

Rob Crittenden

novajoin microservice integration

novajoin is a project for Openstack and IPA integration. It is a service that allows instances created in nova to be added to IPA, with a host OTP generated automatically. This OTP is then passed into the instance to be used for enrollment during the cloud-init stage.

The end result is that a new instance will seamlessly be enrolled as an IPA client upon first boot.

Additionally, a class can be associated with an instance using Glance metadata so that IPA automember rules will automatically assign this new host to the appropriate hostgroups. Once that is done you can set up HBAC and sudo rules to grant the appropriate permissions/capabilities for all hosts in that group.

In short it can simplify administration significantly.

In the current iteration, novajoin consists of two pieces: a REST microservice and an AMQP notification listener.

The REST microservice is used to return dynamically generated metadata that will be added to the information that describes a given nova instance. This metadata is available at first boot and this is how novajoin injects the OTP into the instance for use with ipa-client-install. The framework for this change is being implemented in nova in this review: https://review.openstack.org/317739 .

The REST server just handles the metadata; cloud-init does the rest. A cloud-init script is provided which glues the two together. It installs the needed packages, retrieves the metadata, then calls ipa-client-install with the requisite options.
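To make the shape of that exchange concrete, here is a very rough sketch of a vendordata-style handler. This is illustrative only, not novajoin’s actual code: the URL, the request/response field names, and the use of Flask are all assumptions; only the ipa host-add --random behaviour (returning a random OTP) comes from IPA itself.

from flask import Flask, jsonify, request
from ipalib import api

app = Flask(__name__)

# Assumes a Kerberos ticket for a suitably privileged service account.
api.bootstrap(context='cli')
api.finalize()
api.Backend.rpcclient.connect()

@app.route('/v1/', methods=['POST'])
def vendordata():
    fqdn = request.json['hostname']
    # Create the host entry with a random OTP; nova merges the returned
    # dict into the instance metadata, where cloud-init can read it.
    result = api.Command.host_add(fqdn, random=True, force=True)['result']
    return jsonify({'ipaotp': result['randompassword'], 'hostname': fqdn})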

The other server is an AMQP listener that will identify when an IPA-enrolled instance is deleted and remove the host from IPA. It may eventually handle floating IP changes as well, automatically updating IPA DNS entries. The issue here is knowing what hostname to assign to the floating IP.

Glance images can have metadata as well which describes the image, such as OS distribution and version. If these have been set then novajoin will include this in the IPA entry it creates.

The basic flow looks something like this:

  1. Boot instance in nova. Add IPA metadata, specifying ipa_enroll True and optionally ipa_hostclass
  2. Instance boots. During cloud-init it will retrieve metadata
  3. During metadata retrieval ipa host-add is executed, adding the host to IPA and generating an OTP; any available image metadata is added to the entry as well.
  4. The OTP and FQDN are returned in the metadata
  5. Our cloud-init script is called to install the IPA client packages and retrieve the OTP and FQDN
  6. Call ipa-client-install --hostname FQDN --password OTP

This leaves us with an IPA-enrolled client which can have permissions granted via HBAC and sudo rules (like who is allowed to log into this instance, what sudo commands are allowed, etc).

by rcritten at July 21, 2016 06:09 PM

Red Hat Blog

Thinking Through an Identity Management Deployment

As the number of production deployments of Identity Management (IdM) grows and as many more pilots and proof of concepts come into being, it becomes (more and more) important to talk about best practices. Every production deployment needs to deal with things like failover, scalability, and performance.  In turn, there are a few practical questions that need to be answered, namely:

  • How many replicas do I need?
  • How should these replicas be distributed between my datacenters?
  • How should these replicas be connected to each other?

The answer to these questions depends on the specifics of your environment. But before we dive into how to determine the answers to these questions it is important to realise that replicas (for example) N and M can have one replication agreement to replicate main identity data and another replication agreement to replicate certificate information. These two replication channels are completely independent. The reason for this is that the Certificate Authority (CA) component of IdM is optional. If you do not use it then you do not have any certificates to replicate and thus you can skip configuration of the replication topology for your CAs.

IdM is built with a general assumption that the CA component, if used, will be installed on some machines and not on others. However, practice shows that having different images or deployment scripts for different replicas is more overhead than having a single full image with a CA installed on every replica. If you prefer a CA on every replica then you can use the same topology for main and CA related replication agreements. Unfortunately, until recently, there was no tool that would allow you to visualize the layout of your deployment and manage replication agreements in an intuitive fashion. To address this problem the FreeIPA project added a topology management tool that provides a nice graphical view. Take a look at the following demo that was shown at the Identity Management booth at Red Hat Summit (2016).

Another important challenge to consider is that not all replicas are the same – even if they each have the same components installed. The first server that you install becomes the tracker for certificates and keys and is responsible for CRL generation. Only one system in the whole deployment can bear this responsibility. This means that one should:

  • Know which server was deployed first.
  • If something happens to that server – transition its tracking and CRL generation responsibility to some other server.
  • Make sure you know which server is now responsible for these special functions.

In the future we expect the topology user interface to help with this task – but this capability is not yet implemented.

Having covered some of the “groundwork” in terms of replication – we can now jump into a simple list of questions that will help you to determine the best parameters for your deployment.

How many datacenters do you have?

Let’s, for example, imagine that you have three datacenters in different geographies Datacenter A, Datacenter B, and Datacenter C.

How many clients do you have in each datacenter and what operating systems (and versions) do they run?

Let’s use the data in the following table for reference:

Datacenter   Total # of Servers   RHEL 5   RHEL 6   RHEL 7   UNIX   Application(s)
A            10K                  2K       6K       1K       1K     50
B            6K                   1K       3K       2K       –      –
C            7K                   3K       3K       1K       –      30

Clients can also be divided into several buckets by type:

  • Caching clients – clients that use SSSD and cache a lot of information so that they do not need to query the server all the time.
  • Moderate clients – clients that do not use SSSD or some other caching mechanism and query servers on every authentication (but don’t query more information than they actually need).
  • Chatty clients – these are the clients that do a lot of queries and don’t necessarily cache information or care if they request more information than is needed.

Moderate and chatty clients may have a significant impact on your environment but, until you determine that you have such a client, you can assume that you do not have any. If you determine that some clients or applications are chatty – it might make sense to budget an extra replica or two for your datacenter(s).

The recommended clients to server ratio is about 2-3K clients per server, assuming that users authenticate multiple times over the course of the day but not every minute.

Datacenter   Total # of Servers   Caching Clients   Moderate Clients   Chatty Clients   Replicas
A            10K                  9K                1K                 10               5
B            6K                   5K                1K                 0                2
C            7K                   6K                1K                 5                3

For Datacenter A we have about 9K clients that do caching well. That amounts to about 3-4 replicas. Three would be insufficient if there were many users logging in, so we will plan on four replicas. One extra replica should be able to serve the rest of the clients and a number of chatty applications, so five looks like a good number.

For Datacenter B two replicas should be enough. If you see issues with that amount you can add another replica later.

In Datacenter C one would need a couple of replicas for caching clients and at least one for the remaining moderate and chatty clients – a total of three seems like a good number.

The whole deployment amounts to 10 replicas. As of Red Hat Enterprise Linux 7.2 topologies with up to 20 replicas are supported.

So far we have managed to answer the first two questions. The last one – about the topology – can be solved by adhering to the following rules:

  1. Connect a replica to at least two other replicas.
  2. Do not connect a replica to more than four other replicas.

Note that these first two recommendations are not hard requirements. Under some conditions it might make sense to have a single replication agreement or to have five. The maximum of four replication agreements was established as a way to prevent replication overhead from causing performance issues on a node and degrading its ability to serve clients.

  3. Connect datacenters with each other so that a datacenter is connected to at least a couple of other datacenters.
  4. Connect datacenters with at least a pair of replication agreements.
  5. Have at least two servers per datacenter.

In following these rules it is quite easy to create a topology that resembles the following:

image_one

As one can see the topology meets all of the above listed guidelines.
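For larger deployments it can help to sanity-check a planned topology against these guidelines programmatically. The following small sketch checks the per-replica and per-datacenter rules; the data structures and example topology are made up purely for illustration.

from collections import Counter

def check_topology(agreements, datacenter_of):
    """agreements: iterable of (replica_a, replica_b) pairs;
    datacenter_of: dict mapping replica name -> datacenter name."""
    per_replica = Counter()
    dc_links = Counter()
    for a, b in agreements:
        per_replica[a] += 1
        per_replica[b] += 1
        if datacenter_of[a] != datacenter_of[b]:
            dc_links[tuple(sorted((datacenter_of[a], datacenter_of[b])))] += 1
    problems = []
    for replica in datacenter_of:
        if per_replica[replica] < 2:
            problems.append("%s has fewer than 2 agreements" % replica)
        elif per_replica[replica] > 4:
            problems.append("%s has more than 4 agreements" % replica)
    for dc, count in Counter(datacenter_of.values()).items():
        if count < 2:
            problems.append("%s has fewer than 2 replicas" % dc)
    for pair, count in dc_links.items():
        if count < 2:
            problems.append("%s-%s linked by only one agreement" % pair)
    return problems

# Example: two replicas in each of two datacenters, cross-linked twice.
print(check_topology(
    [("a1", "a2"), ("b1", "b2"), ("a1", "b1"), ("a2", "b2")],
    {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}))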

In general, if one has datacenters of a similar size, the topology per datacenter can be the same. In fact, it might make it easier to start with the following diagram and add or remove replicas on an as needed basis.

image_two

As always – your comments, experiences, and feedback are welcome.

by Dmitri Pal at July 21, 2016 03:25 PM

July 19, 2016

Ben Lipton

Thinking about templating for automatic CSR generation

Background

I am working on a project (ticket, design) to simplify creating certificates in FreeIPA. Currently administrators must generate a Certificate Signing Request (CSR) matching the type of certificate they wish to issue. They submit this CSR to FreeIPA using the ipa cert-request command, and if all the specified fields match the data FreeIPA has about the certificate subject, a cert will be issued. This seems a bit silly; if FreeIPA has this information already, can’t it automatically generate a CSR with the correct data?

However, different certificate applications require different data, so the Certificate Profile (a concept from the Dogtag CA that specifies the fields in the cert, constraints on their values, and how the final values should be constructed) needs to contain information on how to transform the data in FreeIPA into the fields of the certificate. Further, different administrators may want to use different tools to manage their private keys, so we must be able to communicate these certificate field values back in formats understood by different utilities such as openssl and certutil. Those tools will be responsible for generating the actual CSR from the provided configuration.

As suggested in the Mapping Rules design, the first implementation of this system used python to implement the low-level formatting rules, such as return the user’s email address, prefixed by the string ‘email:’. However, it is a goal of this project to allow new rules to be added at runtime, so these rules must be text-based rather than part of the code. This post will try to imagine what the rules would look like if implemented using the Jinja2 templating language.
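For example, one of those python-based low-level rules might have looked something like this (illustrative only, not the actual FreeIPA code; the attribute name is an assumption):

def email_san_rule(subject):
    # "Return the user's email address, prefixed by the string 'email:'"
    return 'email:' + subject['mail'][0]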

Requirements

We must at a minimum be able to generate two different types of configuration, the openssl config file:

[ req ]
prompt = no
encrypt_key = no

distinguished_name = dn
req_extensions = exts

[ dn ]
O=DOMAIN.EXAMPLE.COM
CN=user

[ exts ]
subjectAltName=@SAN

[ SAN ]
email=user@example.com
dirName=SANdn

[ SANdn ]
1.DC=com
2.DC=example
CN=users
UID=user

and the certutil command line:

certutil -R -a -s "CN=user,O=DOMAIN.EXAMPLE.COM" --extSAN "email:user@example.com,dn:UID=user;CN=users;DC=example;DC=com"

Some interesting things to note about these formats:

  • The contents of an extension can be constructed from multiple sources, such as an email address and a distinguished name.
  • The openssl format is hierarchical. Some parameters, such as req_extensions and dirName always refer to the name of a new config section. Others can optionally refer to a config section using an @.
  • In openssl, the certificate subject is created under the [req] section, while extensions are created under their own section.
  • Openssl has a quirky way of denoting distinguished names. They are ordered from least to most specific (opposite how LDAP presents them). And if two AVAs have the same attribute type, they must be prefixed with different strings ending in . (or : or ,) as the config file format will otherwise discard all but one.
  • Certutil is also a bit quirky about distinguished names in the Subject Alt Name extension. Because the argument to the extSAN flag is comma-delimited, the components of the DN must be separated using a different delimiter, like a semicolon.

Implementations

Two-pass data interpolation

((user data -> data rules) -> syntax rules) -> output

One way we can approach constructing one extension from multiple sources is to use two sets of rules - one rule for each data item that provides a value for the extension, and one rule specifying the name and syntax of the extension as a whole. We evaluate the data rules first, then feed the values produced into the associated syntax rules to get the final output for that extension. Finally, the extension output is passed to the formatter, to produce the final output. We wish to express the data and syntax rules using the templating language, but the formatters (one for each CSR generation tool) will be implemented as python classes.

So how do we represent openssl sections in this scheme? The formatter needs to accept input in a (very limited) markup language, which defines where the sections are, what goes into them, and perhaps whether a given line should be placed under [req] or [exts]. Even with the features of the formatter markup very limited, it would still be possible for a user to accidentally or intentionally inject some markup that would make it impossible to generate a certificate for them. So, some kind of escaping is also needed, but it would be jinja2 template markup escaping, not the HTML escaping that jinja2 already supports.

Example data rules:

email={{subject.email}}
O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}

Example syntax rules:

--extSAN {{values|join(',')}}
subjectAltName=@{{'{% section %}'}}{{values|join('\n')}}{{'{% endsection %}'}}

That’s a lot of braces! We have to escape the section and endsection tag sequences so they will appear verbatim in the final template, producing something like:

subjectAltName=@{% section %}email={{subject.email}}
URI={{subject.inetuserhttpurl}}{% endsection %}

If we used a different type of markup for the user data interpolation and for denoting sections, the escaping would not be necessary; however, we would still need to preprocess the values to escape any jinja2 markup that comes from the user data, and we would still have two types of markup being used in parallel.

Note, too, that the section tag does not exist yet in jinja2; it would need to be implemented as an extension.

Two-pass template interpolation

(user data -> (data rules -> syntax rules)) -> output

Alternatively, we can do the substitution on the templates themselves before interpolating user data, building up one big template that we then render with the data from the database. This is safer because the user-specified data never gets interpreted as a template, so we don’t have to worry about escaping the user data or limiting the features of the template language. On the other hand, this may be challenging for the rule writer, because one must keep in mind the number of times a given rule will be run through the templating engine to get the escaping correct. Data rules will be used as templates only once (consuming user data) but syntax rules will be used as templates once to incorporate the data rules into the templates, and then again when the user data is included. Thus, any template tags relating to the processing of user data (such as, I imagine, ones for specifying openssl sections) need to be escaped.

Surprisingly, this hardly changes the way the rules are written! All of the example rules given above would still be valid, but the values would be the data rules themselves rather than data rules with interpolated user data. And of course, the values would not be escaped beforehand.
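A minimal sketch of the two passes using jinja2 follows; the rule strings mirror the certutil examples in this post, while the subject object and its values are simplified assumptions.

import jinja2

data_rules = ['email:{{subject.email}}', 'uri:{{subject.inetuserhttpurl}}']
syntax_rule = "--extSAN {{values|join(',')}}"

# Pass 1: interpolate the data rules (still containing their own template
# markup) into the syntax rule, producing one combined template.
combined = jinja2.Template(syntax_rule).render(values=data_rules)
# combined == "--extSAN email:{{subject.email}},uri:{{subject.inetuserhttpurl}}"

# Pass 2: render the combined template with the user's data.
class Subject:
    email = 'alice@ipa.local'
    inetuserhttpurl = 'https://alice.ipa.local'

print(jinja2.Template(combined).render(subject=Subject()))
# --extSAN email:alice@ipa.local,uri:https://alice.ipa.local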

Template-based hierarchical rules

user data -> collected rules -> output

One way to get away from escaping and multiple evaluations is to redesign the template so that the order of its elements no longer matters. That is, the hierarchical relationships between data items, certificate extensions, and the CSR as a whole could be encoded using jinja2 tags. It’s probably easiest to explain this idea with an example:

{% group req %}
{% entry req %}extensions={% group exts %}{% endentry %}
{% entry req %}distinguished_name={% group subjectDN %}{% endentry %}
{% entry subjectDN %}O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}{% endentry %}
{% entry exts %}subjectAltName=@{% group SAN %}{% endentry %}
{% entry SAN %}email={{subject.email}}{% endentry %}
{% entry SAN %}URI={{subject.inetuserhttpurl}}{% endentry %}

The config for certutil would be quite similar:

certutil -R -a {% group opts %}
{% entry opts %}-s {% group subjectDN %}{% endentry %}
{% entry opts %}--extSAN {% group SAN %}{% endentry %}
{% entry subjectDN %}CN={{subject.username}},O={{config.ipacertificatesubjectbase}}{% endentry %}
{% entry SAN %}email:{{subject.email}}{% endentry %}
{% entry SAN %}uri:{{subject.inetuserhttpurl}}{% endentry %}

Each CSR generation helper would have its own notion of “groups,” which would be implemented as jinja2 extensions. The entries of a group would be collected and inserted into the group in whatever way was appropriate for that helper. Each line of these templates would be either a cert mapping rule referenced in the cert profile, or something built into the formatter for the CSR generation helper. There would be no distinction between data rules and syntax rules, and the order that rules appeared in the template would be irrelevant because the tags specified the hierarchy.

This approach has some downsides, too:

  1. Section names are now specified in the rules, which means there could be conflicts between different rules, and that a rule can only ever be included in a particular section. If two sections need the same data, two different rules are needed.
  2. Some types of groups are formatted differently from others (e.g. in certutil, opts is space-separated, while SAN is comma-separated). It’s not entirely clear where this should be encoded, and how.

Concern #1 is probably an ok tradeoff, as it’s not clear how broadly reusable rules will be anyway. However, #2 would need to be addressed in any actual implementation.

Formatter-based hierarchical rules

user data -> low-level rule -> formatting code -> group objects
group objects -> higher-level rule -> formatting code -> group objects
...
group objects -> top-level rule -> output

Instead of linking rules together into a hierarchy using tags, leaving it to the templating engine to interpret that structure, we could encode the structure in the rule entities themselves and use multiple evaluations to handle the hierarchy in the formatter, before the data even gets to the templating engine. Each rule would be stored with the name of the group within which it should be rendered, as well as the names of any groups that the rule includes. For example, to adapt the rule {% entry exts %}subjectAltName=@{% group SAN %}{% endentry %} to this schema, we would say that it is an element of the “exts” group, and provides the “SAN” group. By linking up group elements to group providers, we construct a tree of rules.

The formatter would evaluate these rules beginning at the leaves and passing the results of child nodes into variables in the parent node templates. The formatter is responsible for determining what exactly gets passed into the parent node, such as an object representing an openssl config section, or just a list of formatted strings. Parent nodes decide how to present the passed objects, such as by comma-separating the strings or referencing the name of the section. This addresses concern #2 from the previous approach, because the tools of the jinja2 language are now available for expressing how to format the results of groups of rules.

Example leaf rules:

group: SAN
template: email={{subject.email}}
group: subjectDN
template: O={{config.ipacertificatesubjectbase}}\nCN={{subject.username}}

Example parent rules:

group: opts
groupProvided: SAN
template: --extSAN {{ SAN|join(',') }}
group: exts
groupProvided: SAN
template: subjectAltName=@{{ SAN.section_name }}
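Putting the pieces together, a rough sketch of the bottom-up evaluation the formatter would perform is shown below. The in-memory rule representation and user data are assumptions for illustration; the real feature would store rules elsewhere and the formatter would do considerably more (sections, nesting, and so on).

import jinja2

leaf_rules = [
    {'group': 'SAN', 'template': 'email:{{subject.email}}'},
    {'group': 'SAN', 'template': 'uri:{{subject.inetuserhttpurl}}'},
]
parent_rule = {'group': 'opts', 'provides': 'SAN',
               'template': "--extSAN {{ SAN|join(',') }}"}

class Subject:
    email = 'alice@ipa.local'
    inetuserhttpurl = 'https://alice.ipa.local'

# Evaluate leaves first, collecting the rendered strings per group...
rendered = {}
for rule in leaf_rules:
    rendered.setdefault(rule['group'], []).append(
        jinja2.Template(rule['template']).render(subject=Subject()))

# ...then pass each group's results into the parent rule that provides it.
print(jinja2.Template(parent_rule['template']).render(
    **{parent_rule['provides']: rendered[parent_rule['provides']]}))
# --extSAN email:alice@ipa.local,uri:https://alice.ipa.local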

This has several advantages over the two-pass interpolation approaches:

  1. Profiles are simpler to configure, because they just contain a list of references to rules rather than a structured list of groups of rules.
  2. Profiles are also simpler to implement, with no sub-objects in the database.
  3. It’s no longer necessary to pay attention to escaping when writing rules. Each rule is used as a template exactly once, and complex structures are handled by the formatter code rather than template tags so tags don’t need to be passed along.
  4. User data is never used as a template, which reduces the attack surface.

However, there are also some potential concerns:

  1. Whether the openssl and certutil hierarchies for rules are compatible (i.e. can the parent group can be listed in the mapping rule or must it be in the transformation rule?)
  2. Are there any instances where something needs to be a group but can’t be its own openssl section? How would we convey this to the openssl formatter?
  3. Conversely, are there cases where we would want to be able to create a section without creating a new rule? For example, a DN in a subject alternative name needs to be its own section. Do we then need rules just for filling out parts of that DN?

Conclusions

Although hierarchical rules seem like an interesting solution to avoid escaping and simplify the configuration in the cert profile itself, I think the interpolation approaches are easier to understand and explain, which is valuable for this already unexpectedly-complex feature.

Even though it is a little counter-intuitive, I lean towards the template interpolation solution rather than the straightforward data interpolation one because it doesn’t incorporate user data until the last step. This would make it incompatible with the existing python-based rules, but those are going to be replaced anyway, and in fact they may be vulnerable to injection attacks as well. Escaping of tags that are to be interpreted by the formatter will still be inconvenient, but we may be able to provide extensions to the template language to make that easier.

If you are interested in discussing any of these options, feel free to email me directly at the address below, or share your thoughts with the freeipa-devel mailing list. Thanks!

July 19, 2016 12:00 AM

July 13, 2016

Red Hat Blog

I Really Can’t Rename My Hosts!

Hello again! In this post I will be sharing some ideas about what you can do to solve a complex identity management challenge.

As the adoption of Identity Management (IdM) grows, and especially in the case of heterogeneous environments where some systems are running Linux and user accounts are in Active Directory (AD), the question of renaming hosts becomes more and more relevant. Here is a set of requirements that we often hear from customers:

  1. I want to be able to access my Linux hosts with credentials stored in Active Directory.
  2. I want to be able to centrally manage access control to my Linux hosts for user accounts stored in Active Directory.
  3. I want to be able to centrally manage privilege escalation (sudo) for user accounts stored in Active Directory.
  4. I want to be able to control automount maps for my Linux systems centrally.
  5. I want to be able to jump between my Linux hosts without requiring to enter passwords all the time (SSO).
  6. I do not want to rename my Linux hosts; they are currently a part of Active Directory DNS domain. There are business critical applications running on them… and (thus) I really can’t rename them.
  7. I want the solution to be cost effective so that I do not have to pay extra for the integration of Linux systems into my Active Directory environment.

Before we move forward it is important to clarify terminology. When we talk about single-sign-on (SSO) we are talking about the ability for a user to authenticate once and then access different systems and resources without being challenged for authentication again. This is not the same as having a single account. In fact, all of the solutions discussed in this post assume that there is a single user account and that it is stored inside Active Directory. But this is not yet SSO. SSO is achieved when the user is challenged to provide a password once, usually during login to his or her workstation, and is then able to access other systems without being prompted to enter that password again. Also, when we talk about SSO inside the enterprise, the technology that provides such a capability is called Kerberos. It is implemented on both the Windows and Linux sides.

Now that we’ve clarified the SSO terminology we can look at how the above listed requirements can be met.

The following diagram shows the current state:

image_one

Let us drill down – exploring different options – to find out how these requirements can be met.

Option 1 – Use 3rd Party Software

image_two

This solution satisfies nearly all of the above listed requirements… the sole exception being cost effectiveness. It also puts everything – including the ability to manage Linux systems – into Active Directory. Sometimes this is desirable, sometimes it is not. For more information on the use of 3rd party software see one of my other articles. The costs associated with such a solution usually generate an interest in exploring additional options.  Let’s continue onward…

Option 2 – Use Direct Integration

I’ve written about direct integration in several of my previous blog posts. The main limitation with direct integration is that while access control can be centrally managed using the basic GPO support available in SSSD, policies like sudo or automount are unmanaged. This fails to meet requirements #3 and #4.

image_three

Option 3 – Use Indirect Integration with IdM

An IdM-based solution provides a lot of benefits, as has been mentioned in other sections of my blog; however, in this specific case, a problem arises with the hostnames due to the SSO requirement (i.e. requirement #5). To be able to leverage SSO between the hosts with Kerberos, the hosts have to be put into a DNS domain managed by IdM rather than one controlled by Active Directory (i.e. they would need to be renamed).

image_four

If the hosts (really) can’t be renamed, the Kerberos-based SSO approach will not work because IdM hosts sitting in an AD DNS domain will confuse clients. The clients will request Kerberos tickets for the IdM hosts from AD instead of from IdM, and AD will fail to resolve the Kerberos principals since these hosts are joined to IdM and have Kerberos principals from the IdM realm.

image_five

This problem is described in more detail in this document.

Deadlock? Not necessarily. There are couple options that can be explored here.

Option 3a – Use Indirect Integration with IdM and Exclude Hosts

Active Directory allows specifying external hosts. This means that if you have a small number of hosts that can’t be renamed there is a way to explain to AD that these hosts are really from a different domain. With this setting Active Directory would know to rely on an external domain controller (in this case IdM) to resolve these names.

image_six

This, however, would only work when the number of such hosts is really small. Dozens of hosts would start to take a toll on Active Directory performance (according to specialists) and this is probably the last thing you want to accomplish.

Option 3b – Use Indirect Integration with IdM with SSH SSO

Another approach would be to complement the Kerberos-based authentication (or even completely replace it) with SSH-based SSO. The following two diagrams show how this can be accomplished.

image_seven

Linux hosts will be joined to IdM but will not use Kerberos for SSO. This would allow them to preserve their names. To meet the requirement not to challenge users again with username and password after the initial authentication – SSH keys could be issued to AD users. Users coming from their Windows workstations would use Kerberos SSO to access a jump host and would then be able to SSH to other systems using SSH key authentication. IdM provides centralized user and host SSH public key management – making such a deployment quite simple.
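Uploading a user’s public key is a one-liner with the CLI (ipa user-mod --sshpubkey=...); programmatically it might look roughly like the sketch below. The username, key path, and the ipasshpubkey parameter name (mirroring the CLI option) are assumptions for illustration.

from ipalib import api

api.bootstrap(context='cli')
api.finalize()
api.Backend.rpcclient.connect()   # uses the IdM admin's Kerberos ticket

with open('/home/alice/.ssh/id_rsa.pub') as f:
    pubkey = f.read().strip()

# Store the public key on the user entry; SSSD on the IdM-enrolled hosts
# then makes it available to sshd automatically.
api.Command.user_mod(u'alice', ipasshpubkey=[pubkey])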

Alternatively, Kerberos SSO can be abandoned for those hosts (it will still work fine for other hosts in the IdM domain and services running on those hosts) and SSH key based authentication can be implemented all the way through.

image_eight

It is important to note that SSH key authentication is not formally “SSO”. It is a key-based authentication tactic. It uses a key pair: a private key generated by the SSH tools and stored on the user’s workstation, and a public key that can be uploaded into IdM, which then makes it available automatically to all managed hosts on an as-needed basis. Though (again) it is not exactly “SSO”, it does allow us to avoid prompting a user for their password when he or she accesses the host in question. With this in mind – many find that the SSO requirement can either be reformulated or perhaps removed entirely.

Nevertheless, here is an outline of the steps that would need to be taken to get to the situation where all of the requirements are met:

  • Install IdM
  • Establish trust with Active Directory
  • Connect the hosts without renaming to IdM
  • Optionally create a jump host in the IdM domain
  • Configure access control, automount, and privilege escalation policies (as needed)
  • Generate SSH keys (for workstation users) and share public keys with the IdM administrator so that he or she can upload them into IdM
  • Make any / all workstations use SSH with keys directly or via the jump host

Then… success! All of the requirements have been met.

I do hope that you will find this article to be useful.

As always, we are interested in your feedback, questions, and stories – do reach out using the comments section (below).

by Dmitri Pal at July 13, 2016 11:54 PM

July 08, 2016

Adam Young

Merging FreeIPA and Tripleo Undercloud Apache installs

My experiment yesterday left me with a broken IPA install. I aim to fix that.

To get to the start state:

From my laptop, kick off a Tripleo Quickstart, stopping prior to undercloud deployment:

./quickstart.sh --teardown all -t  untagged,provision,environment,undercloud-scripts  ayoung-dell-t1700.test

SSH in to the machine …

ssh -F /home/ayoung/.quickstart/ssh.config.ansible undercloud

and set up FreeIPA;

$ cat install-ipa.sh

#!/usr/bin/bash

sudo hostnamectl set-hostname --static undercloud.ayoung-dell-t1700.test
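# Grab the primary IPv4 address on eth0 and map it to the new FQDN in /etc/hosts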
export address=`ip -4 addr  show eth0 primary | awk '/inet/ {sub ("/24" ,"" , $2) ; print $2}'`
echo $address `hostname` | sudo tee -a /etc/hosts
sudo yum -y install ipa-server-dns
export P=FreIPA4All
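# Unattended install: the realm is the uppercased DNS domain of the host, integrated
# DNS is enabled, and the current resolvers from /etc/resolv.conf become forwarders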
sudo ipa-server-install -U -r `hostname -d|tr "[a-z]" "[A-Z]"` -p $P -a $P --setup-dns `awk '/^name/ {print "--forwarder",$2}' /etc/resolv.conf`

Backup the HTTPD config directory:

 sudo cp -a /etc/httpd/ /root

Now go continue the undercloud install

./undercloud-install.sh 

Once that is done, the undercloud passes a sanity check. Doing a diff between the two directories shows a lot of differences.

sudo diff -r /root/httpd  /etc/httpd/

All of the files in /etc/httpd/conf.d that were placed by the IPA install are gone, as are the following module files in /root/httpd/conf.modules.d

Only in /root/httpd/conf.modules.d: 00-base.conf
Only in /root/httpd/conf.modules.d: 00-dav.conf
Only in /root/httpd/conf.modules.d: 00-lua.conf
Only in /root/httpd/conf.modules.d: 00-mpm.conf
Only in /root/httpd/conf.modules.d: 00-proxy.conf
Only in /root/httpd/conf.modules.d: 00-systemd.conf
Only in /root/httpd/conf.modules.d: 01-cgi.conf
Only in /root/httpd/conf.modules.d: 10-auth_gssapi.conf
Only in /root/httpd/conf.modules.d: 10-nss.conf
Only in /root/httpd/conf.modules.d: 10-wsgi.conf

To start, I am going to back up the existing HTTPD directory:

 sudo cp -a /etc/httpd/ /home/stack/

The rest of this is easier to do as root, as I want some globbing. First, I’ll copy over the module config files

 sudo su
 cp /root/httpd/conf.modules.d/* /etc/httpd/conf.modules.d/
 systemctl restart httpd.service

Test Keystone

 . ./stackrc 
 openstack token issue

Get a token… good to go. OK, let’s try the conf.d files.

sudo cp /root/httpd/conf.d/* /etc/httpd/conf.d/
systemctl restart httpd.service

Then, test IPA as a regular (non-root) user:

$ kinit admin
Password for admin@AYOUNG-DELL-T1700.TEST: 
[stack@undercloud ~]$ ipa user-find
--------------
1 user matched
--------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 776400000
  GID: 776400000
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 1
----------------------------

This is a fragile deployment, as updating either FreeIPA or the Undercloud has the potential to break one or the other…or both. But it is a start.

by Adam Young at July 08, 2016 07:29 PM

De-conflicting Swift-Proxy with FreeIPA

Port 8080 is a popular port. Tomcat uses it as the default port for unencrypted traffic. FreeIPA installs Dogtag, which runs in Tomcat. Swift proxy also chose that port number for its traffic. This means that if one is run on that port, the other cannot be. Of the two, it is easier to change FreeIPA, as the port is only used for internal traffic, whereas Swift’s port is in the service catalog and the documentation.

Changing the port in FreeIPA requires modifications in both the config directories for Dogtag and the Python code that contacts it.

The Python changes are in

/usr/lib/python2.7/site-packages/ipaplatform/base/services.py
/usr/lib/python2.7/site-packages/ipapython/dogtag.py

Look for any instances of 8080 and change them to another port that will not conflict; I chose 8181.

The config changes for Dogtag are under /etc/pki, for example /etc/pki/pki-tomcat/ca/CS.cfg; again, change 8080 to 8181.
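If you would rather script the edits than hunt for the occurrences by hand, something like the following sketch would do it. Run it as root; the file list is the one from this post, the replacement is a deliberately blunt string swap, and backups are written alongside the originals.

import fileinput

files = [
    "/usr/lib/python2.7/site-packages/ipaplatform/base/services.py",
    "/usr/lib/python2.7/site-packages/ipapython/dogtag.py",
    "/etc/pki/pki-tomcat/ca/CS.cfg",
]
# Rewrite each file in place, keeping a .orig backup; review the diff after.
for line in fileinput.input(files, inplace=True, backup=".orig"):
    print(line.replace("8080", "8181"), end="")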

Restart the server with:

sudo systemctl restart ipa.service

To confirm, run a command that hits the CA:

 ipa cert-find

I have a ticket open with FreeIPA to try to get support for this upstream.

With these changes made, I tested out then installing the undercloud on the same node and it seems to work.

However, the IPA server is no longer running. The undercloud install seems to have cleared out the IPA config files from under /etc/httpd/conf.d. Dogtag, though, is still running, as shown by

curl localhost:8181

Next experiment will be to see if I can preserve the IPA configuration

by Adam Young at July 08, 2016 04:30 AM

June 30, 2016

Rob Crittenden

Nova join (take 2)

Rich Megginson started a project in the Openstack Nova service to enable automatic IPA enrollment when an instance is created. I extended this to add support for metadata and pushed it into github as novajoin, https://github.com/rcritten/novajoin

This used the hooks mechanism within nova that allows one to extend certain operations (add, delete, networking, etc.). Unfortunately this was not well documented, nor apparently well-used, and the nova team wasn’t too keen on allowing full access to all nova internals, so they killed it.

The successor is an extension of the metadata plugin system, vendordata: https://review.openstack.org/#/c/317739/

The idea is to allow one to inject custom metadata dynamically over a REST call.

IPA will provide a vendordata REST service that will create a host on demand and return the OTP for that host in the metadata. Enrollment will continue to happen via a cloud-init script which fetches the metadata to get the OTP.

A separate service will listen on notifications to capture host delete events.

I’m still working on networking, as there isn’t a clear rule for which IP should be associated with a given hostname, and when. In other words, there is still a lot of handwaving going on.

I haven’t pushed the new source yet but I’m going to use the same project after I tag the current bits. There is no point continuing development of the hooks-based approach since nova will kill it after the Newton release.

by rcritten at June 30, 2016 05:41 PM
